# Ollama
This service deploys Ollama for running local LLM models.
## Usage
- Pull the DeepSeek R1 7B model:

  ```sh
  docker exec -it ollama ollama pull deepseek-r1:7b
  ```

- List all local models:

  ```sh
  docker exec -it ollama ollama list
  ```

- Get all local models via the API:

  ```sh
  curl http://localhost:11434/api/tags 2> /dev/null | jq
  ```
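Once a model has been pulled, it can be queried through the same HTTP API. A minimal example using the `/api/generate` endpoint; the prompt and the use of `jq` to extract the `response` field are only illustrative:

```sh
# Ask the locally pulled model a question and print only the generated text
curl http://localhost:11434/api/generate \
  -d '{"model": "deepseek-r1:7b", "prompt": "Why is the sky blue?", "stream": false}' \
  2> /dev/null | jq -r .response
```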
## Services

- `ollama`: The Ollama service.
## Configuration
- `OLLAMA_VERSION`: The version of the Ollama image, default is `0.12.0`.
- `OLLAMA_PORT_OVERRIDE`: The host port for Ollama, default is `11434`.
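Assuming Docker Compose reads these variables from the shell environment or a `.env` file (typical for this kind of setup), they can be overridden at startup. The values below are examples only:

```sh
# Hypothetical override: run a different image version on an alternative host port
OLLAMA_VERSION=0.12.3 OLLAMA_PORT_OVERRIDE=11435 docker compose up -d ollama
```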
## Volumes

- `ollama_models`: A volume for storing Ollama models.
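For reference, a minimal `docker-compose.yml` sketch that ties the service, configuration variables, and volume together. The service name, variable names, defaults, and volume name come from the sections above; the official `ollama/ollama` image stores models under `/root/.ollama`. The actual compose file in this repository may differ:

```yaml
services:
  ollama:
    image: ollama/ollama:${OLLAMA_VERSION:-0.12.0}
    container_name: ollama
    ports:
      - "${OLLAMA_PORT_OVERRIDE:-11434}:11434"
    volumes:
      - ollama_models:/root/.ollama

volumes:
  ollama_models:
```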