This is a simple Docker Compose setup for running aider with local LLM instances via the Ollama framework.
Both components already provide official Docker images. This setup makes handling them in parallel a tiny bit more convenient, mainly by putting command line arguments into a configuration file and modifying the entrypoints and commands of the original images.
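To illustrate the idea, a compose file for this kind of setup might look roughly like the sketch below. The service names `ao-llm` and `ao-aider` come from the commands later in this README; the image tags, mount targets, port, and environment variable are assumptions and may differ from the actual file in this repository:

```yaml
# docker-compose.yml — illustrative sketch only, not the file shipped in this repo
services:
  ao-llm:
    image: ollama/ollama          # official Ollama image
    container_name: ao-llm
    volumes:
      - llm:/root/.ollama         # named volume that persists pulled models (see Volumes)

  ao-aider:
    image: paulgauthier/aider     # official aider image
    container_name: ao-aider
    volumes:
      - ${PROJECT_PATH}:/app      # mount target /app is an assumption
    environment:
      # aider reads the Ollama endpoint from OLLAMA_API_BASE
      - OLLAMA_API_BASE=http://ao-llm:11434
    depends_on:
      - ao-llm

volumes:
  llm:
```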
- Clone this repository.
- Copy the `.env.example` file to `.env` and adjust the values to your needs. `MODEL` takes a string with a model name that works with Ollama, e.g., `deepseek-coder-v2` or `llama3.1:8b` (the parameter size, if available, is part of this string). `PROJECT_PATH` should be the absolute path to the project folder you want to use with aider.
- Run `docker compose up -d` to start the application.
- To pull a model, run `docker exec -it ao-llm ollama pull { model name }`. You only need to do this once per model you are using (see Volumes below).
- To start aider, run `docker exec -it ao-aider aider`.
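For reference, a filled-in `.env` could look like this. The variable names come from `.env.example`; the values are examples only, and the project path is a hypothetical placeholder:

```
# .env — example values, adjust to your machine
MODEL=llama3.1:8b
PROJECT_PATH=/home/user/projects/my-project
```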
Heads up: with this setup, you always need to start the aider/Ollama application from its own local repository, not from the project repository (unlike the freestanding aider utility container). aider mounts the folder you specify in the `PROJECT_PATH` environment variable as the application folder.
When pulling a model for the first time, the `ao-llm` service will create a volume named `llm` where all Ollama models are stored.
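Because the models live in a named volume, they survive `docker compose down` and only need to be pulled once. If you want to check on the volume or reclaim disk space, standard Docker commands work; note that Compose usually prefixes the volume name with the project directory name, so it may appear as something like `<project>_llm` (an assumption — check `docker volume ls` for the actual name):

```
# List volumes to find the actual name
docker volume ls

# Stop the services, then remove the volume to free disk space
# (models must be pulled again afterwards)
docker compose down
docker volume rm <project>_llm
```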
Adjust this setup according to your needs.
In case you haven't done so yet, read the aider and Ollama docs.