Generate commit messages from your staged changes using a local LLM via Ollama. No API keys, no cloud — everything runs on your machine.
commit-msg-ai requires Ollama to run language models locally. Install it first:
macOS:

```shell
brew install ollama
```

Linux:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

Windows: Download the installer from ollama.com/download.
Once installed, start the Ollama server:
```shell
ollama serve
```

On macOS, Ollama runs automatically in the background after installation. You can skip this step if you see the Ollama icon in your menu bar.
You need at least one model downloaded. See what's available on your machine:
```shell
ollama list
```

If the list is empty, pull a model. Some good options for commit message generation:
```shell
# Lightweight and fast (~2GB)
ollama pull llama3.2

# Good for code understanding (~4.7GB)
ollama pull qwen2.5-coder

# Small and capable (~2.3GB)
ollama pull mistral
```

You can browse all available models at ollama.com/library.
```shell
pip install commit-msg-ai
```

By default, commit-msg-ai uses llama3.2. If you pulled a different model, set it as the default:

```shell
commit-msg-ai config model qwen2.5-coder
```

Verify your config:

```shell
commit-msg-ai config
```

Stage your changes and run the tool:

```shell
git add .
commit-msg-ai
```

```
Staged files:
  M src/auth.py
  A src/middleware.py

Generating commit message with qwen2.5-coder...
──────────────────────────────────────────────────
feat: add JWT authentication middleware
──────────────────────────────────────────────────
Commit with this message? [Y/n] y
[main 3a1b2c3] feat: add JWT authentication middleware
2 files changed, 45 insertions(+), 3 deletions(-)
```
That's it.
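If you run this often, a git alias can collapse the flow into a single subcommand. This isn't a feature of commit-msg-ai itself, and the alias name `ai` here is arbitrary:

```shell
# Run commit-msg-ai as a git subcommand (the "!" tells git to run a shell
# command; the alias name "ai" is just an example).
git config --global alias.ai '!commit-msg-ai'

# The flow then becomes:
#   git add .
#   git ai
```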
commit-msg-ai stores config in ~/.config/commit-msg-ai/config.json.
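The file's exact schema isn't documented beyond the keys this README uses, but given the `model` and `url` settings it exposes, you'd expect something like (values here are illustrative; `http://localhost:11434` is Ollama's default address):

```json
{
  "model": "qwen2.5-coder",
  "url": "http://localhost:11434"
}
```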
```shell
# Set default model
commit-msg-ai config model mistral

# Set Ollama server URL (useful for remote setups)
commit-msg-ai config url http://192.168.1.50:11434

# View all config
commit-msg-ai config

# View a single value
commit-msg-ai config model
```

Override any config for a single run with flags:
```shell
commit-msg-ai --model codellama
commit-msg-ai --url http://other-server:11434
```

commit-msg-ai generates messages with only three prefixes:

- `feat:` for new features
- `fix:` for bug fixes
- `bc:` for breaking changes
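Because every generated message starts with one of these three prefixes, the same convention is easy to check mechanically, e.g. in a `commit-msg` hook for hand-written commits. A minimal sketch (no such hook ships with commit-msg-ai; the pattern just mirrors the prefixes above):

```shell
# Check whether a commit message's first line uses one of the three
# prefixes. The message is the example from the transcript above.
msg="feat: add JWT authentication middleware"
if printf '%s\n' "$msg" | grep -qE '^(feat|fix|bc): '; then
  echo "prefix ok"
else
  echo "unknown prefix"
fi
```

In a real `commit-msg` hook you would read the first line of the message file git passes as `$1` and exit non-zero on a mismatch.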
- Python 3.9+
- Ollama running locally (or on a reachable server)
- At least one model pulled (`ollama pull llama3.2`)