Generate a commit message from your staged changes using OpenAI, Google Gemini, Ollama, or llama.cpp.
- Python 3.13+
- A Git repo with staged changes (`git add ...`), or use `--amend` even if nothing is staged
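As a quick sanity check of the version floor, a minimal sketch (not part of the tool) that tests the running interpreter against the 3.13+ requirement:

```python
import sys

def meets_requirement(version_info) -> bool:
    """Return True when the interpreter satisfies the Python 3.13+ floor."""
    return tuple(version_info[:2]) >= (3, 13)

# Report on the interpreter currently running this script.
print("Python OK" if meets_requirement(sys.version_info) else "Python 3.13+ required")
```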
Install the latest released version from PyPI:

```shell
# User environment (recommended)
python -m pip install --user git-commit-message

# Or system/virtualenv as appropriate
python -m pip install git-commit-message

# Or with pipx for isolated CLI installs
pipx install git-commit-message

# Upgrade to the newest version
python -m pip install --upgrade git-commit-message
```

Quick check:

```shell
git-commit-message --help
```

OpenAI: set your API key:

```shell
export OPENAI_API_KEY="sk-..."
```

Google Gemini: set your API key:

```shell
export GOOGLE_API_KEY="..."
```

Ollama:

- Install Ollama: https://ollama.ai
- Start the server:

  ```shell
  ollama serve
  ```

- Pull a model:

  ```shell
  ollama pull mistral
  ```

Optional: set defaults:

```shell
export GIT_COMMIT_MESSAGE_PROVIDER=ollama
export OLLAMA_MODEL=mistral
```

llama.cpp:

- Build and run the llama.cpp server with your model:

  ```shell
  llama-server -hf ggml-org/gpt-oss-20b-GGUF --host 0.0.0.0 --port 8080
  ```

- The server runs on http://localhost:8080 by default.

Optional: set defaults:

```shell
export GIT_COMMIT_MESSAGE_PROVIDER=llamacpp
export LLAMACPP_HOST=http://localhost:8080
```

Note (fish): use `set -x` instead of `export`:

```shell
set -x OPENAI_API_KEY "sk-..."
```

To install from a local checkout (for development):

```shell
python -m pip install -e .
```

Generate and print a commit message:
```shell
git add -A
git-commit-message "optional extra context about the change"
```

Generate a single-line subject only (when no trailers are appended):

```shell
git-commit-message --one-line "optional context"

# with trailers, output is subject plus trailer lines
git-commit-message --one-line --co-author 'John Doe <john.doe@example.com>'
```

Select provider:
```shell
# OpenAI (default)
git-commit-message --provider openai

# Google Gemini (via google-genai)
git-commit-message --provider google

# Ollama
git-commit-message --provider ollama

# llama.cpp
git-commit-message --provider llamacpp
```

Commit immediately (optionally open editor):
```shell
git-commit-message --commit "refactor parser for speed"
git-commit-message --commit --edit "refactor parser for speed"

# add co-author trailers
git-commit-message --commit --co-author 'John Doe <john.doe@example.com>'
git-commit-message --commit --co-author 'John Doe <john.doe@example.com>' --co-author 'Jane Doe <jane.doe@example.com>'
git-commit-message --commit --co-author copilot
```

Amend the previous commit:
```shell
# print only (useful for pasting into a GUI editor)
git-commit-message --amend "optional context"

# amend immediately
git-commit-message --commit --amend "optional context"

# amend immediately, but open editor for final tweaks
git-commit-message --commit --amend --edit "optional context"
```

Limit subject length:

```shell
git-commit-message --one-line --max-length 50
```

Chunk/summarise long diffs by token budget:
```shell
# force a single summary pass over the whole diff (default)
git-commit-message --chunk-tokens 0

# chunk the diff into ~4000-token pieces before summarising
git-commit-message --chunk-tokens 4000

# disable summarisation and use the legacy one-shot prompt
git-commit-message --chunk-tokens -1
```

Select output language/locale (IETF language tag):
```shell
git-commit-message --language en-US
git-commit-message --language ko-KR
git-commit-message --language ja-JP
```

Print debug info:

```shell
git-commit-message --debug
```

Configure Ollama host (if running on a different machine):

```shell
git-commit-message --provider ollama --host http://192.168.1.100:11434
```

Configure llama.cpp host:
```shell
git-commit-message --provider llamacpp --host http://192.168.1.100:8080
```

Options:

- `--provider {openai,google,ollama,llamacpp}`: provider to use (default: `openai`)
- `--model MODEL`: model override (provider-specific; ignored for llama.cpp)
- `--language TAG`: output language/locale (default: `en-GB`)
- `--one-line`: output subject only when no trailers are appended; with `--co-author`, output is a single-line subject plus `Co-authored-by:` trailer lines
- `--max-length N`: max subject length (default: 72)
- `--chunk-tokens N`: token budget per diff chunk (`0` = single summary pass, `-1` disables summarisation)
- `--debug`: print request/response details
- `--commit`: run `git commit -m <message>`
- `--amend`: generate a message suitable for amending the previous commit (the diff is from the amended commit's parent to the staged index; if nothing is staged, this effectively becomes the diff introduced by `HEAD`)
- `--edit`: with `--commit`, open editor for final message
- `--host URL`: host URL for providers like Ollama or llama.cpp (default: `http://localhost:11434` for Ollama, `http://localhost:8080` for llama.cpp)
- `--co-author VALUE`: append `Co-authored-by:` trailer(s); repeat to add multiple values. Accepted forms: `Name <email@example.com>` or `copilot` (alias, case-insensitive)
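The `--co-author` handling can be pictured with a short sketch. The helper names, the validation regex, and the identity used for the `copilot` alias are all illustrative assumptions, not the tool's actual implementation:

```python
import re

# Accepted forms: "Name <email@example.com>" or the alias "copilot" (case-insensitive).
AUTHOR_RE = re.compile(r"^.+ <[^<>@\s]+@[^<>@\s]+>$")
# Hypothetical identity for the alias; the real tool may expand it differently.
COPILOT_IDENTITY = "GitHub Copilot <copilot@example.com>"

def trailer_lines(values: list[str]) -> list[str]:
    """Turn --co-author values into Co-authored-by: trailer lines."""
    lines = []
    for value in values:
        if value.lower() == "copilot":
            value = COPILOT_IDENTITY
        elif not AUTHOR_RE.match(value):
            raise ValueError(f"not in 'Name <email>' form: {value!r}")
        lines.append(f"Co-authored-by: {value}")
    return lines

def one_line_output(subject: str, co_authors: list[str]) -> str:
    """With trailers, --one-line output is the subject plus trailer lines.

    The blank separator line is an assumption based on git's trailer convention.
    """
    if not co_authors:
        return subject
    return "\n".join([subject, "", *trailer_lines(co_authors)])
```

For example, `one_line_output("fix parser", ["Jane Doe <jane.doe@example.com>"])` yields the subject followed by one trailer line, while an empty co-author list yields the bare subject.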
Required environment variables:

- `OPENAI_API_KEY`: when the provider is `openai`
- `GOOGLE_API_KEY`: when the provider is `google`
Optional environment variables:

- `GIT_COMMIT_MESSAGE_PROVIDER`: default provider (`openai` by default); `--provider` overrides this
- `GIT_COMMIT_MESSAGE_MODEL`: model override for any provider; `--model` overrides this
- `OPENAI_MODEL`: OpenAI-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
- `OLLAMA_MODEL`: Ollama-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
- `OLLAMA_HOST`: Ollama server URL (default: `http://localhost:11434`)
- `LLAMACPP_HOST`: llama.cpp server URL (default: `http://localhost:8080`)
- `GIT_COMMIT_MESSAGE_LANGUAGE`: default language/locale (default: `en-GB`)
- `GIT_COMMIT_MESSAGE_CHUNK_TOKENS`: default chunk token budget (default: `0`)
Default models (if not overridden):
- OpenAI: `gpt-5-mini`
- Google: `gemini-2.5-flash`
- Ollama: `gpt-oss:20b`
- llama.cpp: uses the pre-loaded model (the model parameter is ignored)
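The `--chunk-tokens` behaviour described earlier can be sketched roughly as below. The 4-characters-per-token estimate and the split-on-file-boundary strategy are assumptions for illustration, not the tool's actual tokeniser or chunking rules:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token)."""
    return max(1, len(text) // 4)

def chunk_diff(diff: str, chunk_tokens: int) -> list[str]:
    """Split a diff into pieces that fit a per-chunk token budget.

    For 0 and -1 the diff is not split here: with 0 the whole diff still goes
    through one summary pass downstream, while -1 uses the one-shot prompt.
    """
    if chunk_tokens <= 0:
        return [diff]
    chunks, current, current_tokens = [], [], 0
    # Split on per-file boundaries so hunks stay intact within a chunk.
    for part in diff.split("diff --git"):
        if not part:
            continue
        part = "diff --git" + part
        tokens = estimate_tokens(part)
        if current and current_tokens + tokens > chunk_tokens:
            chunks.append("".join(current))
            current, current_tokens = [], 0
        current.append(part)
        current_tokens += tokens
    if current:
        chunks.append("".join(current))
    return chunks
```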
Parts of this project were created with assistance from AI tools (e.g. large language models). All AI-assisted contributions were reviewed and adapted by maintainers before inclusion. If you need provenance for specific changes, please refer to the Git history and commit messages.