AI-powered commit message generator for VS Code. Generate clean, consistent commits in seconds using local AI with Ollama.
Advantages over writing commits by hand:
- Consistent format across your team
- Never forget what you changed
- Follows best practices automatically
- Saves 5+ minutes per day
Completely free and private:
- Zero API costs - runs entirely on your machine
- Complete privacy - your code never leaves your computer
- Offline use - works without internet
- No account required - no API keys needed
- Install Ollama (takes 1-2 minutes)
- Run `ollama pull qwen2.5-coder`
- Install the extension
- Click the robot icon in your Git panel
- Select "Local" provider
- Done! No API key, no credit card, no tracking.
- AI-Powered Commit Messages: Uses AI to analyze your changes and generate meaningful commit messages
- Local AI Only: Privacy-first approach with Ollama for completely local operation
- Conventional Commits Support: Automatically follows conventional commit format with configurable types and scopes
- Smart Caching: Reuses recent commit messages for identical changes
- Review Mode: Preview and approve commit messages before applying
- Customizable: Configure AI model, prompt template, and conventional commit rules
- Local Processing: Your code never leaves your machine
- No API Keys: No account or credentials required
- Offline Capable: Works without internet
- Open Source: Audit the code yourself
GitMsgOllama uses Ollama for completely local, private AI-powered commit messages.
| Model | Size | RAM Required | Best For |
|---|---|---|---|
| `qwen2.5-coder` | ~4GB | 8GB | Code-focused, recommended |
| `codellama:7b` | 3.8GB | 8GB | Coding, fast responses |
| `mistral:7b` | 4.1GB | 8GB | General purpose, quality |
| `llama2:13b` | 7.3GB | 16GB | Higher quality |
| `deepseek-coder:6.7b` | 3.8GB | 8GB | Code-specific tasks |
- Install Ollama from Ollama.ai
- Download a model: `ollama pull qwen2.5-coder` (or `ollama pull codellama:7b`)
- Install the extension from VS Code Marketplace
- Configure in VS Code:
  - Open Command Palette (Ctrl+Shift+P / Cmd+Shift+P)
  - Run `GitMsgOllama: Select Provider` and choose "Local"
  - Set base URL: `http://localhost:11434/v1` (default)
  - Run `GitMsgOllama: Select Model` and choose your downloaded model
- Start generating commit messages!
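After these steps, your `settings.json` would contain something like the following. The keys come from the configuration tables later in this README; the values shown are the defaults:

```json
{
  "gitmsgollama.provider": "local",
  "gitmsgollama.local.model": "qwen2.5-coder",
  "gitmsgollama.local.baseUrl": "http://localhost:11434/v1"
}
```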
For detailed setup instructions and troubleshooting, see docs/PROVIDERS.md.
- Minimum: 8GB RAM (for 7B models)
- Recommended: 16GB+ RAM (for 13B models)
- GPU: Optional but significantly faster (NVIDIA/AMD/Apple Silicon)
- VS Code
- Ollama (running locally)
- Make changes to your code
- Stage your changes in Git (using Source Control panel)
- Click the robot head icon in the Source Control message input box or title bar
- The AI will analyze your changes and generate a commit message
- Edit the message if needed and commit as usual
When you generate a commit message for the same set of changes:
- The extension checks if a cached message exists
- If found, you'll see: "Found cached suggestion from X minutes ago"
- Choose one of:
  - Use Cached - Apply the previously generated message
  - Generate New - Create a fresh message
  - Dismiss - Cancel the operation
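The caching behavior described above can be modeled as a lookup keyed by a hash of the staged diff, so identical changes reuse the previous suggestion. This is an illustrative sketch only; the class and method names are hypothetical, not the extension's actual API:

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: cache generated messages keyed by a hash of the
// staged diff, evicting the oldest entry once the size limit is reached.
class MessageCache {
  private entries = new Map<string, { message: string; at: number }>();

  constructor(private maxSize = 10) {} // mirrors gitmsgollama.cacheSize

  private key(diff: string): string {
    return createHash("sha256").update(diff).digest("hex");
  }

  get(diff: string): string | undefined {
    return this.entries.get(this.key(diff))?.message;
  }

  set(diff: string, message: string): void {
    if (this.entries.size >= this.maxSize) {
      // Evict the oldest entry (Map preserves insertion order).
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(this.key(diff), { message, at: Date.now() });
  }
}
```

Hashing the diff (rather than comparing it directly) keeps lookups cheap even for large changesets.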
To clear the cache:
- Command Palette → `GitMsgOllama: Clear Cache`
Enable review mode to preview messages before applying:
```json
{
  "gitmsgollama.reviewBeforeApply": true
}
```

With review mode enabled:
- Generate a commit message
- Review the suggested message in a quick pick dialog
- Choose to accept, edit, or regenerate
The extension automatically generates conventional commit messages. Example output:
```
feat: add user authentication
fix(api): handle null response from endpoint
docs: update installation instructions
refactor(utils): simplify date formatting logic
```
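A message in this format can be validated with a simple pattern. The sketch below is illustrative (the function name is hypothetical) and uses the default allowed types from the configuration:

```typescript
// Hypothetical sketch: validate "type(scope)?: description" against the
// default allowed commit types.
const ALLOWED_TYPES = ["feat", "fix", "docs", "style", "refactor", "test", "chore"];

function isConventional(message: string): boolean {
  const match = message.match(/^([a-z]+)(\([^)]+\))?: .+$/);
  return match !== null && ALLOWED_TYPES.includes(match[1]);
}
```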
Configure conventional commits:
```json
{
  "gitmsgollama.conventionalCommits.enabled": true,
  "gitmsgollama.conventionalCommits.types": ["feat", "fix", "docs", "style", "refactor", "test", "chore"],
  "gitmsgollama.conventionalCommits.enableScopeDetection": true
}
```

| Setting | Type | Default | Description |
|---|---|---|---|
| `gitmsgollama.provider` | string | `local` | AI provider (local only) |
| `gitmsgollama.local.model` | string | `qwen2.5-coder` | Local model (Ollama) |
| `gitmsgollama.local.baseUrl` | string | `http://localhost:11434/v1` | Ollama server URL |
| `gitmsgollama.prompt` | string | (see below) | Custom prompt template (use `{changes}` placeholder) |
| `gitmsgollama.timeout` | number | `30` | API request timeout in seconds |
| Setting | Type | Default | Description |
|---|---|---|---|
| `gitmsgollama.reviewBeforeApply` | boolean | `true` | Review messages before applying |
| `gitmsgollama.enableCache` | boolean | `true` | Enable commit message caching |
| `gitmsgollama.cacheSize` | number | `10` | Maximum cached messages |
| Setting | Type | Default | Description |
|---|---|---|---|
| `gitmsgollama.conventionalCommits.enabled` | boolean | `true` | Enable conventional commits support |
| `gitmsgollama.conventionalCommits.types` | array | `["feat", "fix", "docs", ...]` | Allowed commit types |
| `gitmsgollama.conventionalCommits.scopes` | array | `[]` | Allowed scopes (empty = any) |
| `gitmsgollama.conventionalCommits.enableScopeDetection` | boolean | `true` | Auto-detect scope from file paths |
| `gitmsgollama.conventionalCommits.requireScope` | boolean | `false` | Require scope in messages |
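When scope detection is enabled, the scope is derived from the staged file paths. One minimal way this could work is to use the directory shared by all changed files; the function below is a hypothetical sketch, not the extension's actual implementation:

```typescript
// Hypothetical sketch: if all staged files share one top-level directory,
// use it as the commit scope; otherwise omit the scope.
function detectScope(paths: string[]): string | undefined {
  const tops = new Set(paths.map((p) => p.split("/")[0]));
  const [top] = tops;
  return tops.size === 1 && paths.every((p) => p.includes("/")) ? top : undefined;
}
```

A file set like `api/routes.ts` + `api/handlers.ts` would yield the scope `api`, producing messages such as `fix(api): ...`.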
The default prompt generates conventional commit messages:
```
Given these staged changes:
{changes}

Generate a commit message that follows these rules:
1. Start with a type (feat/fix/docs)
2. Keep it under 50 characters
3. Use imperative mood
```
You can customize this in settings to match your team's commit message style.
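Filling in the `{changes}` placeholder amounts to a plain string substitution of the staged diff into the template. A minimal sketch (the function name is illustrative):

```typescript
// Hypothetical sketch: build the final prompt by inserting the staged
// diff into the template at the {changes} placeholder.
function buildPrompt(template: string, stagedDiff: string): string {
  return template.replace("{changes}", stagedDiff);
}
```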
When you generate a commit message with local AI:
- Git diff of your staged changes stays on your machine
- Your prompt template with the diff stays local
- No external servers - nothing is sent to the internet
- No API keys or accounts required
- Your code never leaves your machine
- No data is sent to external AI providers
- No telemetry or analytics
- No usage tracking
- Open source - audit the code yourself
- Ensure Ollama is running
- Check the base URL matches: `http://localhost:11434/v1`
- Check the firewall isn't blocking local connections
- Verify the model is downloaded: run `ollama list`
- Check the model name matches exactly
- Download a model: `ollama pull qwen2.5-coder`
- Use smaller model (7B instead of 13B)
- Enable GPU acceleration in Ollama settings
- Reduce context length
- Use quantized models
- Use smaller model
- Close other applications
- Increase system swap/page file
For more detailed troubleshooting, see docs/PROVIDERS.md.
| Command | Description |
|---|---|
| `GitMsgOllama: Generate Commit Message` | Generate a commit message for staged changes |
| `GitMsgOllama: Select Provider` | Choose your AI provider (Local) |
| `GitMsgOllama: Set API Key` | Not required for Local (no API key needed) |
| `GitMsgOllama: Select Model` | Browse and select from available Ollama models |
| `GitMsgOllama: Test Provider Connection` | Test your Ollama configuration |
| `GitMsgOllama: Clear Cache` | Clear all cached commit messages |
Want to contribute? See development setup:
- Clone the repository
- Run `npm install`
- Open in VS Code
- Press F5 to start debugging
- Make your changes and submit a PR
This project is licensed under the MIT License - see the LICENSE file for details.
- Report bugs: GitHub Issues
- Security vulnerabilities: See SECURITY.md
- Feature requests: GitHub Issues
This project is a fork of GitMsgAI by Chase Rich. The original project provided the foundation for this privacy-focused Ollama-only version.