# mita-code

A local-first, terminal-native agentic coding assistant that runs LLMs entirely on your machine via Ollama. No API keys. No cloud. No telemetry.
## Features

- 100% Local — All inference runs on your hardware via Ollama. Your code never leaves your machine.
- Agentic Tool Loop — Read/write files, run shell commands, git operations — with confirmation for destructive actions.
- Hardware-Aware Model Recommendations — Detects your RAM, VRAM, and GPU to recommend models that will actually run well.
- MCP Plugin System — Compatible with the existing Model Context Protocol ecosystem (stdio and SSE transport).
- Skills — Reusable, parameterized prompt templates stored as Markdown files (e.g., `/commit`, `/review`); see the sketch after this list.
- Layered Memory — `MITA.md` files at global, project, and directory scope are automatically injected into context.
- Layered Config — TOML configuration cascades from global to project level.
- Hooks — Lifecycle shell commands that fire on events like file writes or tool calls.
- Codebase Indexing — Local vector search (LanceDB + Tree-sitter) for RAG over your codebase.
- Unix Philosophy — Composable, pipeable, scriptable.
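As a purely illustrative sketch of the Skills feature referenced above (the file location and the `{{args}}` placeholder are assumptions, not Mita's documented format), a `/commit` skill could be a single Markdown file:

```markdown
<!-- Hypothetical location: ~/.config/mita/skills/commit.md -->
Review the staged diff and draft a Conventional Commits message.
Additional focus requested by the user: {{args}}
```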
## Requirements

- Python 3.11+
- Ollama installed and running
## Installation

```bash
pipx install mita-code
```

Or for development:

```bash
git clone https://github.com/jtdub/mita-code.git
cd mita-code
poetry install
```

## Quick Start

```bash
# Start Ollama (if not already running)
ollama serve
# Pull a coding model
mita models pull qwen2.5-coder:7b
# Start an interactive session
mita chat
# Or ask a single question
mita ask "explain the auth module in this project"mita chat # Start agentic chat session
mita chat --model deepseek-coder-v2:16b # Use a specific model
mita chat --no-tools # Pure chat, no tool execution
```

### Ask

```bash
mita ask "refactor this function to use async"
cat error.log | mita ask "what went wrong?"
```

### Models

```bash
mita models recommend # See what fits your hardware
mita models list # List installed models
mita models pull qwen2.5-coder:14b # Pull a model
mita models default qwen2.5-coder:14b # Set as default
```

### Memory

```bash
mita memory show # View all active memory
mita memory add "Always use pytest" --project # Add project-level memory
mita memory edit # Edit nearest MITA.md
```

### Index

```bash
mita index build # Index the current project
mita index search "database connection" # Search the index
```

### Skills

```bash
mita skills list # List available skills
# In chat, use /skill_name to invoke:
# mita> /commit
# mita> /review
```

### Plugins

```bash
mita plugins add filesystem --command "npx @modelcontextprotocol/server-filesystem ."
mita plugins list # List plugins and their tools
```

### Config

```bash
mita config show # Show merged configuration
mita config edit --global # Edit global config
mita config set model.default "qwen2.5-coder:14b"
```

### Doctor

```bash
mita doctor # Check Ollama, models, config health
```

## Configuration

Global config lives at `~/.config/mita/config.toml`. Project-level overrides go in `.mita/settings.toml`.

```toml
[model]
default = "qwen2.5-coder:7b"
temperature = 0.1

[tools]
auto_approve = ["file_read", "glob", "grep"]
confirm_destructive = true

[index]
enabled = true
top_k = 10
```

See PLANNING.md for the full configuration schema.
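Because configuration cascades, a project-level file only needs the keys it overrides. A minimal sketch of a hypothetical `.mita/settings.toml`, reusing key names from the global example above (the values are illustrative):

```toml
# .mita/settings.toml: project-level overrides (illustrative values)
[model]
default = "deepseek-coder-v2:16b"  # this repo wants a larger model

[index]
top_k = 5  # smaller codebase, tighter retrieval
```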
## Memory

Mita uses layered `MITA.md` files that are automatically discovered and injected into context:
| Scope | Location | Purpose |
|---|---|---|
| Global | `~/.config/mita/MITA.md` | Preferences across all projects |
| Project | `<project_root>/MITA.md` | Project-specific conventions |
| Directory | `<subdir>/MITA.md` | Directory-specific context |
Higher-specificity files take priority. Each file is capped at 200 lines.
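A `MITA.md` file is ordinary Markdown; the example below is hypothetical content for a project-level file, not a required format:

```markdown
<!-- <project_root>/MITA.md -->
- Always use pytest; never unittest.
- All database access goes through the session helper in src/db.
- New I/O code should be async.
```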
## Tech Stack

| Component | Library |
|---|---|
| CLI | Typer |
| Terminal UI | Rich |
| LLM Runtime | Ollama |
| LLM Client | LiteLLM |
| Structured Output | Instructor |
| Vector Store | LanceDB |
| Code Parsing | Tree-sitter |
| Config | TOML (stdlib tomllib) |
| Plugins | MCP |
See PLANNING.md for the full project plan, architecture, and build phases.
## Documentation

Full documentation is available at [mita-code.readthedocs.io](https://mita-code.readthedocs.io).
## License

Apache 2.0 — see LICENSE.