A local-first Python AI coding assistant that works with any OpenAI-compatible API.
OpenClose runs entirely on your machine. Connect it to vLLM, llama.cpp, Ollama, or OpenAI — all conversations and data stay in a local SQLite database. The web UI streams responses in real time via SSE, and a headless CLI mode lets you script it into workflows.
- Local-first — SQLite storage, no cloud, no remote auth, no telemetry
- Provider-agnostic — works with any OpenAI-compatible endpoint (vLLM, llama.cpp, Ollama, OpenAI)
- Web UI — three-column layout with session sidebar, chat, and context info panel
- Real-time streaming — Server-Sent Events for live token-by-token output
- Two built-in agents — `build` (full tool access) and `plan` (read-only analysis)
- Custom agents — define your own in `config.toml` with inheritance, custom prompts, and tool restrictions
- 13 built-in tools — read, write, edit, glob, grep, ls, bash, patch, webfetch, todo, question, lint, multiedit
- Permission system — per-tool allow/deny/ask rules with last-match-wins semantics
- Auto-formatting — ruff, black, gofmt, rustfmt, prettier, shfmt, clang-format
- Linting — ruff, mypy, eslint, golangci-lint, clippy, clang-tidy, cppcheck
- Context management — automatic compaction when approaching the context window limit
- CLI mode — `openclose run -p "..."` for non-interactive scripting with optional JSON output
Requires Python 3.12+.
```bash
pip install openclose
```

or with uv:

```bash
uv pip install openclose
```

From source:

```bash
git clone https://github.com/leflakk/openclose.git
cd openclose
uv sync
```

Create `~/.config/openclose/config.toml` (Linux) or `~/Library/Application Support/openclose/config.toml` (macOS):
```toml
[[providers]]
name = "default"
base_url = "http://localhost:8000/v1"
```

If your provider requires an API key, set it via environment variable or config:
```bash
export OPENCLOSE_API_KEY="sk-..."
# or
export OPENAI_API_KEY="sk-..."
```

Key resolution order: the `OPENCLOSE_API_KEY` environment variable, then `OPENAI_API_KEY`, then the `api_key` field in the config file.
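That resolution order can be sketched as a few lines of Python; `resolve_api_key` is a hypothetical helper for illustration, not OpenClose's actual code:

```python
import os

def resolve_api_key(config_api_key: str = "") -> str:
    """Pick the API key by precedence: OPENCLOSE_API_KEY env var,
    then OPENAI_API_KEY env var, then the config file's api_key field."""
    return (
        os.environ.get("OPENCLOSE_API_KEY")
        or os.environ.get("OPENAI_API_KEY")
        or config_api_key
    )

# With neither env var set, the config value wins by default
os.environ.pop("OPENCLOSE_API_KEY", None)
os.environ.pop("OPENAI_API_KEY", None)
print(resolve_api_key("sk-from-config"))  # sk-from-config

# OPENCLOSE_API_KEY takes precedence over everything else
os.environ["OPENCLOSE_API_KEY"] = "sk-env"
print(resolve_api_key("sk-from-config"))  # sk-env
```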
```bash
openclose serve
```

Opens your browser to http://127.0.0.1:9876. Use `--host`, `--port`, `--no-browser`, or `--project-dir` to customize.
```bash
openclose run -p "Explain the main function in src/main.py"
openclose run -p "Add error handling to the parser" --agent build
openclose run -p "Analyze the test coverage gaps" --agent plan --json
```

List existing sessions:

```bash
openclose sessions
```

Configuration is loaded in priority order (highest wins):
- Environment variables (`OPENCLOSE_*`)
- Project config (`.openclose/config.toml` in your project directory)
- User config (`~/.config/openclose/config.toml`)
- Defaults
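The layering behaves like a chain of dict merges where later (higher-priority) layers override earlier ones. This is an illustrative sketch, not OpenClose's actual loader:

```python
def merge_config(*layers: dict) -> dict:
    """Merge config layers, earliest = lowest priority.
    Later layers override keys from earlier ones."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged

defaults = {"default_agent": "build", "max_context_tokens": 128000}
user_cfg = {"max_context_tokens": 64000}   # ~/.config/openclose/config.toml
project_cfg = {"default_agent": "plan"}    # .openclose/config.toml
env_cfg = {}                               # from OPENCLOSE_* variables

cfg = merge_config(defaults, user_cfg, project_cfg, env_cfg)
print(cfg)  # {'default_agent': 'plan', 'max_context_tokens': 64000}
```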
Example `config.toml` (note that top-level keys such as the session defaults must appear before any `[[...]]` table, or TOML will attach them to that table):

```toml
# Session defaults
default_agent = "build"
max_context_tokens = 128000
compaction_threshold = 0.9

# Provider — any OpenAI-compatible endpoint
[[providers]]
name = "default"
base_url = "http://localhost:8000/v1"
api_key = ""
default_model = ""

# Override built-in agent settings
[[agents]]
name = "build"
model = "qwen3-30b-a3b"

# Custom agent with inheritance
[[agents]]
name = "review"
extends = "plan"
description = "Code reviewer"
system_prompt = "You are a code reviewer. Focus on bugs, security, and clarity."

# Permission rules (last match wins)
[[permissions]]
tool = "*"
action = "ask"

[[permissions]]
tool = "read"
action = "allow"

[[permissions]]
tool = "bash"
path = "/tmp/*"
action = "deny"
```

See docs/guide-agents-customization.md for the full agent customization guide.
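To make last-match-wins concrete: every rule is checked in order, and the final matching rule decides. A minimal Python sketch of that semantics (not OpenClose's actual implementation; glob matching via `fnmatch` is an assumption):

```python
from fnmatch import fnmatch

def decide(rules: list[dict], tool: str, path: str = "") -> str:
    """Return allow/deny/ask for a tool call: scan all rules in
    order and keep the action of the last one that matches."""
    action = "ask"  # fallback if no rule matches
    for rule in rules:
        if rule["tool"] not in ("*", tool):
            continue  # rule is for a different tool
        if "path" in rule and not fnmatch(path, rule["path"]):
            continue  # path-scoped rule that doesn't apply here
        action = rule["action"]
    return action

rules = [
    {"tool": "*", "action": "ask"},
    {"tool": "read", "action": "allow"},
    {"tool": "bash", "path": "/tmp/*", "action": "deny"},
]
print(decide(rules, "read", "src/main.py"))       # allow
print(decide(rules, "bash", "/tmp/x.sh"))         # deny
print(decide(rules, "bash", "scripts/build.sh"))  # ask (only the wildcard matched)
```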
| Agent | Description | Tool restrictions |
|---|---|---|
| `build` | Primary agent with full tool access for code writing and execution | None |
| `plan` | Read-only analysis agent (extends `build`) | Denied: write, edit, bash, patch |
Custom agents support:

- Inheritance via `extends` — create variants without repeating config
- Semantic traits — e.g. `["readonly"]` adjusts the system prompt automatically
- Tool filtering — `allowed_tools` and `denied_tools` control what the LLM can call
- Custom system prompts with `$variable` substitution
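Combining those features, a custom agent definition might look like the following sketch. The `traits` and `allowed_tools` keys mirror the feature names above, but treat the exact field names as assumptions and check docs/guide-agents-customization.md:

```toml
[[agents]]
name = "docs"
extends = "plan"
description = "Documentation writer"
traits = ["readonly"]
allowed_tools = ["read", "grep", "glob", "ls"]
system_prompt = "You write documentation. Focus on clarity and short examples."
```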
The web UI has a three-column layout: sessions sidebar, chat panel, and context/files sidebar.
| Command | Description |
|---|---|
| `/new` | Start a new session |
| `/sessions` | Switch to another session |
| `/agents` | Switch agent |
| `/compact` | Compress context window |
| `/undo` | Remove last message pair |
| `/export` | Export session as JSON |
| `/copy` | Copy last response |
| `/skip` | Toggle auto-approve permissions |
| `/help` | Show available commands |
Sessions can be forked to continue a conversation with a different agent.
See ARCHITECTURE.md for the full package structure, design decisions, and data flow diagrams.
```bash
git clone https://github.com/leflakk/openclose.git
cd openclose
uv sync
uv run pytest tests/
```

Code quality requirements:
- Linting — `uv run ruff check src/ tests/`
- Type checking — `uv run mypy --strict src/ tests/`
- Tests — `uv run pytest tests/ --cov=openclose --cov-fail-under=80`
CI runs all three checks on every push and pull request via GitHub Actions.