Note
Fork of the now-archived https://github.com/MerrukTechnology/OpenCode-Native. The focus has shifted to a headless experience oriented towards autonomous agents.
OpenCode is a powerful CLI tool that brings AI assistance directly to your terminal. It provides both an interactive Terminal User Interface (TUI) for human users and a headless non-interactive mode ideal for scripting and autonomous agents. OpenCode acts as an intelligent coding assistant that can read, edit, and create files, execute shell commands, search through codebases, and integrate with various AI language models.
Designed for developers who prefer command-line workflows, OpenCode offers deep integration with popular AI providers while maintaining a seamless terminal experience. Whether you need help debugging code, implementing new features, or automating repetitive development tasks, OpenCode provides the tools and flexibility to get the job done efficiently.
- Interactive TUI built with Bubble Tea
- Non-interactive mode for headless automation and autonomous agents
- Flows: deterministic multi-step agent workflows defined in YAML (guide)
- Subagents: highly customizable agents that delegate work to other agents [[#Agents]]
- Multiple AI providers: Anthropic, OpenAI, Google Gemini, AWS Bedrock, VertexAI, Kilo, Mistral, and self-hosted
- Tool integration: file operations, shell commands, code search, LSP code intelligence
- Structured output: enforce the agent's final output against a JSON schema, ideal for automated pipelines
- MCP support: extend capabilities via Model Context Protocol servers
- Agent skills: reusable instruction sets loaded on-demand (guide)
- Custom commands: predefined prompts with named arguments (guide)
- Session management with SQLite or MySQL storage (guide)
- LSP integration with auto-install for 30+ language servers (guide)
- File change tracking during sessions
```shell
# Latest version
curl -fsSL https://raw.githubusercontent.com/MerrukTechnology/OpenCode-Native/refs/heads/main/install | bash

# Specific version
curl -fsSL https://raw.githubusercontent.com/MerrukTechnology/OpenCode-Native/refs/heads/main/install | VERSION=0.1.0 bash

# Homebrew
brew install MerrukTechnology/tap/opencode

# Arch Linux (AUR)
yay -S opencode-native-bin

# Go
go install github.com/MerrukTechnology/OpenCode-Native@latest
```

```shell
opencode                      # Start TUI
opencode -d                   # Debug mode
opencode -c /path/to/project  # Set working directory
opencode -a hivemind          # Start with a specific agent
opencode -s <session-id>      # Resume or create a session
opencode -s <session-id> -D   # Delete session and start fresh
```

```shell
opencode -p "Explain context in Go"          # Single prompt
opencode -p "Explain context in Go" -f json  # JSON output
opencode -p "Explain context in Go" -q       # Quiet (no spinner)
opencode -p "Refactor this module" -t 5m     # With 5-minute timeout
opencode -p "Explain context in Go" -F review-code -A hash=93706ee  # Run the review-code flow with args
opencode -p "Explain context in Go" -F ralph-does -s wiggum         # Run a flow with pinned session data
```

All permissions are auto-approved in non-interactive mode.
| Flag | Short | Description |
|---|---|---|
| `--help` | `-h` | Display help |
| `--debug` | `-d` | Enable debug mode |
| `--cwd` | `-c` | Set working directory |
| `--prompt` | `-p` | Non-interactive single prompt |
| `--agent` | `-a` | Agent ID to use (e.g. `coder`, `hivemind`) |
| `--session` | `-s` | Session ID to resume or create |
| `--delete` | `-D` | Delete the session specified by `--session` before starting |
| `--output-format` | `-f` | Output format: `text` (default), `json` |
| `--quiet` | `-q` | Hide spinner in non-interactive mode |
| `--timeout` | `-t` | Timeout for non-interactive mode (e.g. `10s`, `30m`, `1h`) |
| `--flow` | `-F` | Flow ID to execute, more info |
| `--arg` | `-A` | Flow argument as `key=value` (repeatable) |
| `--args-file` | | JSON file with flow arguments |
| `--project-id` | `-P` | Custom project ID to group sessions (overrides detected Git/basename) |
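For flows that take several arguments, `--args-file` can replace a string of repeated `-A key=value` flags with a single JSON object whose keys become the flow arguments. A hypothetical `args.json` (the keys are flow-specific, not fixed):

```json
{
  "hash": "93706ee",
  "target": "main"
}
```

Invoked as, for example, `opencode -F review-code --args-file ./args.json`.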
OpenCode looks for `.opencode.json` in:

- `./.opencode.json` (project directory)
- `$XDG_CONFIG_HOME/opencode/.opencode.json`
- `$HOME/.opencode.json`
```json
{
  "data": {
    "directory": ".opencode"
  },
  "providers": {
    "openai": { "apiKey": "..." },
    "anthropic": { "apiKey": "..." },
    "gemini": { "apiKey": "..." },
    "vertexai": {
      "project": "your-project-id",
      "location": "us-central1"
    }
  },
  "agents": {
    "coder": {
      "model": "vertexai.claude-opus-4-6",
      "maxTokens": 5000,
      "reasoningEffort": "high"
    },
    "explorer": {
      "model": "claude-4-5-sonnet[1m]",
      "maxTokens": 5000
    },
    "summarizer": {
      "model": "vertexai.gemini-3.0-flash",
      "maxTokens": 5000
    },
    "descriptor": {
      "model": "claude-4-5-sonnet[1m]",
      "maxTokens": 80
    }
  },
  "shell": {
    "path": "/bin/bash",
    "args": ["-l"]
  },
  "mcpServers": {
    "example": {
      "type": "stdio",
      "command": "path/to/mcp-server",
      "args": []
    }
  },
  "lsp": {
    "gopls": {
      "initialization": { "codelenses": { "test": true } }
    }
  },
  "sessionProvider": { "type": "sqlite" },
  "skills": { "paths": ["~/my-skills"] },
  "permission": {
    "skill": { "*": "ask" },
    "rules": {
      "bash": { "*": "ask", "git *": "allow" },
      "edit": { "*": "allow" }
    }
  },
  "webSearch": {
    "providers": {
      "tavily": {
        "baseUrl": "https://api.tavily.com/search",
        "apiKey": "env:TAVILY_API_KEY",
        "description": "Web search via Tavily"
      }
    }
  },
  "autoCompact": true,
  "debug": false
}
```

Each built-in agent can be customized:
| Agent | Mode | Purpose |
|---|---|---|
| `coder` | agent | Main coding agent (all tools) |
| `hivemind` | agent | Supervisory agent for coordinating subagents |
| `explorer` | subagent | Fast codebase exploration (read-only tools) |
| `workhorse` | subagent | Autonomous coding subagent (all tools) |
| `summarizer` | subagent | Session summarization |
| `descriptor` | subagent | Session title generation |
Agent fields:
| Field | Description |
|---|---|
| `model` | Model ID to use |
| `maxTokens` | Maximum response tokens |
| `reasoningEffort` | `low`, `medium`, `high` (default), `max` |
| `mode` | `agent` (primary, switchable via tab) or `subagent` (invoked via task tool) |
| `name` | Display name for the agent |
| `description` | Short description of agent's purpose |
| `permission` | Agent-specific permission overrides (supports granular glob patterns) |
| `tools` | Enable/disable specific tools (e.g., `{"skill": false}`) |
| `color` | Badge color for subagent indication in TUI |
Define custom agents as markdown files with YAML frontmatter. Discovery locations (merge priority, lowest to highest):
- `~/.config/opencode/agents/*.md` — Global agents
- `~/.agents/types/*.md` — Global agents
- `.opencode/agents/*.md` — Project agents
- `.agents/types/*.md` — Project agents
- `.opencode.json` config — Highest priority
Example `.opencode/agents/reviewer.md`:

```markdown
---
name: Code Reviewer
description: Reviews code for quality and best practices
mode: subagent
model: vertexai.claude-opus-4-6
color: info
tools:
  bash: false
  write: false
---
You are a code review specialist...
```

The file basename (without `.md`) becomes the agent ID. Custom agents default to `subagent` mode.
When enabled (default), automatically summarizes conversations approaching the context window limit (95%) and continues in a new session.
```json
{ "autoCompact": true }
```

Override the default shell (falls back to `$SHELL` or `/bin/bash`):
```json
{
  "shell": {
    "path": "/bin/zsh",
    "args": ["-l"]
  }
}
```

MCP server configuration:

```json
{
  "mcpServers": {
    "stdio-example": {
      "type": "stdio",
      "command": "path/to/server",
      "env": [],
      "args": []
    },
    "sse-example": {
      "type": "sse",
      "url": "https://example.org/mcp",
      "headers": { "Authorization": "Bearer token" }
    },
    "http-example": {
      "type": "http",
      "url": "https://example.com/mcp",
      "headers": { "Authorization": "Bearer token" }
    }
  }
}
```

OpenCode auto-detects and starts LSP servers for your project's languages. Over 30 servers are built-in with auto-install support. See the full LSP guide for details.
```json
{
  "lsp": {
    "gopls": {
      "env": { "GOFLAGS": "-mod=vendor" },
      "initialization": { "codelenses": { "test": true } }
    },
    "typescript": { "disabled": true },
    "my-lsp": {
      "command": "my-lsp-server",
      "args": ["--stdio"],
      "extensions": [".custom"]
    }
  },
  "disableLSPDownload": false
}
```

Disable auto-download of LSP binaries via config (`"disableLSPDownload": true`) or env var (`OPENCODE_DISABLE_LSP_DOWNLOAD=true`).
Local endpoint:

```shell
export LOCAL_ENDPOINT=http://localhost:1235/v1
export LOCAL_ENDPOINT_API_KEY=secret
```

```json
{
  "agents": {
    "coder": {
      "model": "local.granite-3.3-2b-instruct@q8_0"
    }
  }
}
```

LiteLLM proxy:
```json
{
  "providers": {
    "vertexai": {
      "apiKey": "litellm-api-key",
      "baseURL": "https://localhost/vertex_ai",
      "headers": {
        "x-litellm-api-key": "litellm-api-key"
      }
    }
  }
}
```

| Variable | Purpose |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic Claude models |
| `OPENAI_API_KEY` | OpenAI models |
| `GEMINI_API_KEY` | Google Gemini models |
| `VERTEXAI_PROJECT` | Google Cloud VertexAI |
| `VERTEXAI_LOCATION` | Google Cloud VertexAI |
| `VERTEXAI_LOCATION_COUNT` | VertexAI token-count endpoint (the `global` location doesn't support it) |
| `AWS_ACCESS_KEY_ID` | AWS Bedrock |
| `AWS_SECRET_ACCESS_KEY` | AWS Bedrock |
| `AWS_REGION` | AWS Bedrock |
| `KILO_API_KEY` | Kilo models |
| `MISTRAL_API_KEY` | Mistral models |
| `LOCAL_ENDPOINT` | Self-hosted model endpoint |
| `LOCAL_ENDPOINT_API_KEY` | Self-hosted model API key |
| `SHELL` | Default shell |
| `OPENCODE_SESSION_PROVIDER_TYPE` | `sqlite` (default) or `mysql` |
| `OPENCODE_MYSQL_DSN` | MySQL connection string |
| `OPENCODE_DISABLE_CLAUDE_SKILLS` | Disable `.claude/skills/` discovery |
| `OPENCODE_DISABLE_LSP_DOWNLOAD` | Disable auto-install of LSP servers |
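For example, switching session storage to MySQL via environment variables. The DSN below is a hypothetical value in the `user:pass@tcp(host:port)/db` shape common to Go MySQL drivers; the exact format OpenCode expects is an assumption, so check the session providers guide:

```shell
export OPENCODE_SESSION_PROVIDER_TYPE=mysql
export OPENCODE_MYSQL_DSN="opencode:secret@tcp(127.0.0.1:3306)/opencode"
```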
OpenCode is organized into several key packages that work together to provide a seamless AI coding experience:
```
internal/
├── agent/        # Agent registry and management
├── app/          # Application setup and LSP integration
├── completions/  # Shell completions
├── config/       # Configuration management
├── db/           # Database providers (SQLite, MySQL)
├── diff/         # Diff/patch functionality
├── fileutil/     # File utilities
├── format/       # Code formatting
├── history/      # File change tracking
├── llm/
│   ├── agent/    # Agent implementation and tool execution
│   ├── models/   # Model definitions and metadata
│   ├── prompt/   # System prompts for different agents
│   ├── provider/ # LLM provider implementations
│   └── tools/    # Tool implementations (edit, bash, grep, etc.)
├── logging/      # Logging infrastructure
├── lsp/          # LSP client and language server management
├── message/      # Message types and content handling
├── permission/   # Permission system for tool access
├── session/      # Session management
├── skill/        # Agent skills system
└── tui/          # Terminal User Interface
```
| Component | Purpose |
|---|---|
| Agent System | Manages different AI agent types (coder, hivemind, explorer, etc.) |
| LLM Providers | Unified interface for OpenAI, Anthropic, Google Gemini, VertexAI, and more |
| Tool System | File operations, shell commands, code search, LSP integration |
| Skills | Reusable instruction sets loaded on-demand for specialized tasks |
| Session Storage | SQLite or MySQL for persistent conversation history |
| LSP Client | Auto-detects languages and provides code intelligence |
For a detailed architecture overview, see the Providers and Models Guide.
OpenCode supports a wide range of LLM providers and models:
| Provider | Models |
|---|---|
| OpenAI | GPT-5, O3 Mini, O4 Mini, GPT-4o, GPT-4o-mini, GPT-4 Turbo |
| Anthropic | Claude 4.6 Sonnet (200K), Claude 4.6 Opus (200K), Claude 4.5 Sonnet |
| Google Gemini | Gemini 3.0 Pro, Gemini 3.0 Flash, Gemini 2.0 Flash |
| AWS Bedrock | Claude 4.5 Sonnet (via Bedrock) |
| VertexAI | Claude 4.6 Sonnet (1M), Claude 4.6 Opus (1M), Gemini 3.0 Pro, Gemini 3.0 Flash |
| Kilo | Kilo Auto |
| Mistral | Mistral models |
| Groq | Groq-hosted models |
| DeepSeek | DeepSeek chat models |
| OpenRouter | Aggregated access to 100+ models |
| Local/Ollama | Any OpenAI-compatible local model |
For a complete list of supported models and their configurations, see the Providers and Models Guide.
| Tool | Description |
|---|---|
| `glob` | Find files by pattern |
| `grep` | Search file contents |
| `ls` | List directory contents |
| `read` | Read file contents |
| `view_image` | View image files as base64 |
| `write` | Write to files |
| `edit` | Edit files |
| `multiedit` | Multiple edits in one file |
| `patch` | Apply patches to files |
| `lsp` | Code intelligence (go-to-definition, references, hover, etc.) |
| `delete` | Delete file or directory |
| Tool | Description |
|---|---|
| `bash` | Execute shell commands |
| `webfetch` | Fetch data from URLs |
| `websearch` | Search the internet via configured WebSearch providers |
| `sourcegraph` | Search public repositories |
| `task` | Run sub-tasks with a subagent (supports `subagent_type` and `task_id` for resumption) |
| `skill` | Load agent skills on-demand |
| `struct_output` | Emit structured JSON conforming to a user-supplied schema |
| Shortcut | Action |
|---|---|
| `Ctrl+C` | Quit |
| `Ctrl+H` | Toggle help |
| `Ctrl+L` | View logs |
| `Ctrl+A` | Switch session |
| `Ctrl+N` | New session |
| `Ctrl+P` | Prune session |
| `Ctrl+K` | Command dialog |
| `Ctrl+O` | Model selection |
| `Ctrl+X` | Cancel generation |
| `Tab` | Switch primary agent |
| `Esc` | Close dialog / exit mode |
| Shortcut | Action |
|---|---|
| `i` | Focus editor |
| `Ctrl+S` / `Enter` | Send message |
| `Ctrl+E` | Open external editor |
| `Esc` | Blur editor |
| Shortcut | Action |
|---|---|
| `↑`/`k`, `↓`/`j` | Navigate items |
| `←`/`h`, `→`/`l` | Switch tabs/providers |
| `Enter` | Select |
| `a` / `A` / `d` | Allow / Allow for session / Deny (permissions) |
| Topic | Link |
|---|---|
| Skills | docs/skills.md |
| Flows | docs/flows.md |
| Custom Commands | docs/custom-commands.md |
| Session Providers | docs/session-providers.md |
| LSP Servers | docs/lsp.md |
| Structured Output | docs/structured-output.md |
- Go 1.24.0 or higher
```shell
git clone https://github.com/MerrukTechnology/OpenCode-Native.git
cd OpenCode-Native
make build
```

Build and run OpenCode in a container:
```shell
# Build the Docker image (cross-compiles a Linux binary automatically)
make docker-build
```

All CLI arguments are passed through directly:
```shell
# Non-interactive prompt
docker run opencode:latest -p "Explain context in Go" -f json -q

# Run a flow
docker run opencode:latest -F my-flow -A key1=value1 -A key2=value2

# With timeout
docker run opencode:latest -p "Refactor this module" -t 5m -q
```

Mount your configuration and workspace as volumes:
```shell
docker run -ti --rm \
  -e LOCAL_ENDPOINT_API_KEY="${LOCAL_ENDPOINT_API_KEY}" \
  -e LOCAL_ENDPOINT="${LOCAL_ENDPOINT}" \
  -e VERTEXAI_PROJECT="${VERTEXAI_PROJECT}" \
  -e VERTEXAI_LOCATION="${VERTEXAI_LOCATION:-global}" \
  -e VERTEXAI_LOCATION_COUNT="${VERTEXAI_LOCATION_COUNT:-us-east5}" \
  -v ~/.opencode.json:/workspace/.opencode.json \
  -v $(pwd):/workspace \
  --network opencode_default \
  opencode:latest # you can pass args here, e.g. -p "Analyze this codebase"
```

To run non-interactively (you can pass [[#Command-Line Flags]]):
```shell
docker run --rm \
  -v ~/.opencode.json:/workspace/.opencode.json \
  -v $(pwd):/workspace \
  --network opencode_default \
  opencode:latest -p "Analyze this codebase" -q
```
The container uses /workspace as its working directory. Mount .opencode.json there to provide configuration — it is not baked into the image.
```shell
make release SCOPE=patch
# or
make release SCOPE=minor
```

- @isaacphi — mcp-language-server, foundation for the LSP client
- @adamdottv — Design direction and UI/UX architecture
- @kujtimiihoxha — Original OpenCode implementation
MIT — see LICENSE.
- Fork the repository
- Create a feature branch
- Commit your changes
- Open a Pull Request