opencode-as-mcp is a CLI MCP server that exposes a persistent OpenCode runtime through MCP tools while keeping Python/bash execution inside per-session containers.
The host process runs the MCP server, SQLite state, and workers. Each MCP session gets its own long-lived Docker container with mounted `/workspace` and `/session-meta` directories. Inside that container, the worker starts `opencode serve` and drives it through the OpenCode HTTP API.
The important outcomes are:
- persistent workspaces per session
- containerized execution for analysis/code tasks
- direct result fields in MCP run status
- logs and artifacts available for debugging
The current implementation has been validated end-to-end against the configured MCP server with:
- Docker runtime enabled
- `opencode serve` running inside the session container
- an OpenAI-compatible provider endpoint
- default model `ollama/gpt-oss:20b`
The main result now comes back in `get_status(...).run.final_output`.
There are two planes:
- Control plane: the worker talks to `opencode serve` over HTTP.
- Execution plane: the session container remains the execution environment and owns the mounted workspace.
That means this project does not run Python/bash on the MCP host. It orchestrates work, but the actual session runtime stays containerized.
Code layout:

- `src/opencode_as_mcp/server.py`: FastMCP entrypoint and tool definitions.
- `src/opencode_as_mcp/service.py`: session/run orchestration and worker spawning.
- `src/opencode_as_mcp/runtime.py`: Docker runtime and local test runtime.
- `src/opencode_as_mcp/opencode_client.py`: minimal OpenCode HTTP client.
- `src/opencode_as_mcp/worker.py`: async worker that ensures the OpenCode server exists, creates/reuses an OpenCode session, sends prompts, and persists results.
- `src/opencode_as_mcp/store.py`: SQLite persistence for sessions and runs.
- `src/opencode_as_mcp/models.py`: session/run models and status constants.
- `tests/test_core_flow.py`: core flow tests using a fake OpenCode server.
- `tests/fake_opencode_server.py`: local fake server for tests.
- `Dockerfile.runtime`: explicit Docker image for the execution environment.
The server exposes these tools:
- `create_or_reuse_session(name?, reuse_key?, image?)`
- `submit_task(session_id, task)`
- `get_status(session_id, run_id?)`
- `get_run_logs(run_id, tail_lines=200)`
- `list_artifacts(session_id, run_id?)`
- `read_artifact(path, max_bytes=50000)`
- `cancel_run(run_id)`
- `terminate_session(session_id)`
`submit_task` is asynchronous and returns immediately with a run id.
Poll `get_status` until the run reaches `completed`, `failed`, or `canceled`.
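As a concrete illustration of this submit/poll pattern, here is a minimal sketch. It assumes a generic `call_tool(name, args)` helper from whatever MCP client you use that returns each tool result as a plain dict, and that `get_status` exposes the run's state under `run.status` alongside the fields described below.

```python
import time

TERMINAL_STATES = {"completed", "failed", "canceled"}

def wait_for_run(call_tool, session_id: str, run_id: str, interval: float = 2.0) -> dict:
    """Poll get_status until the run reaches a terminal state, then return the run object."""
    while True:
        status = call_tool("get_status", {"session_id": session_id, "run_id": run_id})
        run = status["run"]  # assumed payload shape; see the result fields below
        if run["status"] in TERMINAL_STATES:
            return run
        time.sleep(interval)
```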
For completed runs:
- `run.final_output`: final assistant text response.
- `run.structured_output`: JSON-encoded structured output when used.
- `run.result_summary`: compact summary derived from `run.log`.
Run artifacts typically include:
- `assistant-response.json`
- `assistant-response.md`
- `context.json`
- `run.log`
Runtime defaults:
- runtime mode: `docker`
- runtime image: `opencode-as-mcp-runtime:latest`
- OpenCode command: `opencode`
- provider: `ollama`
- model: `ollama/gpt-oss:20b`
- small model: `ollama/gpt-oss:20b`
- provider base URL: `http://host.docker.internal:11434/v1`
The runtime image is intentionally separate from the Python package.
Host machine requirements:
- Python 3.12+
- Docker
- network access from the runtime to your selected model provider
- valid credentials for the selected provider
The execution image is built from `Dockerfile.runtime`.
Build the runtime image explicitly with:

```bash
docker build -f Dockerfile.runtime -t opencode-as-mcp-runtime:latest .
```

If you want to publish the image and avoid local builds, you can push it to a registry and set `MCP_RUNTIME_IMAGE` accordingly.
You can install the MCP server as a CLI tool in several ways.
Using uv tool:
```bash
uv tool install .
```

Using pipx:

```bash
pipx install .
```

Using pip:

```bash
pip install .
```

Using uvx from the source checkout:

```bash
uvx --from . opencode-as-mcp
```

If the package is published to an index:

```bash
uvx opencode-as-mcp
```

Note that package installation and execution-image availability are separate concerns. The CLI package does not bundle the Docker runtime image.
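To verify an install independently of the Docker image, you can connect over stdio and list the tools. This sketch uses the official `mcp` Python SDK and assumes `opencode-as-mcp` is on your PATH.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the installed CLI as a stdio MCP server and list its tools.
    params = StdioServerParameters(command="opencode-as-mcp")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # expect the tool names listed above

asyncio.run(main())
```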
For stdio MCP:
```bash
opencode-as-mcp
```

The server reads its configuration from environment variables.
Core settings:
```bash
export MCP_RUNTIME_MODE=docker
export MCP_STATE_DIR="$PWD/.opencode_as_mcp"
export MCP_RUNTIME_IMAGE=opencode-as-mcp-runtime:latest
export MCP_OPENCODE_COMMAND=opencode
export MCP_OPENCODE_PROVIDER=ollama
export MCP_OPENCODE_MODEL=ollama/gpt-oss:20b
export MCP_OPENCODE_SMALL_MODEL=ollama/gpt-oss:20b
export MCP_OPENCODE_BASE_URL=http://host.docker.internal:11434/v1
export MCP_OPENCODE_API_KEY=ollama
```

Supported providers:
`ollama`, `openrouter`, `gemini`, `openai`, `mistral`, `deepseek`, `qwen`
Provider notes:
- Models must stay in `provider/model` form and the prefix must match `MCP_OPENCODE_PROVIDER`.
- `MCP_OPENCODE_BASE_URL` and `MCP_OPENCODE_API_KEY` are the generic settings used for all providers.
- `MCP_OLLAMA_BASE_URL` and `MCP_OLLAMA_API_KEY` are still accepted as backward-compatible aliases when `MCP_OPENCODE_PROVIDER=ollama`.
- The default Qwen endpoint is the international DashScope OpenAI-compatible URL. Override `MCP_OPENCODE_BASE_URL` if you need a different region.
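As an illustration of the `provider/model` rule, a check along these lines catches misconfigured models early. `validate_model` is hypothetical and is not the project's actual validation code.

```python
def validate_model(provider: str, model: str) -> None:
    # Models must keep the provider/model form, and the prefix
    # must match MCP_OPENCODE_PROVIDER.
    prefix, _, name = model.partition("/")
    if not name or prefix != provider:
        raise ValueError(f"model {model!r} must look like '{provider}/<model-name>'")

validate_model("ollama", "ollama/gpt-oss:20b")            # ok
validate_model("openrouter", "openrouter/openai/gpt-4o")  # ok: prefix matches
# validate_model("openai", "gpt-4.1") would raise: no provider prefix
```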
Example provider configurations:
```bash
# Ollama
export MCP_OPENCODE_PROVIDER=ollama
export MCP_OPENCODE_MODEL=ollama/gpt-oss:20b
export MCP_OPENCODE_SMALL_MODEL=ollama/gpt-oss:20b
export MCP_OPENCODE_BASE_URL=http://host.docker.internal:11434/v1
export MCP_OPENCODE_API_KEY=ollama

# OpenRouter
export MCP_OPENCODE_PROVIDER=openrouter
export MCP_OPENCODE_MODEL=openrouter/openai/gpt-4o
export MCP_OPENCODE_SMALL_MODEL=openrouter/openai/gpt-4o-mini
export MCP_OPENCODE_BASE_URL=https://openrouter.ai/api/v1
export MCP_OPENCODE_API_KEY="$OPENROUTER_API_KEY"

# Gemini
export MCP_OPENCODE_PROVIDER=gemini
export MCP_OPENCODE_MODEL=gemini/gemini-2.5-pro
export MCP_OPENCODE_SMALL_MODEL=gemini/gemini-2.5-flash
export MCP_OPENCODE_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai
export MCP_OPENCODE_API_KEY="$GEMINI_API_KEY"

# OpenAI
export MCP_OPENCODE_PROVIDER=openai
export MCP_OPENCODE_MODEL=openai/gpt-4.1
export MCP_OPENCODE_SMALL_MODEL=openai/gpt-4.1-mini
export MCP_OPENCODE_BASE_URL=https://api.openai.com/v1
export MCP_OPENCODE_API_KEY="$OPENAI_API_KEY"

# Mistral
export MCP_OPENCODE_PROVIDER=mistral
export MCP_OPENCODE_MODEL=mistral/mistral-large-latest
export MCP_OPENCODE_SMALL_MODEL=mistral/mistral-small-latest
export MCP_OPENCODE_BASE_URL=https://api.mistral.ai/v1
export MCP_OPENCODE_API_KEY="$MISTRAL_API_KEY"

# DeepSeek
export MCP_OPENCODE_PROVIDER=deepseek
export MCP_OPENCODE_MODEL=deepseek/deepseek-chat
export MCP_OPENCODE_SMALL_MODEL=deepseek/deepseek-chat
export MCP_OPENCODE_BASE_URL=https://api.deepseek.com/v1
export MCP_OPENCODE_API_KEY="$DEEPSEEK_API_KEY"

# Qwen
export MCP_OPENCODE_PROVIDER=qwen
export MCP_OPENCODE_MODEL=qwen/qwen-plus
export MCP_OPENCODE_SMALL_MODEL=qwen/qwen-flash
export MCP_OPENCODE_BASE_URL=https://dashscope-intl.aliyuncs.com/compatible-mode/v1
export MCP_OPENCODE_API_KEY="$DASHSCOPE_API_KEY"
```

Optional tuning:
```bash
export MCP_OPENCODE_SERVER_START_TIMEOUT=20
export MCP_OPENCODE_REQUEST_TIMEOUT=300
export MCP_WORKER_POLL_INTERVAL=0.2
```

Optional runtime extension bundle:
```bash
export MCP_RUNTIME_EXTRAS_DIR="/absolute/path/to/runtime-extras"
```

When set, that directory may contain:

- `requirements.txt`: extra Python packages installed into the session runtime before `opencode serve` starts.
- `skills/`: extra skills copied into `/workspace/skills` for the session.
Bootstrap is applied once per session and recorded under `/session-meta`.
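For instance, a minimal extras bundle could be assembled like this. The package names and skill file are placeholders; only the `requirements.txt` and `skills/` layout comes from the description above.

```python
from pathlib import Path

# Assemble a hypothetical runtime-extras bundle with the layout described above.
extras = Path("/absolute/path/to/runtime-extras")
(extras / "skills").mkdir(parents=True, exist_ok=True)
(extras / "requirements.txt").write_text("pandas\npyarrow\n")  # placeholder packages
(extras / "skills" / "csv-profiling.md").write_text("# CSV profiling skill\n")  # placeholder skill
```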
Example stdio MCP entry:
```json
{
  "command": "opencode-as-mcp",
  "env": {
    "MCP_RUNTIME_MODE": "docker",
    "MCP_RUNTIME_IMAGE": "opencode-as-mcp-runtime:latest",
    "MCP_OPENCODE_COMMAND": "opencode",
    "MCP_OPENCODE_PROVIDER": "openrouter",
    "MCP_OPENCODE_MODEL": "openrouter/openai/gpt-4o",
    "MCP_OPENCODE_SMALL_MODEL": "openrouter/openai/gpt-4o-mini",
    "MCP_OPENCODE_BASE_URL": "https://openrouter.ai/api/v1",
    "MCP_OPENCODE_API_KEY": "your-api-key"
  }
}
```

If your agent runtime supports path-based package launching with uvx, the command can instead be:
```json
{
  "command": "uvx",
  "args": ["--from", "/absolute/path/to/opencode-as-mcp", "opencode-as-mcp"]
}
```

Typical workflow:

- Create a session with `create_or_reuse_session`:
```json
{
  "name": "analysis-session",
  "reuse_key": "customer-dataset-a"
}
```
- Place files into the session workspace.
- Submit a task with `submit_task`:
```json
{
  "session_id": "<session_id>",
  "task": "Profile the CSV files in /workspace and summarize the main issues."
}
```

- Poll with `get_status`:
```json
{
  "session_id": "<session_id>",
  "run_id": "<run_id>"
}
```

- Read the final result from `run.final_output`.
- Use logs/artifacts when needed: `get_run_logs`, `list_artifacts`, `read_artifact`.
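The same walkthrough as a runnable sketch, here using the official `mcp` Python SDK over stdio. The tool names and `run.final_output` come from this document; the response key names (`session_id`, `run_id`, `run.status`) and the single-JSON-text-block payload shape are assumptions, so adjust them to the actual responses.

```python
import asyncio
import json
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def call(session: ClientSession, name: str, args: dict) -> dict:
    # Assumes each tool returns a single JSON text content block.
    result = await session.call_tool(name, args)
    return json.loads(result.content[0].text)


async def main() -> None:
    params = StdioServerParameters(command="opencode-as-mcp", env=dict(os.environ))
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            created = await call(session, "create_or_reuse_session",
                                 {"name": "analysis-session", "reuse_key": "customer-dataset-a"})
            session_id = created["session_id"]  # assumed response key

            submitted = await call(session, "submit_task",
                                   {"session_id": session_id,
                                    "task": "Profile the CSV files in /workspace "
                                            "and summarize the main issues."})
            run_id = submitted["run_id"]  # assumed response key

            # Poll until the run reaches a terminal state.
            while True:
                status = await call(session, "get_status",
                                    {"session_id": session_id, "run_id": run_id})
                run = status["run"]
                if run["status"] in {"completed", "failed", "canceled"}:
                    break
                await asyncio.sleep(2)

            print(run["final_output"])

            # Inspect artifacts such as run.log when debugging.
            artifacts = await call(session, "list_artifacts",
                                   {"session_id": session_id, "run_id": run_id})
            print(artifacts)


asyncio.run(main())
```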
Install development dependencies:
```bash
UV_CACHE_DIR=.uv-cache uv sync
```

Run the core test suite:

```bash
UV_CACHE_DIR=.uv-cache uv run python -m unittest discover -s tests
```

Build a wheel:

```bash
UV_CACHE_DIR=.uv-cache uv build
```

This repo has been validated in the real Docker/MCP path. Two caveats still apply when verifying inside restricted sandboxes:
- local fake-server tests may fail if localhost binds are blocked
- package installation with dependency resolution may fail without network access to fetch `fastmcp`
The project itself now builds cleanly as a wheel with standard packaging metadata, and the CLI entrypoint installs correctly.