opencode-as-mcp

opencode-as-mcp is a CLI MCP server that exposes a persistent OpenCode runtime through MCP tools while keeping Python/bash execution inside per-session containers.

The host process runs the MCP server, SQLite state, and workers. Each MCP session gets its own long-lived Docker container with a mounted /workspace and /session-meta. Inside that container, the worker starts opencode serve and drives it through the OpenCode HTTP API.

The important outcomes are:

  • persistent workspaces per session
  • containerized execution for analysis/code tasks
  • direct result fields in MCP run status
  • logs and artifacts available for debugging

Current status

The current implementation has been validated end-to-end against the configured MCP server with:

  • Docker runtime enabled
  • opencode serve running inside the session container
  • an OpenAI-compatible provider endpoint
  • default model ollama/gpt-oss:20b

The main result now comes back in get_status(...).run.final_output.

Architecture

There are two planes:

  1. Control plane: the worker talks to opencode serve over HTTP.

  2. Execution plane: the session container remains the execution environment and owns the mounted workspace.

That means this project does not run Python/bash on the MCP host. It orchestrates work, but the actual session runtime stays containerized.

Repository layout

  • src/opencode_as_mcp/server.py: FastMCP entrypoint and tool definitions.
  • src/opencode_as_mcp/service.py: session/run orchestration and worker spawning.
  • src/opencode_as_mcp/runtime.py: Docker runtime and local test runtime.
  • src/opencode_as_mcp/opencode_client.py: minimal OpenCode HTTP client.
  • src/opencode_as_mcp/worker.py: async worker that ensures the OpenCode server is running, creates or reuses an OpenCode session, sends prompts, and persists results.
  • src/opencode_as_mcp/store.py: SQLite persistence for sessions and runs.
  • src/opencode_as_mcp/models.py: session/run models and status constants.
  • tests/test_core_flow.py: core flow tests using a fake OpenCode server.
  • tests/fake_opencode_server.py: local fake server for tests.
  • Dockerfile.runtime: Dockerfile for the execution-environment image.

MCP tool surface

The server exposes these tools:

  • create_or_reuse_session(name?, reuse_key?, image?)
  • submit_task(session_id, task)
  • get_status(session_id, run_id?)
  • get_run_logs(run_id, tail_lines=200)
  • list_artifacts(session_id, run_id?)
  • read_artifact(path, max_bytes=50000)
  • cancel_run(run_id)
  • terminate_session(session_id)

Result model

submit_task is asynchronous and returns immediately with a run id.

Poll get_status until the run reaches completed, failed, or canceled.

For completed runs:

  • run.final_output: final assistant text response.
  • run.structured_output: JSON-encoded structured output when used.
  • run.result_summary: compact summary derived from run.log.
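
The submit-then-poll loop can be sketched like this (continuing the client session from the earlier sketch; the exact payload shape is an assumption, here taken to be a JSON text part whose fields follow the result model above):

# Illustrative polling loop: submit a task, then poll get_status until the
# run reaches a terminal status. Assumes `session` is an initialized MCP
# ClientSession and that tool results carry a JSON text payload matching
# the fields above.
import asyncio
import json

async def run_task(session, session_id: str, task: str) -> str:
    submitted = await session.call_tool(
        "submit_task", arguments={"session_id": session_id, "task": task}
    )
    run_id = json.loads(submitted.content[0].text)["run_id"]  # assumed payload shape
    while True:
        status = await session.call_tool(
            "get_status", arguments={"session_id": session_id, "run_id": run_id}
        )
        run = json.loads(status.content[0].text)["run"]
        if run["status"] in ("completed", "failed", "canceled"):
            break
        await asyncio.sleep(1.0)
    if run["status"] != "completed":
        raise RuntimeError(f"run {run_id} ended as {run['status']}")
    return run["final_output"]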

Run artifacts typically include:

  • assistant-response.json
  • assistant-response.md
  • context.json
  • run.log
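
For debugging a finished run, the artifact tools chain the same way. A hedged sketch follows: "run.log" is just the typical artifact name from the list above, and real paths should be taken from the list_artifacts response.

# Illustrative artifact inspection; assumes `session` is an initialized
# MCP ClientSession (see the earlier sketches) and that content parts are
# text payloads.
async def inspect_run(session, session_id: str, run_id: str) -> None:
    listing = await session.call_tool(
        "list_artifacts", arguments={"session_id": session_id, "run_id": run_id}
    )
    for item in listing.content:
        print(item.text)
    log = await session.call_tool(
        "read_artifact", arguments={"path": "run.log", "max_bytes": 50000}
    )
    print(log.content[0].text)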

Defaults

Runtime defaults:

  • runtime mode: docker
  • runtime image: opencode-as-mcp-runtime:latest
  • OpenCode command: opencode
  • provider: ollama
  • model: ollama/gpt-oss:20b
  • small model: ollama/gpt-oss:20b
  • provider base URL: http://host.docker.internal:11434/v1

The runtime image is intentionally separate from the Python package.

Requirements

Host machine requirements:

  • Python 3.12+
  • Docker
  • network access from the runtime to your selected model provider
  • valid credentials for the selected provider

The execution image is built from Dockerfile.runtime.

Build the execution image

Build the runtime image explicitly with:

docker build -f Dockerfile.runtime -t opencode-as-mcp-runtime:latest .

If you want to publish the image and avoid local builds, you can push it to a registry and set MCP_RUNTIME_IMAGE accordingly.

Install the CLI package

You can install the MCP server as a CLI tool in several ways.

Using uv tool:

uv tool install .

Using pipx:

pipx install .

Using pip:

pip install .

Using uvx from the source checkout:

uvx --from . opencode-as-mcp

If the package is published to an index:

uvx opencode-as-mcp

Note that package installation and execution-image availability are separate concerns. The CLI package does not bundle the Docker runtime image.

Run the MCP server

For stdio MCP:

opencode-as-mcp

The server reads its configuration from environment variables.

Core settings:

export MCP_RUNTIME_MODE=docker
export MCP_STATE_DIR="$PWD/.opencode_as_mcp"
export MCP_RUNTIME_IMAGE=opencode-as-mcp-runtime:latest
export MCP_OPENCODE_COMMAND=opencode
export MCP_OPENCODE_PROVIDER=ollama
export MCP_OPENCODE_MODEL=ollama/gpt-oss:20b
export MCP_OPENCODE_SMALL_MODEL=ollama/gpt-oss:20b
export MCP_OPENCODE_BASE_URL=http://host.docker.internal:11434/v1
export MCP_OPENCODE_API_KEY=ollama

Supported providers:

  • ollama
  • openrouter
  • gemini
  • openai
  • mistral
  • deepseek
  • qwen

Provider notes:

  • Models must stay in provider/model form and the prefix must match MCP_OPENCODE_PROVIDER.
  • MCP_OPENCODE_BASE_URL and MCP_OPENCODE_API_KEY are the generic settings used for all providers.
  • MCP_OLLAMA_BASE_URL and MCP_OLLAMA_API_KEY are still accepted as backward-compatible aliases when MCP_OPENCODE_PROVIDER=ollama.
  • The default Qwen endpoint is the international DashScope OpenAI-compatible URL. Override MCP_OPENCODE_BASE_URL if you need a different region.
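
The prefix rule is mechanical; a hypothetical check (not part of the package API) makes it concrete:

# Hypothetical helper, not part of the package: illustrates the rule that
# MCP_OPENCODE_MODEL must be "provider/model" and the prefix must equal
# MCP_OPENCODE_PROVIDER.
def check_model(provider: str, model: str) -> None:
    prefix, _, rest = model.partition("/")
    if not rest:
        raise ValueError(f"model must be in provider/model form, got {model!r}")
    if prefix != provider:
        raise ValueError(f"model prefix {prefix!r} does not match provider {provider!r}")

check_model("ollama", "ollama/gpt-oss:20b")             # accepted
check_model("openrouter", "openrouter/openai/gpt-4o")   # accepted: only the prefix is checked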

Example provider configurations:

# Ollama
export MCP_OPENCODE_PROVIDER=ollama
export MCP_OPENCODE_MODEL=ollama/gpt-oss:20b
export MCP_OPENCODE_SMALL_MODEL=ollama/gpt-oss:20b
export MCP_OPENCODE_BASE_URL=http://host.docker.internal:11434/v1
export MCP_OPENCODE_API_KEY=ollama

# OpenRouter
export MCP_OPENCODE_PROVIDER=openrouter
export MCP_OPENCODE_MODEL=openrouter/openai/gpt-4o
export MCP_OPENCODE_SMALL_MODEL=openrouter/openai/gpt-4o-mini
export MCP_OPENCODE_BASE_URL=https://openrouter.ai/api/v1
export MCP_OPENCODE_API_KEY="$OPENROUTER_API_KEY"

# Gemini
export MCP_OPENCODE_PROVIDER=gemini
export MCP_OPENCODE_MODEL=gemini/gemini-2.5-pro
export MCP_OPENCODE_SMALL_MODEL=gemini/gemini-2.5-flash
export MCP_OPENCODE_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai
export MCP_OPENCODE_API_KEY="$GEMINI_API_KEY"

# OpenAI
export MCP_OPENCODE_PROVIDER=openai
export MCP_OPENCODE_MODEL=openai/gpt-4.1
export MCP_OPENCODE_SMALL_MODEL=openai/gpt-4.1-mini
export MCP_OPENCODE_BASE_URL=https://api.openai.com/v1
export MCP_OPENCODE_API_KEY="$OPENAI_API_KEY"

# Mistral
export MCP_OPENCODE_PROVIDER=mistral
export MCP_OPENCODE_MODEL=mistral/mistral-large-latest
export MCP_OPENCODE_SMALL_MODEL=mistral/mistral-small-latest
export MCP_OPENCODE_BASE_URL=https://api.mistral.ai/v1
export MCP_OPENCODE_API_KEY="$MISTRAL_API_KEY"

# DeepSeek
export MCP_OPENCODE_PROVIDER=deepseek
export MCP_OPENCODE_MODEL=deepseek/deepseek-chat
export MCP_OPENCODE_SMALL_MODEL=deepseek/deepseek-chat
export MCP_OPENCODE_BASE_URL=https://api.deepseek.com/v1
export MCP_OPENCODE_API_KEY="$DEEPSEEK_API_KEY"

# Qwen
export MCP_OPENCODE_PROVIDER=qwen
export MCP_OPENCODE_MODEL=qwen/qwen-plus
export MCP_OPENCODE_SMALL_MODEL=qwen/qwen-flash
export MCP_OPENCODE_BASE_URL=https://dashscope-intl.aliyuncs.com/compatible-mode/v1
export MCP_OPENCODE_API_KEY="$DASHSCOPE_API_KEY"

Optional tuning:

export MCP_OPENCODE_SERVER_START_TIMEOUT=20
export MCP_OPENCODE_REQUEST_TIMEOUT=300
export MCP_WORKER_POLL_INTERVAL=0.2

Optional runtime extension bundle:

export MCP_RUNTIME_EXTRAS_DIR="/absolute/path/to/runtime-extras"

When set, that directory may contain:

  • requirements.txt: extra Python packages installed into the session runtime before opencode serve starts.
  • skills/: extra skills copied into /workspace/skills for the session.

Bootstrap is applied once per session and recorded under /session-meta.
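
As an illustration, an extras bundle could be laid out like this (the package names and skill directory are hypothetical):

runtime-extras/
  requirements.txt      # e.g. pandas, pyarrow
  skills/
    csv-profiling/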

Example agent runtime configuration

Example stdio MCP entry:

{
  "command": "opencode-as-mcp",
  "env": {
    "MCP_RUNTIME_MODE": "docker",
    "MCP_RUNTIME_IMAGE": "opencode-as-mcp-runtime:latest",
    "MCP_OPENCODE_COMMAND": "opencode",
    "MCP_OPENCODE_PROVIDER": "openrouter",
    "MCP_OPENCODE_MODEL": "openrouter/openai/gpt-4o",
    "MCP_OPENCODE_SMALL_MODEL": "openrouter/openai/gpt-4o-mini",
    "MCP_OPENCODE_BASE_URL": "https://openrouter.ai/api/v1",
    "MCP_OPENCODE_API_KEY": "your-api-key"
  }
}

If your agent runtime supports path-based package launching with uvx, the command can instead be:

{
  "command": "uvx",
  "args": ["--from", "/absolute/path/to/opencode-as-mcp", "opencode-as-mcp"]
}

Example flow

  1. Create a session with create_or_reuse_session:

{
  "name": "analysis-session",
  "reuse_key": "customer-dataset-a"
}

  2. Place files into the session workspace.

  3. Submit a task with submit_task:

{
  "session_id": "<session_id>",
  "task": "Profile the CSV files in /workspace and summarize the main issues."
}

  4. Poll get_status:

{
  "session_id": "<session_id>",
  "run_id": "<run_id>"
}

  5. Read the final result from:

  • run.final_output

  6. Use logs/artifacts when needed:

  • get_run_logs
  • list_artifacts
  • read_artifact

Local development

Install development dependencies:

UV_CACHE_DIR=.uv-cache uv sync

Run the core test suite:

UV_CACHE_DIR=.uv-cache uv run python -m unittest discover -s tests

Build a wheel:

UV_CACHE_DIR=.uv-cache uv build

Notes on verification

This repo has been validated on the real Docker/MCP path. Two caveats still apply when verifying inside restricted sandboxes:

  • local fake-server tests may fail if localhost binds are blocked
  • package installation with dependency resolution may fail without network access to fetch fastmcp

The project itself now builds cleanly as a wheel with standard packaging metadata, and the CLI entrypoint installs correctly.
