OpenCode is a production-grade agent runtime that supports Anthropic, OpenAI, GitHub Copilot, and more. It already handles multi-step planning, MCP tool execution, and streaming across any provider you configure.
a2a-opencode exposes it as a standalone, interoperable agent via the A2A protocol. Drop a JSON config file in, get a fully spec-compliant A2A server out. Any orchestrator that speaks A2A can discover and call it — swap the LLM provider in one config line without changing orchestration code.
The pattern: MCP is the vertical rail — how agents access tools. A2A is the horizontal rail — how agents talk to each other. This library adds the horizontal rail to OpenCode, making it vendor-neutral by default.
Features:
- Full A2A v0.3.0 protocol — Agent Card, JSON-RPC, REST, SSE streaming
- Powered by OpenCode with support for any LLM provider (Anthropic, OpenAI, GitHub Copilot, and more)
- MCP tool server support — HTTP, SSE, stdio, and OAuth transports
- Multi-turn conversations via persistent OpenCode sessions
- SSE event streaming with automatic reconnect and polling fallback
- Auto-approval of tool permissions with configurable overrides
- JSON config file with layered overrides (JSON → env vars → CLI flags)
- Docker-ready with corporate proxy CA support
- TypeScript source with full type declarations
- Postman collection included for API exploration
Direct provider integrations work — but they create vendor lock-in at the integration layer. Switching from Claude to GPT-4.1 means rewriting SDK calls. Adding a second agent type means a second bespoke integration.
With the A2A protocol surface:
- Your orchestrator speaks one interface regardless of which provider is behind it
- Switch providers by changing one line in `config.json` — orchestration stays the same
- Run multiple specialized agents (different providers, different system prompts) behind a single protocol interface
- Any A2A-compatible system can discover and call your agent via Agent Card
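For example, moving an agent from Claude to an OpenAI model is a one-line config change (the `openai/gpt-4.1` model ID here is illustrative — use whichever model IDs your OpenCode install exposes):

```diff
 "opencode": {
-  "model": "anthropic/claude-sonnet-4-20250514",
+  "model": "openai/gpt-4.1",
   "baseUrl": "http://localhost:4096"
 }
```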
This library complements — not replaces — frameworks like LangGraph, Google ADK, Microsoft Agent Framework, and CrewAI. Use those frameworks for orchestration, state, and memory control. Use a2a-opencode as the execution node they call.
```
LangGraph / ADK / Microsoft Agent Framework
     (state, memory, flow control)
                  ↓
             A2A Protocol
                  ↓
            a2a-opencode
   (OpenCode + any LLM provider)
```
```shell
# Install globally
npm install -g a2a-opencode

# Start OpenCode server (required — runs on port 4096 by default)
opencode serve

# In a separate terminal, run the bundled example agent
a2a-opencode --config agents/example/config.json
```

Or run without installing:

```shell
npx a2a-opencode --config agents/example/config.json
```

Prerequisites: OpenCode installed and running (`opencode serve`). The wrapper connects to it on `http://localhost:4096` by default.
```
A2A Client (Orchestrator / Inspector / curl)
  │
  │ JSON-RPC or REST over HTTP
  ▼
Express Server (a2a-opencode)
  │  ├─ /.well-known/agent-card.json → Agent Card
  │  ├─ /a2a/jsonrpc                 → JSON-RPC (tasks/send, tasks/sendSubscribe, …)
  │  ├─ /a2a/rest                    → REST handler
  │  ├─ /context                     → Read context.md
  │  ├─ /context/build               → Trigger context discovery
  │  └─ /health                      → Health check
  │
  │ @a2a-js/sdk DefaultRequestHandler
  ▼
OpenCodeExecutor (AgentExecutor)
  │  ├─ SessionManager     — contextId → OpenCode session
  │  ├─ EventStreamManager — SSE polling + automatic reconnect
  │  ├─ PermissionHandler  — auto-approves tool calls
  │  └─ EventPublisher     — OpenCode events → A2A events
  │
  │ @opencode-ai/sdk (HTTP + SSE)
  ▼
OpenCode Server (opencode serve)
  │  ├─ LLM inference (Anthropic, OpenAI, GitHub Copilot, …)
  │  └─ MCP tool execution
  │
  │ MCP Protocol (HTTP / SSE / stdio / OAuth)
  ▼
MCP Servers (filesystem, custom tools, …)
```
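To make the EventPublisher's role concrete, here is an illustrative sketch of translating OpenCode-style events into A2A task events. The event names and shapes below are invented for illustration — they are not the actual types of either SDK:

```typescript
// Illustrative mapping of OpenCode-style events onto A2A task events.
// These type shapes are hypothetical, chosen only to show the translation idea.
type OpenCodeEvent =
  | { type: "message.part"; text: string }   // a chunk of model output
  | { type: "session.idle" };                // the session finished its turn

type A2AEvent =
  | { kind: "artifact-update"; text: string }
  | { kind: "status-update"; state: "completed" };

function toA2A(event: OpenCodeEvent): A2AEvent {
  switch (event.type) {
    case "message.part":
      // Model output chunks become A2A artifact updates.
      return { kind: "artifact-update", text: event.text };
    case "session.idle":
      // An idle session marks the A2A task as completed.
      return { kind: "status-update", state: "completed" };
  }
}
```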
```shell
# npm
npm install a2a-opencode

# yarn
yarn add a2a-opencode

# pnpm
pnpm add a2a-opencode
```

```shell
a2a-opencode --config agents/example/config.json
```

Full flag reference:
```
a2a-opencode [options]

--config, --agent-json <path>  JSON agent config file
--port <number>                Server port (default: 3000)
--hostname <addr>              Bind address (default: 0.0.0.0)
--advertise-host <host>        Hostname for agent card URLs (default: localhost)
--opencode-url <url>           OpenCode server URL (default: http://localhost:4096)
--model <provider/model>       LLM model (default: provider default)
                               e.g. anthropic/claude-sonnet-4-20250514
--agent <name>                 OpenCode agent preset
--directory <path>             Project directory for OpenCode
--agent-name <name>            Agent display name
--agent-description <desc>     Agent description
--auto-approve                 Auto-approve all tool permissions (default: on)
--no-auto-approve              Require manual permission approval
--auto-answer                  Auto-answer questions (default: on)
--no-auto-answer               Do not auto-answer questions
--stream-artifacts             Stream chunks in real time (A2A spec mode)
--no-stream-artifacts          Buffer artifacts — Inspector-compatible (default)
--log-level <level>            debug | info | warn | error (default: info)
--help                         Show this help
--version                      Show version
```
```typescript
import { createA2AServer, resolveConfig } from 'a2a-opencode';

const config = await resolveConfig({ configPath: 'agents/example/config.json' });
const { server, url } = await createA2AServer(config);
console.log(`Agent running at ${url}`);
```

Config is resolved in priority order: defaults ← JSON file ← env vars ← CLI flags.
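The layering can be pictured as successive shallow merges, later layers winning. This is a conceptual sketch, not the library's actual resolver:

```typescript
// Conceptual sketch of layered config resolution: each layer overrides the one before it.
type Layer = Record<string, unknown>;

function resolveLayers(...layers: Layer[]): Layer {
  // Spread later layers over earlier ones, so CLI flags win over env vars,
  // env vars over the JSON file, and the JSON file over defaults.
  return layers.reduce((acc, layer) => ({ ...acc, ...layer }), {});
}

const defaults = { port: 3000, hostname: "0.0.0.0" };
const fromJson = { port: 4000 };             // from config.json
const fromEnv  = { hostname: "127.0.0.1" };  // e.g. HOSTNAME=127.0.0.1
const fromCli  = { port: 5000 };             // e.g. --port 5000

const resolved = resolveLayers(defaults, fromJson, fromEnv, fromCli);
// resolved.port === 5000, resolved.hostname === "127.0.0.1"
```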
Create a `config.json` (see `agents/example/config.json` for the fully annotated template):
```json
{
  "agentCard": {
    "name": "My Agent",
    "description": "What my agent does",
    "version": "1.0.0",
    "protocolVersion": "0.3.0",
    "streaming": true,
    "skills": [
      {
        "id": "my-skill",
        "name": "My Skill",
        "description": "Describe the skill",
        "tags": ["example"]
      }
    ]
  },
  "server": {
    "port": 3000,
    "hostname": "0.0.0.0",
    "advertiseHost": "localhost"
  },
  "opencode": {
    "baseUrl": "http://localhost:4096",
    "model": "anthropic/claude-sonnet-4-20250514",
    "systemPrompt": "You are a specialist agent that...",
    "contextFile": "context.md",
    "autoApprove": true,
    "autoAnswer": true
  },
  "mcp": {
    "my-tools": {
      "type": "http",
      "url": "http://localhost:8002/mcp"
    }
  }
}
```

| Variable | Description | Default |
|---|---|---|
| `PORT` | Server port | `3000` |
| `HOSTNAME` | Bind address | `0.0.0.0` |
| `ADVERTISE_HOST` | Hostname in agent card URLs | `localhost` |
| `OPENCODE_URL` | OpenCode server URL | `http://localhost:4096` |
| `OPENCODE_MODEL` | LLM model (`provider/model`) | (OpenCode default) |
| `WORKSPACE_DIR` | Project directory for OpenCode | (empty) |
| `AUTO_APPROVE` | Auto-approve tool permissions | `true` |
| `AUTO_ANSWER` | Auto-answer questions | `true` |
| `STREAM_ARTIFACTS` | Stream chunks in real time | `false` |
| `LOG_LEVEL` | `debug` \| `info` \| `warn` \| `error` | `info` |
| `AGENT_NAME` | Override agent card name | (from config) |
| `AGENT_DESCRIPTION` | Override agent card description | (from config) |

See `.env.example` for the full reference.
```shell
./agents/example/start.sh start
./agents/example/start.sh status
./agents/example/start.sh logs
./agents/example/start.sh stop
./agents/example/start.sh foreground   # useful for debugging
```

Runs on port 3000. No external tools. Good starting point for custom agents.

The `start.sh` script manages both the OpenCode subprocess and the A2A wrapper.
```shell
# Copy the example agent
cp -r agents/example agents/my-agent

# Edit the config
$EDITOR agents/my-agent/config.json

# Start it
./agents/my-agent/start.sh start
```

HTTP transport:

```json
"mcp": {
  "my-tools": {
    "type": "http",
    "url": "http://localhost:8002/mcp"
  }
}
```

stdio transport:

```json
"mcp": {
  "filesystem": {
    "type": "stdio",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
  }
}
```

OAuth transport:

```json
"mcp": {
  "my-oauth-tools": {
    "type": "oauth",
    "url": "https://api.example.com/mcp",
    "clientId": "...",
    "clientSecret": "..."
  }
}
```

```shell
# Build
docker build -t a2a-opencode:latest .

# Run (OpenCode must be accessible from within the container)
docker run -p 3000:3000 \
  a2a-opencode:latest --config agents/example/config.json

# Mount a custom agent config
docker run -p 3000:3000 \
  -v /host/path/my-agent:/app/agents/my-agent \
  a2a-opencode:latest --config agents/my-agent/config.json
```

Mount your CA certificate into the container and the entrypoint injects it automatically:

```shell
docker run -p 3000:3000 \
  -v /path/to/corporate-ca.crt:/etc/ssl/certs/corporate-ca.crt:ro \
  a2a-opencode:latest --config agents/example/config.json
```

Implements A2A v0.3.0:
| Endpoint | Description |
|---|---|
| `GET /.well-known/agent-card.json` | Agent identity and capabilities |
| `POST /a2a/jsonrpc` | JSON-RPC: `tasks/send`, `tasks/sendSubscribe`, `tasks/get`, `tasks/cancel` |
| `POST /a2a/rest` | REST equivalent |
| `GET /health` | Health check |
| `POST /context/build` | Trigger context discovery |
| `GET /context` | Read the built context file |
Streaming uses SSE for real-time status updates and artifact chunks. Set `--stream-artifacts` for spec-correct chunk streaming, or leave it unset (default) for buffered output compatible with the A2A Inspector.
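As a minimal example of what a client sends to `/a2a/jsonrpc`, the sketch below builds a `tasks/send` envelope. The exact `params` shape is defined by the A2A spec's Message structure; the field values here are illustrative:

```typescript
// Build a JSON-RPC 2.0 envelope for the agent's /a2a/jsonrpc endpoint.
// Task id and message content are illustrative placeholders.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tasks/send",
  params: {
    id: "task-123", // illustrative task id
    message: {
      role: "user",
      parts: [{ kind: "text", text: "Hello, agent" }],
    },
  },
};

// POST this body with Content-Type: application/json, e.g.:
//   fetch("http://localhost:3000/a2a/jsonrpc", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(request),
//   });
const body = JSON.stringify(request);
```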
A full Postman collection covering all endpoints is included at `docs/A2A-OpenCode-Wrapper.postman_collection.json`. Import it into Postman and set the `baseUrl` variable to your running agent's URL.
```shell
# Trigger the LLM to discover available data and write context.md
curl -X POST http://localhost:3000/context/build

# Read the built context
curl http://localhost:3000/context
```

Contributions are welcome! Please read CONTRIBUTING.md first.
MIT © Shashi Kanth