A Node/TypeScript MCP server that exposes a Streamable HTTP `/mcp` endpoint and translates MCP tool calls into OpenCode HTTP API calls.

This gateway runs `opencode serve` and the MCP HTTP server in a single container. MCP clients connect to `/mcp` on port 8080; the gateway forwards requests to the local OpenCode process on port 4096.
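The southbound hop is plain HTTP. The sketch below is illustrative (the helper names are hypothetical, not the gateway's actual internals), showing how a request could be forwarded to the local OpenCode process, including the optional Basic auth described in the configuration table:

```typescript
// Hypothetical sketch of the gateway's southbound hop to OpenCode.
// Defaults mirror the documented environment variables.
const OPENCODE_BASE_URL = process.env.OPENCODE_BASE_URL ?? "http://127.0.0.1:4096";

// Basic auth is attached only when a password is configured,
// mirroring OPENCODE_USERNAME / OPENCODE_PASSWORD.
function openCodeHeaders(): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  const password = process.env.OPENCODE_PASSWORD;
  if (password) {
    const user = process.env.OPENCODE_USERNAME ?? "opencode";
    headers["Authorization"] =
      "Basic " + Buffer.from(`${user}:${password}`).toString("base64");
  }
  return headers;
}

// Forward a JSON body to an OpenCode API path and parse the response.
async function callOpenCode(path: string, body: unknown): Promise<unknown> {
  const res = await fetch(new URL(path, OPENCODE_BASE_URL), {
    method: "POST",
    headers: openCodeHeaders(),
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`OpenCode returned ${res.status}`);
  return res.json();
}
```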
Northbound:
- accepts MCP clients over Streamable HTTP at `/mcp`
Southbound:
- calls OpenCode over HTTP for session creation, message submission, status, diffs, and aborts
The gateway exposes the following MCP tools:

- `run_coding_task`
- `run_coding_task_async`
- `get_task_status`
- `get_task_messages`
- `get_task_diff`
- `abort_task`
- `list_agents`
- `health_check`
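As a rough sketch of what calling one of these tools looks like on the wire, the snippet below builds a standard MCP JSON-RPC `tools/call` request and posts it to the gateway. The `prompt` argument name is an assumption for illustration; check the tool's actual input schema via `tools/list`:

```typescript
// Shape of an MCP JSON-RPC tools/call request (per the MCP spec).
function buildToolCall(name: string, args: Record<string, unknown>, id = 1) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// Post the call to the gateway's Streamable HTTP endpoint.
// The "prompt" argument name here is illustrative, not confirmed.
async function runCodingTask(baseUrl: string, prompt: string) {
  return fetch(`${baseUrl}/mcp`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Streamable HTTP servers may answer with JSON or an SSE stream.
      Accept: "application/json, text/event-stream",
    },
    body: JSON.stringify(buildToolCall("run_coding_task", { prompt })),
  });
}
```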
| Variable | Default | Description |
|---|---|---|
| `PORT` | `8080` | Port the MCP gateway listens on |
| `HOST` | `0.0.0.0` | Bind address |
| `MCP_PATH` | `/mcp` | MCP endpoint path |
| `ALLOWED_HOSTS` | (none) | Comma-separated allowed Host headers |
| `MCP_BEARER_TOKEN` | (none) | If set, all `/mcp` requests require `Authorization: Bearer <token>` |
| `OPENCODE_BASE_URL` | `http://127.0.0.1:4096` | OpenCode API base URL |
| `OPENCODE_USERNAME` | `opencode` | Basic auth username (used only when password is set) |
| `OPENCODE_PASSWORD` | (none) | Basic auth password for `opencode serve` |
| `DEFAULT_AGENT` | `build` | Default OpenCode agent |
| `DEFAULT_MODEL` | (none) | Model override |
| `SESSION_TITLE_PREFIX` | `OCP Gateway` | Prefix for session titles |
| `OPENCODE_API_URL` | (required) | Base URL of the LLM provider API |
| `OPENCODE_API_KEY` | (required) | API key for the LLM provider |
| `OPENCODE_MODEL_NAME` | (required) | Model name to use |
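Since three of these variables are required, a fail-fast startup check is a natural pattern. This is a hypothetical sketch, not the gateway's actual validation code:

```typescript
// Hypothetical startup validation for the three required variables;
// the real gateway may report missing configuration differently.
const REQUIRED = ["OPENCODE_API_URL", "OPENCODE_API_KEY", "OPENCODE_MODEL_NAME"];

function missingEnv(env: Record<string, string | undefined> = process.env): string[] {
  return REQUIRED.filter((name) => !env[name]);
}

// Example: fail fast at startup if anything required is unset.
// const missing = missingEnv();
// if (missing.length > 0) {
//   throw new Error(`Missing required env vars: ${missing.join(", ")}`);
// }
```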
The image bakes the `opencode` binary in at build time, so no init containers are needed.
```bash
docker build -t <registry>/opencode-mcp-gateway:latest .
docker push <registry>/opencode-mcp-gateway:latest
```

For a local OpenShift cluster (CRC / MicroShift) you can push directly to the internal registry:
```bash
# Log in to the internal registry
oc registry login

docker build -t $(oc registry info)/opencode/opencode-mcp-gateway:latest .
docker push $(oc registry info)/opencode/opencode-mcp-gateway:latest
```

```bash
helm install opencode-mcp-gateway ./chart \
  --namespace opencode --create-namespace \
  --set image.repository=<registry>/opencode-mcp-gateway \
  --set image.tag=latest \
  --set opencode.apiUrl=https://your-llm.example.com/v1 \
  --set opencode.apiKey=<api-key> \
  --set opencode.modelName=<model-name>
```

To secure the `/mcp` endpoint with a bearer token, add:
```bash
--set gateway.mcpBearerToken=<token>
```

```bash
# Wait for the pod to be Ready
oc rollout status deployment/opencode-mcp-gateway -n opencode

# Get the Route hostname (auto-generated from your cluster's apps domain)
oc get route opencode-mcp-gateway -n opencode -o jsonpath='{.spec.host}'

# Health check
curl https://<route-host>/healthz
# {"gateway":"ok","opencode":{"healthy":true,...}}

# Readiness check
curl https://<route-host>/readyz
# ready
```

Point your MCP client at the Route over HTTPS:

```
https://<route-host>/mcp
```
If `MCP_BEARER_TOKEN` was set, include it in every request:

```
Authorization: Bearer <token>
```
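For a client-side sketch, the helper below builds an MCP `initialize` request carrying the bearer token; the helper name and client metadata are illustrative, and the protocol version shown is an assumed example:

```typescript
// Hypothetical helper: every request to a token-protected gateway
// needs the Authorization header alongside the MCP Accept pair.
function authorizedInit(token: string) {
  return {
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json, text/event-stream",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 0,
      method: "initialize",
      params: {
        // Example protocol version; negotiate the one your client supports.
        protocolVersion: "2025-03-26",
        capabilities: {},
        clientInfo: { name: "example-client", version: "0.0.1" },
      },
    }),
  };
}
```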
| Value | Default | Description |
|---|---|---|
| `image.repository` | `""` | Image registry and name (required) |
| `image.tag` | `latest` | Image tag |
| `image.pullPolicy` | `IfNotPresent` | Pull policy |
| `replicaCount` | `1` | Number of replicas; keep at 1 (sessions are in-memory) |
| `opencode.apiUrl` | `""` | LLM provider base URL (required) |
| `opencode.apiKey` | `""` | LLM provider API key (required) |
| `opencode.modelName` | `""` | Model name (required) |
| `gateway.mcpBearerToken` | `""` | Bearer token to protect `/mcp` |
| `gateway.defaultAgent` | `build` | Default OpenCode agent |
| `gateway.defaultModel` | `""` | Model override per request |
| `gateway.sessionTitlePrefix` | `OCP Gateway` | Session title prefix |
| `gateway.allowedHosts` | `""` | Comma-separated allowed Host headers |
| `route.enabled` | `true` | Create an OpenShift Route |
| `route.host` | `""` | Custom hostname (blank = auto-generated) |
| `route.tls.termination` | `edge` | TLS termination mode |
| `resources.requests.cpu` | `250m` | CPU request |
| `resources.requests.memory` | `512Mi` | Memory request |
| `resources.limits.cpu` | `1` | CPU limit |
| `resources.limits.memory` | `1Gi` | Memory limit |
```bash
npm install
npm run dev
```

To build and run the compiled output:

```bash
npm install
npm run build
npm start
```

This implementation keeps MCP sessions in memory, so a single replica is the simplest deployment shape. If you want to scale it horizontally, switch to stateless mode or add a shared session/event strategy.
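As a sketch of the seam such a strategy would need, the interface below shows where a shared backend (e.g. Redis) could replace the process-local map; the names and shape are illustrative, not the gateway's actual code:

```typescript
// Hypothetical session-store seam for horizontal scaling: swap the
// in-memory implementation for one backed by shared storage.
interface SessionStore {
  get(id: string): Promise<string | undefined>;
  set(id: string, state: string): Promise<void>;
  delete(id: string): Promise<void>;
}

// The single-replica default: a process-local map, lost on restart
// and invisible to other replicas.
class InMemorySessionStore implements SessionStore {
  private sessions = new Map<string, string>();
  async get(id: string) { return this.sessions.get(id); }
  async set(id: string, state: string) { this.sessions.set(id, state); }
  async delete(id: string) { this.sessions.delete(id); }
}
```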