OpenCode OpenAI Proxy is a small local bridge that lets OpenAI-compatible clients talk to OpenCode Go or OpenCode Zen.
It accepts OpenAI chat completion requests on POST /v1/chat/completions and forwards them to OpenCode Go or Zen.
- Serves a local OpenAI-compatible API on `127.0.0.1:11435`.
- Accepts `POST /v1/chat/completions` in OpenAI format.
- Supports streaming and non-streaming responses.
- Exposes `/health` for quick checks.
- Exposes `/v1/models` with direct OpenCode models and optional OpenAI route model names.
- Reads model inventory from `models-go.json` or `models-zen.json`.
- Reads OpenAI model routing from `routes-openai-go.json` or `routes-openai-zen.json`.
- Keeps the real OpenCode API key in `OPENCODE_API_KEY`, outside the repo.
OpenCode Go and Zen use the OpenAI chat completions format. Some coding clients expect a local OpenAI-compatible endpoint. This proxy sits between them:
```
Cursor / VS Code / Any OpenAI client -> localhost:11435 -> OpenCode OpenAI Proxy -> OpenCode Go / Zen
```
Clients can send either direct OpenCode model IDs or real OpenAI model IDs from the route file:
- Cursor can send `deepseek-v4-pro`, `qwen3.6-plus`, `gpt-5.5`, or any model exposed by `/v1/models`.
- Cursor can also send routed OpenAI names such as `gpt-5.2`, `gpt-4.1`, or `o3`.
- VS Code extensions send whatever model ID they're configured for.
- Any OpenAI-compatible client works.
Direct OpenCode model IDs are forwarded as-is. OpenAI route model IDs are mapped through the selected routes-openai-*.json file.
In short:
- Clients talk OpenAI format to localhost.
- The proxy accepts direct Go/Zen model IDs and routed OpenAI model IDs.
- No format conversion needed — both sides speak OpenAI.
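Because both sides speak OpenAI, a plain chat-completions request works end to end. The sketch below is an illustrative Node 18+ client (global `fetch`), not code from the proxy itself; it assumes the proxy is running on the default port:

```javascript
// Illustrative client sketch: send an OpenAI-format request to the local proxy.
const body = {
  model: "deepseek-v4-pro",                        // or a routed name such as "gpt-5.2"
  messages: [{ role: "user", content: "Say hi" }],
  stream: false,
};

async function chat() {
  const res = await fetch("http://127.0.0.1:11435/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer sk-dummy",          // placeholder; the proxy holds the real key
    },
    body: JSON.stringify(body),
  });
  return res.json();
}

chat()
  .then((r) => console.log(r.choices?.[0]?.message?.content))
  .catch((e) => console.error("proxy not running?", e.message));
```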
- Node.js 18 or newer.
- An OpenCode API key in `OPENCODE_API_KEY`.
- Subscription tier in `OPENCODE_TIER`: `go` (default) or `zen`.
```sh
git clone https://github.com/bigdata2211it-web/opencode-openai-proxy.git
cd opencode-openai-proxy
cp .env.example .env
# Edit .env and set OPENCODE_API_KEY.
export OPENCODE_API_KEY=<your-opencode-key>
# Optional: switch from Go to Zen subscription (default: go)
export OPENCODE_TIER=zen
node index.js
```

The proxy starts on http://127.0.0.1:11435 by default.
To use another port:

```sh
node index.js 11436
```

Check that the proxy is up:

```sh
curl http://127.0.0.1:11435/health
```

Point OpenAI-compatible clients at the local proxy:
```sh
export OPENAI_BASE_URL=http://127.0.0.1:11435
export OPENAI_API_KEY=sk-dummy
```

Use one of the model IDs returned by /v1/models.
OPENAI_API_KEY is only a local client placeholder here. It is not the real OpenCode key and it is not routed or exchanged for a secret. Many OpenAI-compatible clients require this variable to be non-empty, so sk-dummy is enough.
The real secret stays only in the proxy process:
```sh
export OPENCODE_API_KEY=<your-real-opencode-key>
```

Request flow:
```
OpenAI-compatible client
  OPENAI_BASE_URL=http://127.0.0.1:11435
  OPENAI_API_KEY=sk-dummy
        |
        v
local proxy
  OPENCODE_API_KEY=<your-real-opencode-key>
        |
        v
OpenCode Go / Zen
```
This proxy can be used as a local OpenAI-compatible provider for clients that support a custom base URL, including coding CLIs, IDE extensions, and local tools such as Codex, OpenCode, Cursor, Roo Code, Cline, Continue, aider, and LiteLLM.
The client only needs to know:
- Base URL: `http://127.0.0.1:11435`
- API key: any non-empty local placeholder, for example `sk-dummy`
- Model: either a direct OpenCode model from `models-go.json` / `models-zen.json`, or an OpenAI route name from `routes-openai-go.json` / `routes-openai-zen.json`
Examples:
```
qwen3.6-plus      # direct Go/Zen model
deepseek-v4-pro   # direct Go/Zen model
gpt-5-codex       # routed OpenAI model name
gpt-4o            # routed OpenAI model name
o3                # routed OpenAI model name
```
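To see which IDs are actually available, a client can query `/v1/models`. The helper below is an illustrative sketch, not part of `index.js`; it assumes the standard OpenAI list shape `{ data: [{ id: ... }] }` that this proxy mimics:

```javascript
// Sketch: list the model IDs the proxy advertises on /v1/models (Node 18+ global fetch).
// Assumes the standard OpenAI list shape { data: [{ id: ... }] }.
function modelIds(listing) {
  return listing.data.map((m) => m.id);
}

async function listModels() {
  const res = await fetch("http://127.0.0.1:11435/v1/models");
  console.log(modelIds(await res.json()));
}

listModels().catch((e) => console.error("proxy not running?", e.message));
```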
From the client's point of view, this proxy behaves like an OpenAI provider on localhost. The real OpenCode key stays in the proxy process as OPENCODE_API_KEY; clients do not need the real OpenCode key.
Edit models-go.json for OPENCODE_TIER=go or models-zen.json for OPENCODE_TIER=zen.
These files contain direct OpenCode model IDs:
```json
{
  "models": ["deepseek-v4-pro", "qwen3.6-plus"]
}
```

Edit `routes-openai-go.json` or `routes-openai-zen.json` to map real OpenAI model IDs to the active tier's models:
```json
{
  "gpt-5.2": "deepseek-v4-pro",
  "gpt-4.1": "qwen3.6-plus"
}
```

Route targets must exist in the matching `models-*.json`. The shipped route files cover every tier model once, without duplicate targets. Common context suffixes are accepted for both direct model IDs and routed OpenAI IDs, for example `deepseek-v4-pro[200k]` or `gpt-5.2[200k]`.
The shipped configs are based on the current OpenCode /v1/models endpoints:
- `models-go.json` covers OpenCode Go models.
- `models-zen.json` covers OpenCode Zen models.
- `routes-openai-go.json` maps real OpenAI model IDs to Go models.
- `routes-openai-zen.json` maps real OpenAI model IDs to Zen models.
- `HEAD /` and `HEAD /v1` - connection checks.
- `GET /health` - proxy status.
- `GET /v1/models` - available direct models and OpenAI route names.
- `POST /v1/chat/completions` - OpenAI-compatible chat endpoint.
Create a local .env from .env.example, or provide the variables another way:
```
OPENCODE_API_KEY=<your-opencode-key>
OPENCODE_TIER=go   # or: zen
```

Do not commit .env; it is ignored by git.
| Tier | `OPENCODE_TIER` | Endpoint | Pricing |
|---|---|---|---|
| Go | `go` (default) | https://opencode.ai/zen/go/v1/chat/completions | $5 first month, then $10/month (flat) |
| Zen | `zen` | https://opencode.ai/zen/v1/chat/completions | Pay-as-you-go, no limits |
Both tiers use the same API key. Go and Zen model lists are kept separately in their JSON files, because OpenCode can change each tier independently.
To switch tiers, change OPENCODE_TIER and restart the proxy.
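The tier switch amounts to choosing between the two upstream URLs listed above. A minimal sketch of that selection, using an assumed helper name (`index.js` may structure this differently):

```javascript
// Hypothetical sketch: pick the upstream chat-completions endpoint from OPENCODE_TIER.
function upstreamUrl(tier = process.env.OPENCODE_TIER || "go") {
  return tier === "zen"
    ? "https://opencode.ai/zen/v1/chat/completions"    // Zen: pay-as-you-go
    : "https://opencode.ai/zen/go/v1/chat/completions"; // Go (default): flat subscription
}

console.log(upstreamUrl());
```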
For free AI tools, news, and project updates, subscribe to the Telegram channel:
For direct questions or feedback, message:
This project is intentionally small: one Node.js entrypoint, tier-specific model config files, and no external runtime dependencies.
Supports both OpenCode Go (flat subscription) and OpenCode Zen (pay-as-you-go) via OPENCODE_TIER.
No public license has been selected yet. The repository is public, but reuse rights are not granted automatically.