A lightweight local HTTPS proxy that sits between your AI client and OpenCode.ai, automatically stripping the `claude-` prefix from model names so requests route correctly.
OpenCode.ai's API expects model names without the `claude-` prefix. For example:
| What your client sends | What OpenCode.ai expects |
|---|---|
| `claude-sonnet-4-5` | `sonnet-4-5` |
| `claude-opus-4-7` | `opus-4-7` |
| `claude-haiku-4-5` | `haiku-4-5` |
Many AI tools (Claude Code, custom scripts, etc.) send the full `claude-*` model name. Without a fix, the request fails or routes to the wrong model.

This proxy runs locally and sits between your client and `opencode.ai/zen`. Every request passes through it, and if the JSON body contains a model name starting with `claude-`, the proxy strips that prefix before forwarding the request.
```
Your Client ──► Local Proxy (localhost:8787) ──► opencode.ai/zen
                (strips "claude-" prefix)
```
The proxy is completely transparent — headers, streaming SSE responses, binary payloads, and all other data pass through unmodified.
```
1. Client sends POST /v1/messages { model: "claude-sonnet-4-5", ... }
   │
2. Proxy intercepts the request
   │
3. Parses the JSON body
   │
4. Detects the model starts with "claude-"
   → rewrites it to "sonnet-4-5"
   │
5. Forwards to https://opencode.ai/zen/v1/messages with the patched body
   │
6. Streams the response back to the client (SSE-compatible)
```
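The rewrite in step 4 amounts to only a few lines of logic. Here is a minimal Python sketch of the idea (the actual proxy runs as a Wrangler worker, so function and variable names here are illustrative, not the project's real code):

```python
import json

PREFIX = "claude-"

def rewrite_body(raw: bytes) -> bytes:
    """Strip the 'claude-' prefix from the model field of a JSON request body.

    Non-JSON bodies and bodies without a matching model are returned unchanged,
    mirroring the proxy's pass-through behavior.
    """
    try:
        body = json.loads(raw)
    except ValueError:
        return raw  # binary / form-data: pipe through untouched
    if not isinstance(body, dict):
        return raw
    model = body.get("model")
    if isinstance(model, str) and model.startswith(PREFIX):
        body["model"] = model[len(PREFIX):]
        return json.dumps(body).encode()
    return raw

patched = rewrite_body(b'{"model": "claude-sonnet-4-5", "max_tokens": 64}')
print(json.loads(patched)["model"])  # → sonnet-4-5
```

Note that only the `model` field is touched; the rest of the body, and anything that is not valid JSON, is forwarded byte-for-byte.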
- **Model rewriting** — only rewrites models that start with `claude-`. Other model names (e.g., `gpt-4o`, `gemini-pro`) are forwarded unchanged.
- **Streaming** — natively streams SSE (Server-Sent Events) responses back to your client with no buffering.
- **Non-JSON payloads** — binary or form-data requests are piped through untouched.
- **GET/HEAD requests** — no body processing; forwarded as-is.
- **Root path** — visiting `/` returns a plain-text response (useful for health checks).
- **Error handling** — if the upstream fetch fails, the proxy returns a `502` with a JSON error body.
The proxy runs via Wrangler's dev server with a trusted local HTTPS certificate. HTTPS is required because many AI clients refuse to send API keys over plain HTTP.
```shell
git clone https://github.com/CompileFutureYT/OpenCode-API-Model-Changer.git
cd OpenCode-API-Model-Changer
npm install
```

You need mkcert to create a localhost certificate that your OS trusts.
**macOS**

```shell
# Install mkcert
brew install mkcert nss

# Trust the local Root CA
mkcert -install

# Generate the certificate (run inside the project folder)
mkcert localhost 127.0.0.1
```

**Windows**

```shell
# Install mkcert — pick one:
winget install mkcert
# OR
choco install mkcert

# Trust the local Root CA
# (Windows will pop a security prompt — click Yes)
mkcert -install

# Generate the certificate (run inside the project folder)
mkcert localhost 127.0.0.1
```

This creates two files in the project folder:
| File | Purpose |
|---|---|
| `localhost+1.pem` | The certificate |
| `localhost+1-key.pem` | The private key |
These files are already listed in `.gitignore` and will never be committed.
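For reference, the dev server's port and HTTPS mode are controlled through Wrangler's `[dev]` settings. A sketch of the relevant `wrangler.toml` fragment (the repo's shipped config may already cover this, and the environment-variable names for custom certificates vary by Wrangler version, so treat them as an assumption to verify):

```toml
# wrangler.toml (sketch — the repo's actual config may differ)
[dev]
port = 8787
local_protocol = "https"   # serve https:// instead of http:// locally

# To point Wrangler at the mkcert files, newer versions read environment
# variables when you run `npm run dev` (assumption — check your version):
#   WRANGLER_HTTPS_CERT_PATH=localhost+1.pem
#   WRANGLER_HTTPS_KEY_PATH=localhost+1-key.pem
```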
```shell
npm run dev
```

The proxy starts at:

```
https://localhost:8787
```
You should see Wrangler confirm it is listening on that address. Leave this terminal open while you use the proxy.
Point your AI client's base URL at the local proxy instead of directly at OpenCode.ai.
**Claude Code** (`~/.claude.json` or settings):

```
API Base URL: https://localhost:8787/v1
API Key:      your-opencode-api-key
```
Example with an OpenAI-compatible Python client:

```python
import openai

client = openai.OpenAI(
    base_url="https://localhost:8787/v1",
    api_key="your-opencode-api-key",
)

response = client.chat.completions.create(
    model="claude-sonnet-4-5",  # Proxy strips "claude-" before forwarding
    messages=[{"role": "user", "content": "Hello!"}],
)
```

Built by CompileFuture. Subscribe for more AI tooling tutorials.