OpenCode API Model Changer

A lightweight local HTTPS proxy between your AI client and OpenCode.ai — automatically stripping the claude- prefix from model names so requests route correctly.

By CompileFuture


The Problem

OpenCode.ai's API expects model names without the claude- prefix. For example:

What your client sends    What OpenCode.ai expects
claude-sonnet-4-5         sonnet-4-5
claude-opus-4-7           opus-4-7
claude-haiku-4-5          haiku-4-5

Many AI tools (Claude Code, custom scripts, etc.) send the full claude-* model name. Without a fix, the request fails or routes to the wrong model.


The Solution

This proxy runs locally and sits between your client and opencode.ai/zen. Every request passes through it, and if the JSON body contains a model name starting with claude-, the proxy strips that prefix before forwarding the request.

Your Client  ──►  Local Proxy (localhost:8787)  ──►  opencode.ai/zen
                  (strips "claude-" prefix)

The proxy is completely transparent — headers, streaming SSE responses, binary payloads, and all other data pass through unmodified.
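
As a minimal sketch of the rewrite rule in TypeScript (illustrative only, not the project's actual source; the helper name is made up):

// The only transformation the proxy applies to request bodies.
function stripClaudePrefix(model: string): string {
  return model.startsWith("claude-") ? model.slice("claude-".length) : model;
}

stripClaudePrefix("claude-sonnet-4-5"); // -> "sonnet-4-5"
stripClaudePrefix("gpt-4o");            // -> "gpt-4o" (unchanged)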


How It Works

Request Flow

1. Client sends POST /v1/messages  { model: "claude-sonnet-4-5", ... }
                    │
2. Proxy intercepts the request
                    │
3. Parses the JSON body
                    │
4. Detects model starts with "claude-"
   → Rewrites to "sonnet-4-5"
                    │
5. Forwards to https://opencode.ai/zen/v1/messages
   with the patched body
                    │
6. Streams the response back to the client (SSE-compatible)

Key Behaviors

  • Model rewriting — Only rewrites models that start with claude-. Other model names (e.g., gpt-4o, gemini-pro) are forwarded unchanged.
  • Streaming — Natively streams SSE (Server-Sent Events) responses back to your client with no buffering.
  • Non-JSON payloads — Binary or form-data requests are piped through untouched.
  • GET/HEAD requests — No body processing; forwarded as-is.
  • Root path — Visiting / returns a plain-text response (useful for health checks).
  • Error handling — If the upstream fetch fails, the proxy returns a 502 with a JSON error body.
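
Putting these behaviors together, the core logic can be pictured as a Cloudflare Worker fetch handler roughly like the sketch below. This is an illustrative approximation of the behaviors listed above, not the repository's actual source; the upstream constant, helper name, health-check text, and header handling are assumptions.

// Illustrative sketch of the proxy as a Cloudflare Worker fetch handler.
const UPSTREAM = "https://opencode.ai/zen";

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // Root path: plain-text response, usable as a health check.
    if (url.pathname === "/") {
      return new Response("OpenCode model changer proxy is running\n", {
        headers: { "content-type": "text/plain" },
      });
    }

    const upstreamUrl = UPSTREAM + url.pathname + url.search;

    // GET/HEAD carry no body, so they are forwarded as-is.
    if (request.method === "GET" || request.method === "HEAD") {
      return forward(upstreamUrl, request, null);
    }

    // Only JSON bodies are inspected; binary or form-data payloads pass through untouched.
    const contentType = request.headers.get("content-type") ?? "";
    if (!contentType.includes("application/json")) {
      return forward(upstreamUrl, request, request.body);
    }

    const text = await request.text();
    let body = text;
    try {
      const payload = JSON.parse(text);
      if (typeof payload.model === "string" && payload.model.startsWith("claude-")) {
        // e.g. "claude-sonnet-4-5" becomes "sonnet-4-5"
        payload.model = payload.model.slice("claude-".length);
        body = JSON.stringify(payload);
      }
    } catch {
      // Not valid JSON after all; forward the original text unchanged.
    }
    return forward(upstreamUrl, request, body);
  },
};

// Returning the upstream Response directly streams SSE and binary data back
// to the client without buffering.
async function forward(
  upstreamUrl: string,
  request: Request,
  body: BodyInit | null,
): Promise<Response> {
  const headers = new Headers(request.headers);
  headers.delete("content-length"); // length may change after the rewrite
  try {
    return await fetch(upstreamUrl, { method: request.method, headers, body });
  } catch (err) {
    return new Response(
      JSON.stringify({ error: "Upstream request failed", detail: String(err) }),
      { status: 502, headers: { "content-type": "application/json" } },
    );
  }
}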

Run Locally with HTTPS

The proxy runs via Wrangler's dev server with a trusted local HTTPS certificate. HTTPS is required because many AI clients refuse to send API keys over plain HTTP.

Step 1 — Clone the repo

git clone https://github.com/CompileFutureYT/OpenCode-API-Model-Changer.git
cd OpenCode-API-Model-Changer

Step 2 — Install dependencies

npm install

Step 3 — Generate a trusted local certificate

You need mkcert to create a localhost certificate that your OS trusts.

macOS

# Install mkcert
brew install mkcert nss

# Trust the local Root CA
mkcert -install

# Generate the certificate (run inside the project folder)
mkcert localhost 127.0.0.1

Windows (PowerShell as Administrator)

# Install mkcert — pick one:
winget install FiloSottile.mkcert
# OR
choco install mkcert

# Trust the local Root CA
# (Windows will pop a security prompt — click Yes)
mkcert -install

# Generate the certificate (run inside the project folder)
mkcert localhost 127.0.0.1

This creates two files in the project folder:

File                  Purpose
localhost+1.pem       The certificate
localhost+1-key.pem   The private key

These files are already listed in .gitignore and will never be committed.

Step 4 — Start the proxy

npm run dev

The proxy starts at:

https://localhost:8787

You should see Wrangler confirm it is listening on that address. Leave this terminal open while you use the proxy.


Usage

Point your AI client's base URL at the local proxy instead of directly at OpenCode.ai.

Claude Code (~/.claude.json or settings):

API Base URL: https://localhost:8787/v1
API Key:      your-opencode-api-key

Example with an OpenAI-compatible Python client:

import openai

client = openai.OpenAI(
    base_url="https://localhost:8787/v1",
    api_key="your-opencode-api-key",
)

response = client.chat.completions.create(
    model="claude-sonnet-4-5",   # Proxy strips "claude-" before forwarding
    messages=[{"role": "user", "content": "Hello!"}]
)
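
The same request from Node/TypeScript, using the openai npm package (a sketch; it assumes Node 18+ and that Node trusts the mkcert root CA, for example by pointing NODE_EXTRA_CA_CERTS at it):

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://localhost:8787/v1",
  apiKey: "your-opencode-api-key",
});

const response = await client.chat.completions.create({
  model: "claude-sonnet-4-5",   // Proxy strips "claude-" before forwarding
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);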

Credits

Built by CompileFuture. Subscribe for more AI tooling tutorials.
