A lightweight, zero-dependency proxy server that exposes CommandCode AI models through OpenAI-compatible and Anthropic-compatible API endpoints. Use it to connect tools like Cursor, Claude Code, Continue, Cody, or any OpenAI/Anthropic SDK client to CommandCode's model gateway.

```bash
npm install -g commandcode-proxy
```
- Dual API compatibility — serves both OpenAI (`/v1/chat/completions`) and Anthropic (`/v1/messages`) formats from a single server.
- Model routing — automatically remaps Claude and GPT model names to the configured CommandCode model so clients work without changes.
- Vision pipeline — detects images in requests, routes them to a vision-capable model for description, then passes the text to the coding model. Includes an in-memory description cache to avoid re-analyzing the same image.
- Streaming support — full SSE streaming for both OpenAI and Anthropic protocols.
- Tool/function calling — translates OpenAI function-calling and Anthropic tool-use formats to CommandCode's tool schema and back.
- Retry with back-off — retries 429/5xx errors up to 5 times with exponential delay.
- Auth gating — requires a user-defined API key on all `/v1/*` endpoints; health checks are public.
- CORS enabled — allows browser-based clients out of the box.
- Zero dependencies — runs on Node.js built-in modules only; no `npm install` required.
- Docker ready — includes a `Dockerfile` and `docker-compose.yml` for one-command deployment.
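As a rough sketch of the retry behavior described above, the back-off loop might look like the following. The base delay and jitter values here are illustrative assumptions, not the proxy's exact implementation:

```python
import random
import time

def retry_with_backoff(send, max_retries=5, base_delay=1.0):
    """Retry `send` on 429/5xx responses with exponential back-off.

    Sketch only: the real proxy's delays and jitter may differ.
    `send` is a callable returning an object with a `.status` attribute.
    """
    for attempt in range(max_retries + 1):
        resp = send()
        if resp.status != 429 and resp.status < 500:
            return resp  # success or non-retryable client error
        if attempt == max_retries:
            return resp  # out of retries; surface the last response
        # Exponential delay: 1x, 2x, 4x, ... the base, plus a little jitter
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

A transient 500 followed by a 200 is returned to the caller after two retries, without surfacing the failures.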
| Requirement | Minimum Version |
|---|---|
| Node.js | 18.0.0+ |
| Docker (optional) | 20.10+ |
| CommandCode account | Free or paid — sign up at commandcode.ai |
You need a CommandCode API key. Get one by either:
- Running the `commandcode` CLI and completing the auth flow (the key is saved to `~/.commandcode/auth.json`), or
- Copying the key from the CommandCode dashboard and setting it as `CC_API_KEY`.
```bash
# Install globally
npm install -g commandcode-proxy

# Set your keys
export PROXY_API_KEY=$(openssl rand -hex 32)
export CC_API_KEY=your-commandcode-api-key

# Start the proxy
commandcode-proxy
```

Or run it directly without installing:

```bash
PROXY_API_KEY=my-secret-key CC_API_KEY=your-cc-key npx commandcode-proxy
```

To run from source instead:

```bash
git clone https://github.com/zahidhussaina2l/commandcode-proxy.git
cd commandcode-proxy
cp .env.example .env
```

Edit `.env` and set the two required values:
```bash
# A secret key clients must send to use this proxy (choose any random string)
PROXY_API_KEY=my-secret-proxy-key

# Your CommandCode API key (or leave unset to read from ~/.commandcode/auth.json)
CC_API_KEY=your-commandcode-api-key
```

Tip: generate a strong random key with `openssl rand -hex 32`.
Start the proxy:

```bash
node proxy.mjs
```

You should see:

```
CommandCode AI Proxy v1.0.0
===========================
Listening on 0.0.0.0:8787
Auth: ENABLED (API key required)
```
```bash
docker build -t commandcode-proxy .

docker run -d \
  --name commandcode-proxy \
  -p 8787:8787 \
  -e PROXY_API_KEY=my-secret-proxy-key \
  -e CC_API_KEY=your-commandcode-api-key \
  commandcode-proxy
```

Or use Docker Compose:

```bash
cp .env.example .env
# edit .env with your keys
docker compose up -d
```

To stop:

```bash
docker compose down
```

If you prefer to use the `~/.commandcode/auth.json` file from your host machine instead of setting `CC_API_KEY`:
```bash
docker run -d \
  --name commandcode-proxy \
  -p 8787:8787 \
  -e PROXY_API_KEY=my-secret-proxy-key \
  -v "$HOME/.commandcode:/home/node/.commandcode:ro" \
  commandcode-proxy
```

CommandCode requires an API key to access its model gateway. Here's how to set it up:
- Install the CommandCode CLI:

  ```bash
  npm install -g commandcode
  ```

- Run the auth flow:

  ```bash
  commandcode
  ```

- Complete the browser-based authentication. The CLI saves your key to `~/.commandcode/auth.json`.
- The proxy reads this file automatically — no need to set `CC_API_KEY`.
Alternatively, copy the key from the dashboard:

- Log in to commandcode.ai and navigate to your API settings.
- Copy your API key.
- Set it in your `.env` file:

  ```bash
  CC_API_KEY=sk-cc-your-key-here
  ```

To point Cursor at the proxy:

- Open Cursor Settings (`Cmd+,` / `Ctrl+,`).
- Go to Models > OpenAI API Key.
- Set:
  - OpenAI Base URL: `http://YOUR_HOST:8787/v1`
  - OpenAI API Key: your `PROXY_API_KEY` value
Cursor will now route all model requests through the proxy.
Set these environment variables before running Claude Code:

```bash
export ANTHROPIC_BASE_URL=http://YOUR_HOST:8787/v1
export ANTHROPIC_API_KEY=your-proxy-api-key
```

Or add them to your shell profile (`~/.bashrc`, `~/.zshrc`, etc.).
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://YOUR_HOST:8787/v1",
    api_key="your-proxy-api-key",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://YOUR_HOST:8787/v1",
  apiKey: "your-proxy-api-key",
});

const completion = await client.chat.completions.create({
  model: "deepseek/deepseek-v4-pro",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```

```python
import anthropic

client = anthropic.Anthropic(
    base_url="http://YOUR_HOST:8787/v1",
    api_key="your-proxy-api-key",
)

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(message.content[0].text)
```

```bash
# OpenAI format
curl http://localhost:8787/v1/chat/completions \
  -H "Authorization: Bearer your-proxy-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-v4-pro",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

# Anthropic format
curl http://localhost:8787/v1/messages \
  -H "x-api-key: your-proxy-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

| Method | Path | Description |
|---|---|---|
| `GET` | `/` or `/health` | Health check — returns proxy status, available models, and account info |
| `GET` | `/v1/models` | List available models (OpenAI format) |
| `POST` | `/v1/chat/completions` | Chat completions (OpenAI format) |
| `POST` | `/v1/messages` | Messages (Anthropic format) |
| `HEAD` | `/` | Health check (used by Claude Code) |
| `OPTIONS` | `*` | CORS preflight |
All `/v1/*` endpoints require authentication via `Authorization: Bearer <key>` or `x-api-key: <key>`.
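A minimal sketch of how a server can accept either header — this is a hypothetical helper for illustration, not the proxy's actual code:

```python
def extract_api_key(headers):
    """Return the client API key from either auth header, or None.

    Accepts `Authorization: Bearer <key>` (OpenAI style) or
    `x-api-key: <key>` (Anthropic style). Header names are matched
    case-insensitively, as HTTP requires.
    """
    lowered = {k.lower(): v for k, v in headers.items()}
    auth = lowered.get("authorization", "")
    if auth.startswith("Bearer "):
        return auth[len("Bearer "):].strip() or None
    return lowered.get("x-api-key") or None
```

The server would then compare the extracted key against `PROXY_API_KEY` and reject the request with a 401 when it doesn't match.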
The proxy supports these models through CommandCode's gateway:
| Model | Provider | Category |
|---|---|---|
| `deepseek/deepseek-v4-pro` | DeepSeek | Open-source |
| `deepseek/deepseek-v4-flash` | DeepSeek | Open-source |
| `moonshotai/Kimi-K2.5` | Moonshot | Open-source |
| `moonshotai/Kimi-K2.6` | Moonshot | Open-source |
| `zai-org/GLM-5` | Zhipu AI | Open-source |
| `zai-org/GLM-5.1` | Zhipu AI | Open-source |
| `MiniMaxAI/MiniMax-M2.5` | MiniMax | Open-source |
| `MiniMaxAI/MiniMax-M2.7` | MiniMax | Open-source |
| `Qwen/Qwen3.6-Max-Preview` | Alibaba | Open-source |
| `Qwen/Qwen3.6-Plus` | Alibaba | Open-source |
| `stepfun/Step-3.5-Flash` | StepFun | Open-source |
| `claude-sonnet-4-6` | Anthropic* | Premium |
| `claude-opus-4-7` | Anthropic* | Premium |
| `gpt-5.5` | OpenAI* | Premium |
\* Premium models (Claude, GPT) are automatically remapped to the configured default model (`PROXY_DEFAULT_MODEL`). The response preserves the original model name so clients don't reject it.

Short aliases are supported — e.g. `deepseek-v4-pro` resolves to `deepseek/deepseek-v4-pro`, `kimi-k2.5` resolves to `moonshotai/Kimi-K2.5`, etc.
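The routing rules can be sketched roughly as below. The model list is a small subset of the table above, and the case-insensitive match on the part after the slash is an assumption about how short aliases resolve, not the proxy's exact logic:

```python
FULL_MODELS = [
    "deepseek/deepseek-v4-pro",
    "moonshotai/Kimi-K2.5",
    "zai-org/GLM-5",
]
DEFAULT_MODEL = "deepseek/deepseek-v4-pro"
PREMIUM_PREFIXES = ("claude-", "gpt-")

def resolve_model(requested):
    """Map a requested model name to a CommandCode model ID (sketch)."""
    # Premium (Claude/GPT) names are remapped to the default model.
    if requested.startswith(PREMIUM_PREFIXES):
        return DEFAULT_MODEL
    # Exact match passes through unchanged.
    if requested in FULL_MODELS:
        return requested
    # Short alias: case-insensitive match on the part after the slash.
    for full in FULL_MODELS:
        if full.split("/")[-1].lower() == requested.lower():
            return full
    return DEFAULT_MODEL  # unknown names fall back to the default
```

Note that the remapping happens on the request side only; the response echoes back whatever model name the client originally sent.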
| Variable | Required | Default | Description |
|---|---|---|---|
| `PROXY_API_KEY` | Yes | — | Secret key clients must send to authenticate |
| `CC_API_KEY` | No | Read from `~/.commandcode/auth.json` | CommandCode API key |
| `PROXY_PORT` | No | `8787` | Port the server listens on |
| `PROXY_HOST` | No | `0.0.0.0` | Host/interface to bind to |
| `PROXY_PUBLIC_URL` | No | `http://localhost:$PORT` | Public URL shown in the startup banner |
| `PROXY_DEFAULT_MODEL` | No | `deepseek/deepseek-v4-pro` | Default model for all routed requests |
| `PROXY_VISION_MODEL` | No | `moonshotai/Kimi-K2.6` | Model used for image analysis |
| `PROXY_CODING_MODEL` | No | Same as default | Model used for coding tasks |
| `CC_API_BASE` | No | `https://api.commandcode.ai` | CommandCode API base URL |
| `CC_CLI_VERSION` | No | `0.26.3` | CLI version sent in request headers |
| `CC_WORKING_DIR` | No | Current working directory | Working directory sent in request config |
```
┌──────────────┐           ┌─────────────────────┐           ┌──────────────────┐
│    Client    │  ──────>  │  commandcode-proxy  │  ──────>  │  CommandCode API │
│ (Cursor, SDK)│  <──────  │        :8787        │  <──────  │  api.commandcode │
└──────────────┘           └─────────────────────┘           └──────────────────┘
   OpenAI or                 Translates format,                Returns model
   Anthropic                 routes models,                    responses via
   API format                handles vision                    streaming events
```
- Client sends a request in OpenAI or Anthropic format.
- Auth check — the proxy validates the `PROXY_API_KEY`.
- Model resolution — Claude/GPT model names are remapped to the configured default model.
- Vision pipeline (if images are present) — images are sent to the vision model for description, then the text descriptions replace the images before forwarding to the coding model.
- Format translation — the request is converted to CommandCode's internal format.
- Forwarding — the request is sent to CommandCode's API with retry logic.
- Response translation — the CommandCode streaming response is translated back to OpenAI or Anthropic SSE format.
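The vision-pipeline step above can be sketched as a transformation over OpenAI-style message parts. Here `describe_image` is a hypothetical stand-in for the vision-model call, and the real proxy's content handling (and its description cache) is more involved:

```python
def textualize_images(messages, describe_image):
    """Replace image parts with text descriptions (sketch only).

    `messages` follows the OpenAI chat format, where content may be a
    plain string or a list of typed parts. `describe_image` maps an
    image URL to a text description.
    """
    out = []
    for msg in messages:
        content = msg.get("content")
        if not isinstance(content, list):
            out.append(msg)  # plain string content: nothing to replace
            continue
        parts = []
        for part in content:
            if part.get("type") == "image_url":
                url = part["image_url"]["url"]
                parts.append({"type": "text",
                              "text": f"[Image description: {describe_image(url)}]"})
            else:
                parts.append(part)
        out.append({**msg, "content": parts})  # copy; input is not mutated
    return out
```

After this pass the request contains only text, so it can be forwarded to a coding model with no vision capability.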
| Platform | Status |
|---|---|
| Linux (x86_64, ARM64) | Fully supported |
| macOS (Intel, Apple Silicon) | Fully supported |
| Windows (native Node.js) | Fully supported |
| Docker (any architecture) | Fully supported |
| WSL / WSL2 | Fully supported |
The proxy uses only Node.js built-in modules and has zero npm dependencies, so it runs anywhere Node.js 18+ is available.
```bash
# All tests (unit + integration)
npm test

# Unit tests only (fast, no network)
npm run test:unit

# Integration tests only (starts proxy + mock server)
npm run test:integration

# Docker E2E tests (requires Docker; builds image, starts container, tests all endpoints)
npm run test:docker
```

Tests use the Node.js built-in test runner (`node:test`) — no extra dependencies needed.
```bash
# Start with auto-reload on file changes
npm run dev
```

Set the `PROXY_API_KEY` environment variable. This is the key that clients use to authenticate with the proxy (not your CommandCode key).

```bash
export PROXY_API_KEY=my-secret-key
```

Either:

- Run the `commandcode` CLI to complete the auth flow, or
- Set `CC_API_KEY` in your environment / `.env` file.
Your CommandCode API key is invalid or expired. Re-run the `commandcode` CLI auth flow or update `CC_API_KEY`.
You've hit a rate limit. The proxy automatically retries with exponential back-off (up to 5 times). If this persists, check your CommandCode plan limits.
- Make sure the proxy is running and reachable from your machine.
- Verify the base URL includes `/v1` — e.g. `http://localhost:8787/v1`.
- Check that the API key in your client config matches `PROXY_API_KEY`.
- For Claude Code, path normalization handles `/v1/v1/messages` automatically.
Check logs with `docker logs commandcode-proxy`. The most common cause is a missing `PROXY_API_KEY` environment variable.
- Never commit your `.env` file. It's listed in `.gitignore`.
- The `PROXY_API_KEY` is only partially printed in the startup banner (first 8 and last 4 characters).
- The proxy does not log request/response bodies — only method, path, model, and message counts.
- When exposing the proxy to the internet, use a reverse proxy (nginx, Caddy) with TLS.
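The partial key masking mentioned above can be sketched like this (illustrative only; the proxy's actual banner formatting may differ):

```python
def mask_key(key, head=8, tail=4):
    """Show only the first `head` and last `tail` characters of a key.

    Keys too short to partially reveal safely are fully masked.
    """
    if len(key) <= head + tail:
        return "*" * len(key)
    return f"{key[:head]}...{key[-tail:]}"
```

Printing only a masked form keeps the full key out of terminal scrollback and any log files that capture the startup banner.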
This project is provided for educational and research purposes only. It is an independent, community-built tool and is not affiliated with, endorsed by, or officially supported by CommandCode, Anthropic, OpenAI, or any other AI provider mentioned in this repository. Use of this software is at your own risk. The authors assume no liability for any misuse, service disruptions, or violations of third-party terms of service. You are solely responsible for ensuring your usage complies with all applicable terms of service, licenses, and laws.