A Cloudflare Worker that provides OpenAI- and Claude-compatible API endpoints, powered by Cloudflare AI Gateway.
## Features

- 🔄 OpenAI API Compatibility: `/chat/completions`, `/models`
- 🤖 Claude API Compatibility: `/v1/messages`
- 🔐 Dual Authentication: Bearer token and URL-based auth (JetBrains compatible)
- 📊 Request Logging: Built-in request/response logging
## Prerequisites

- Node.js
- pnpm (or npm)
- Cloudflare account
## Local Development

1. **Clone the repository**

   ```bash
   git clone <your-repo>
   cd cloudflare-ai-proxy
   ```

2. **Install dependencies**

   ```bash
   pnpm install
   ```

3. **Create a `.dev.vars` file**

   The configuration uses JSON environment variables:

   ```bash
   cat > .dev.vars << 'EOF'
   CF_GATEWAY_KEY=your-cloudflare-gateway-key
   PROXY_API_KEY=your-secret-token
   MODELS_CONFIG=[{"id":"google-ai-studio/gemini-flash-latest","name":"gemini-flash-latest","endpoint":"/v1/YOUR_ID/proxy/compat"}]
   EOF
   ```

   **Configuration details:** `MODELS_CONFIG` is a JSON array of model configurations:

   ```json
   [
     {
       "id": "google-ai-studio/gemini-flash-latest",
       "name": "gemini-flash-latest",
       "endpoint": "/v1/YOUR_ID/proxy/compat"
     }
   ]
   ```

   Replace `YOUR_ID` with your actual Cloudflare AI Gateway ID.

4. **Start the development server**

   ```bash
   pnpm run dev
   ```

   The server will start at `http://localhost:8787`.
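Inside the Worker, each `MODELS_CONFIG` entry can be reshaped into an OpenAI-style `/models` list response. The sketch below is illustrative only: the `ModelConfig` type and `toModelsResponse` helper are hypothetical names, not the actual identifiers used in `src/`.

```typescript
// Shape of one entry in MODELS_CONFIG (illustrative type name).
interface ModelConfig {
  id: string;       // provider/model identifier as configured in AI Gateway
  name: string;     // public name clients use in API requests
  endpoint: string; // AI Gateway path, e.g. "/v1/YOUR_ID/proxy/compat"
}

// Parse the MODELS_CONFIG env var and build an OpenAI-style model list.
function toModelsResponse(modelsConfigJson: string) {
  const models: ModelConfig[] = JSON.parse(modelsConfigJson);
  return {
    object: "list",
    data: models.map((m) => ({
      id: m.name,                      // what clients pass as "model"
      object: "model",
      owned_by: m.id.split("/")[0],    // provider prefix, e.g. "google-ai-studio"
    })),
  };
}

// Example usage with the .dev.vars value from above:
const raw =
  '[{"id":"google-ai-studio/gemini-flash-latest","name":"gemini-flash-latest","endpoint":"/v1/YOUR_ID/proxy/compat"}]';
const response = toModelsResponse(raw);
```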
## Cloudflare AI Gateway Setup

Before deploying, you need to configure model connections in Cloudflare AI Gateway:

1. **Create a Gateway**
   - Go to the Cloudflare Dashboard
   - Navigate to `AI` → `AI Gateway`
   - Create a new Gateway or select an existing one
   - Note your Gateway ID (you'll need this for `MODELS_CONFIG`)

2. **Configure Model Providers**
   - Add your model providers (Google AI Studio, OpenRouter, etc.)
   - Set up API keys for each provider
   - Test the connections to ensure they work
## Deployment

All environment variables should be configured in the Cloudflare Dashboard (recommended).

1. **Deploy the project to a Worker**

2. **Open Worker Settings**
   - Navigate to `Workers & Pages` → select your Worker → `Settings` → `Variables`
   - Select the "Production" environment

3. **Configure Secrets (Encrypted)**

   Add the following as Encrypted variables:

   - `PROXY_API_KEY`: Your custom authentication token for API access
   - `CF_GATEWAY_KEY`: Your Cloudflare AI Gateway API key

4. **Configure Model Mapping (Plaintext)**

   Add the following as a Plaintext variable:

   - `MODELS_CONFIG`: JSON array mapping your models to AI Gateway endpoints

   Format:

   ```json
   [
     {
       "id": "google-ai-studio/gemini-2.5-flash",
       "name": "gemini-2.5-flash",
       "endpoint": "/v1/YOUR_GATEWAY_ID/proxy/compat"
     },
     {
       "id": "x-ai/grok-beta",
       "name": "grok-beta",
       "endpoint": "/v1/YOUR_GATEWAY_ID/proxy/openrouter"
     }
   ]
   ```

   **Important:**
   - Replace `YOUR_GATEWAY_ID` with your actual Cloudflare AI Gateway ID
   - The `id` should match the provider/model identifier you configured in AI Gateway
   - The `name` is what users will use in API requests
   - The `endpoint` path should match your AI Gateway Universal Endpoint configuration
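To make the mapping concrete, here is a hedged sketch of how a proxy like this might resolve an incoming model name to an upstream AI Gateway URL. The function and type names are hypothetical, and the exact gateway base URL, path suffix, and auth header depend on your gateway configuration (authenticated gateways typically accept a `cf-aig-authorization` header, but verify against your setup).

```typescript
interface ModelConfig {
  id: string;       // provider-qualified id configured in AI Gateway
  name: string;     // client-facing model name
  endpoint: string; // gateway path from MODELS_CONFIG
}

// Assumed AI Gateway host; adjust if your gateway is addressed differently.
const GATEWAY_BASE = "https://gateway.ai.cloudflare.com";

// Resolve a client-facing model name to the upstream URL, headers, and
// provider-qualified model id. Returns null for unknown models.
function resolveUpstream(models: ModelConfig[], modelName: string, gatewayKey: string) {
  const model = models.find((m) => m.name === modelName);
  if (!model) return null;
  return {
    url: `${GATEWAY_BASE}${model.endpoint}/chat/completions`,
    headers: {
      "content-type": "application/json",
      "cf-aig-authorization": `Bearer ${gatewayKey}`,
    },
    upstreamModel: model.id, // send this as "model" in the upstream body
  };
}
```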
## Usage

### Chat completions (OpenAI-compatible)

Use with any OpenAI-compatible client:

```bash
curl -H "Authorization: Bearer <PROXY_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.5-flash",
    "messages": [{"role": "user", "content": "Hello!"}]
  }' \
  https://your-worker.workers.dev/chat/completions
```

### List models

```bash
curl -H "Authorization: Bearer <PROXY_API_KEY>" \
  https://your-worker.workers.dev/models
```

### Messages (Claude-compatible)

```bash
curl -H "Authorization: Bearer <PROXY_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.5-flash",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 1024
  }' \
  https://your-worker.workers.dev/v1/messages
```

## JetBrains Integration

JetBrains IDEs (IntelliJ IDEA, PyCharm, WebStorm, etc.) support local AI integration. Since JetBrains doesn't support custom headers, use the URL-based authentication endpoint.
1. **Open Settings**
   - Go to `Settings` → `Tools` → `AI Assistant`

2. **Add a Custom Model Provider**
   - Click `+` to add a new provider
   - Select "OpenAI Compatible"

3. **Configure the Base URL**

   ```
   https://your-worker.workers.dev/jb/<PROXY_API_KEY>
   ```

   Replace:
   - `your-worker.workers.dev` with your actual Worker URL
   - `<PROXY_API_KEY>` with your `PROXY_API_KEY` value

4. **Select a Model**
   - The models from your `MODELS_CONFIG` will be available
   - Select the model you want to use (e.g., `gemini-2.5-flash`)

5. **Test the Connection**
   - Use the "Test Connection" button to verify
   - You should see a success message

Example Base URL: `https://my-ai-proxy.workers.dev/jb/my-secret-token`

- No API key field is needed (authentication is in the URL)
- All standard JetBrains AI features work (code completion, chat, etc.)
- Streaming responses are supported
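The URL-based auth scheme above can be sketched as a small path-matching helper: extract the token segment after `/jb/`, compare it to `PROXY_API_KEY`, and hand the remaining path to the normal router. The helper name and exact rewrite behavior are illustrative assumptions, not the actual logic in `src/index.ts`.

```typescript
// Sketch of URL-based authentication for JetBrains-style clients.
// A path like /jb/<token>/models is validated against the proxy key
// and rewritten to /models; invalid or missing tokens yield null.
function stripJbAuth(pathname: string, proxyApiKey: string): string | null {
  const match = pathname.match(/^\/jb\/([^/]+)(\/.*)?$/);
  if (!match || match[1] !== proxyApiKey) return null; // reject bad or absent token
  return match[2] ?? "/"; // remaining path to route as usual
}
```

A Worker's fetch handler could call this first and fall back to checking the `Authorization: Bearer` header when the path doesn't start with `/jb/`.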
## Project Structure

```
src/
├── index.ts       # Main entry point with auth and logging
├── constant.ts    # Shared constants and model definitions
├── utils.ts       # Utility functions
└── router/
    ├── OpenAIRouter.ts   # OpenAI-compatible endpoints
    └── ClaudeRouter.ts   # Claude-compatible endpoints
```
## Environment Variables

| Variable | Description | Required | Storage Type |
|---|---|---|---|
| `CF_GATEWAY_KEY` | Cloudflare AI Gateway API key | Yes | Secret (via `wrangler secret` or dashboard) |
| `PROXY_API_KEY` | Custom token for API authentication | Yes | Secret (via `wrangler secret` or dashboard) |
| `MODELS_CONFIG` | Supported models configuration | Yes | Plaintext (in `wrangler.jsonc` or `.dev.vars`) |
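Since `MODELS_CONFIG` is the only structured variable, a proxy like this benefits from validating it once at startup so a malformed value fails fast with a clear message rather than a runtime 500. A minimal sketch, with an illustrative helper name:

```typescript
// Validate the MODELS_CONFIG env var (illustrative helper, not the actual
// code in src/). Throws a descriptive error on any malformed value.
function parseModelsConfig(raw: string): { id: string; name: string; endpoint: string }[] {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("MODELS_CONFIG is not valid JSON");
  }
  if (!Array.isArray(parsed)) {
    throw new Error("MODELS_CONFIG must be a JSON array");
  }
  for (const entry of parsed as Record<string, unknown>[]) {
    for (const key of ["id", "name", "endpoint"]) {
      if (typeof entry[key] !== "string") {
        throw new Error(`MODELS_CONFIG entry is missing string field "${key}"`);
      }
    }
  }
  return parsed as { id: string; name: string; endpoint: string }[];
}
```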
## License

MIT
