The intelligent CLI-to-API proxy that routes your AI models where they belong.
Quick Start • Why LogicaProxy • AI Router • Dashboard • Vault • API • SDK • Deployment
You pay for CLI subscriptions — Claude Code, Gemini CLI, OpenAI Codex. You want to use those models in your own tools, SDKs, and applications. But CLI subscriptions don't give you API access.
LogicaProxy bridges that gap. It authenticates via OAuth to your CLI subscriptions and exposes standard API endpoints that any compatible client can consume. No separate API keys needed.
But unlike other proxy tools that just forward requests, LogicaProxy thinks about where your request should go.
Most CLI proxies are dumb pipes. They take a request and forward it. LogicaProxy is different.
Every request that hits LogicaProxy gets analyzed. The AI Router reads your prompt, classifies the task, and routes it to the optimal model — automatically.
| Task Type | What it detects | Routes to |
|---|---|---|
| Code | implement, debug, refactor, language keywords | Best coding model |
| Reasoning | analyze, compare, strategy, architecture | Strongest reasoning model |
| Fast | translate, summarize, list, simple | Fastest model |
| Creative | write, story, blog, marketing | Creative model |
| Analysis | data, metrics, forecast, report | Analytical model |
Set your model to auto and let the router decide. Or pin a specific model — your call.
| Feature | LogicaProxy | Other proxies |
|---|---|---|
| Intelligent routing by task type | ✅ | ❌ |
| Built-in web dashboard | ✅ | ❌ |
| Encrypted credential storage | ✅ | ❌ |
| Prometheus metrics | ✅ | ❌ |
| Health & readiness probes | ✅ | ❌ |
| Multi-account load balancing | ✅ | ✅ |
| Multi-provider OAuth | ✅ | ✅ |
| Streaming & WebSocket | ✅ | ✅ |
| Protocol translation | ✅ | ✅ |
| Go SDK for embedding | ✅ | ✅ |
Connect your CLI subscriptions and use them as standard API endpoints:
| Provider | Auth Method | Models |
|---|---|---|
| Claude Code | Anthropic OAuth | claude-opus, claude-sonnet, claude-haiku |
| Gemini CLI | Google OAuth | gemini-pro, gemini-flash |
| OpenAI Codex | OpenAI OAuth | gpt-4o, gpt-4o-mini |
| Qwen Code | Alibaba OAuth | qwen-plus, qwen-turbo |
| iFlow | iFlow OAuth | iflow models |
| Antigravity | AG OAuth | ag models |
All providers support multi-account round-robin load balancing — add multiple accounts per provider to distribute load and avoid rate limits.
- Server-Sent Events (SSE) streaming
- WebSocket streaming for real-time responses
- Function calling / tool use
- Multimodal input (text + images)
- Thinking/reasoning mode support
One proxy speaks every format. Send requests in any format, receive responses in any format:
```
OpenAI format →  ┌──────────────┐  → Claude API
Claude format →  │  LogicaProxy │  → Gemini API
Gemini format →  └──────────────┘  → OpenAI API
```
| Endpoint | Format | Use case |
|---|---|---|
| `/v1/chat/completions` | OpenAI | Most SDKs and tools |
| `/v1/messages` | Claude | Anthropic SDK |
| `/v1/responses` | OpenAI Responses | Streaming + WebSocket |
| `/v1beta/models/:model` | Gemini | Google AI SDK |
The AI Router is a middleware that classifies your prompt and selects the optimal model. It reads keywords and patterns in your request to determine the task type.
```yaml
# config.yaml
ai-router:
  enabled: true
  default-model: claude-sonnet-4-6
  routes:
    code: claude-sonnet-4-6
    reasoning: claude-opus-4-6
    fast: claude-haiku-4-5-20251001
    creative: claude-sonnet-4-6
    analysis: claude-opus-4-6
```

Send a request with `"model": "auto"` and the router picks the best model:
```bash
curl http://localhost:8317/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-api-key: your-key" \
  -d '{
    "model": "auto",
    "messages": [{"role": "user", "content": "refactor this function to use async/await"}]
  }'
# → Routes to code model (detected: refactor, function, async/await)
```

Every response includes routing metadata:

```
X-AI-Category: code
X-AI-Route: claude-sonnet-4-6
```
LogicaProxy ships with a built-in web dashboard. No external dependencies, no separate installation, no configuration.
http://localhost:8317/dashboard/
- Status & Uptime — live health with sparkline charts
- Request metrics — total requests, error rate, avg latency, req/s
- Request volume chart — bar chart with hover tooltips showing timestamp and count
- Available models — auto-discovered from connected providers
- Per-endpoint breakdown — requests, errors, latency per route
- Recent requests log — last 10 requests in real-time
- Connection test — ping button with latency history
- Supported providers — status of all 6 providers
- API reference — complete endpoint documentation
- Dark / Light theme — toggle with localStorage persistence
The dashboard is embedded in the binary via Go's embed package — zero runtime dependencies.
OAuth tokens are sensitive. LogicaProxy includes a built-in encrypted vault for credential storage.
| Feature | Detail |
|---|---|
| Encryption | AES-256-GCM |
| Key derivation | SHA-256 from passphrase |
| Expiration | Automatic token expiry and purge |
| Concurrency | Thread-safe with RWMutex |
| Verification | Key fingerprint display |
| Operations | Store, Retrieve, Delete, List, PurgeExpired |
No more plaintext JSON files with your OAuth tokens sitting on disk.
Build from source:

```bash
git clone https://github.com/Rovemark/LogicaProxy.git
cd LogicaProxy
go build -o logicaproxy ./cmd/server
./logicaproxy -config config.example.yaml
```

Or run with Docker:

```bash
docker build -t logicaproxy .
docker run -p 8317:8317 \
  -v ./config.yaml:/LogicaProxy/config.yaml \
  -v ./auths:/root/.logicaproxy \
  logicaproxy
```

Or with Docker Compose:

```bash
docker-compose up -d
```

- Copy and edit the config:
```bash
cp config.example.yaml config.yaml
```

- Add your API key:

```yaml
host: "127.0.0.1"
port: 8317
api-keys:
  - "your-secret-key"
auth-dir: "~/.logicaproxy"
```

- Login to a provider:

```bash
# Claude Code
./logicaproxy -claude-login
# Gemini CLI
./logicaproxy -gemini-login
# OpenAI Codex
./logicaproxy -openai-login
# Qwen Code
./logicaproxy -qwen-login
```

- Start the proxy:

```bash
./logicaproxy -config config.yaml
```

- Open the dashboard:

http://localhost:8317/dashboard/
| Endpoint | Method | Description |
|---|---|---|
| `/v1/chat/completions` | POST | OpenAI-compatible chat completions |
| `/v1/messages` | POST | Claude Messages API |
| `/v1/messages/count_tokens` | POST | Claude token counting |
| `/v1/completions` | POST | Legacy completions |
| `/v1/responses` | POST, WS | OpenAI Responses API |
| `/v1/responses/compact` | POST | Compact responses |
| `/v1/models` | GET | List available models |
| `/v1beta/models/:model` | POST | Gemini generate content |
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check — returns status, version, uptime |
| `/ready` | GET | Readiness probe for Kubernetes |
| `/metrics` | GET | Prometheus-compatible metrics |
| `/dashboard/` | GET | Web dashboard UI |
| `/management.html` | GET | Management control panel |
| Endpoint | Description |
|---|---|
| `/api/provider/{provider}/v1/messages` | Provider-specific messages |
| `/api/provider/{provider}/v1/chat/completions` | Provider-specific completions |
| `/api/provider/{provider}/v1beta/models/...` | Provider-specific generate |
Embed LogicaProxy in your own Go application:
```go
package main

import "github.com/Rovemark/LogicaProxy/v6/sdk/cliproxy"

func main() {
	service := cliproxy.NewBuilder().
		WithConfig("config.yaml").
		Build()
	if err := service.Start(); err != nil {
		panic(err)
	}
}
```

| Doc | Description |
|---|---|
| Usage Guide | Getting started with the SDK |
| Advanced | Custom executors and translators |
| Access Control | Authentication and authorization |
| Watcher | Hot-reload configuration |
| Examples | Working code examples |
Scrape the /metrics endpoint:
```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'logicaproxy'
    static_configs:
      - targets: ['localhost:8317']
```

Available metrics:

```
logicaproxy_uptime_seconds 3600.00
logicaproxy_requests_total 1542
logicaproxy_errors_total 3
logicaproxy_endpoint_requests_total{method="POST",path="/v1/messages"} 1200
logicaproxy_endpoint_errors_total{method="POST",path="/v1/messages"} 0
logicaproxy_endpoint_avg_latency_ms{method="POST",path="/v1/messages"} 2450
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logicaproxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logicaproxy
  template:
    metadata:
      labels:
        app: logicaproxy
    spec:
      containers:
        - name: logicaproxy
          image: rovemark/logicaproxy:latest
          ports:
            - containerPort: 8317
          livenessProbe:
            httpGet:
              path: /health
              port: 8317
            initialDelaySeconds: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8317
            initialDelaySeconds: 3
          volumeMounts:
            - name: config
              mountPath: /LogicaProxy/config.yaml
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: logicaproxy-config
```

```js
// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'logicaproxy',
    script: '/path/to/logicaproxy',
    args: '-config /path/to/config.yaml',
    autorestart: true,
    restart_delay: 5000,
  }]
};
```

```ini
[Unit]
Description=LogicaProxy
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/logicaproxy -config /etc/logicaproxy/config.yaml
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

```
  Client Request
        │
        ▼
┌─────────────────┐
│ API Gateway     │  Gin HTTP server, CORS, auth middleware
├─────────────────┤
│ AI Router       │  Task classification → model selection
├─────────────────┤
│ Metrics         │  Request counting, latency tracking, Prometheus export
├─────────────────┤
│ Translator      │  OpenAI ↔ Claude ↔ Gemini format conversion
├─────────────────┤
│ Executor        │  Provider-specific OAuth + request forwarding
├─────────────────┤
│ Vault           │  AES-256-GCM encrypted credential storage
├─────────────────┤
│ Watcher         │  Hot-reload config and auth files (fsnotify)
└─────────────────┘
```
```yaml
# Server
host: "127.0.0.1"            # Bind address
port: 8317                   # Listen port
debug: false                 # Debug logging

# Authentication
api-keys:                    # API keys for client auth
  - "your-key-here"
auth-dir: "~/.logicaproxy"   # OAuth token storage directory

# Request handling
request-retry: 2             # Retry failed requests
max-retry-interval: 30       # Max retry delay (seconds)

# AI Router
ai-router:
  enabled: true
  default-model: ""          # Fallback model
  routes:
    code: "claude-sonnet-4-6"
    reasoning: "claude-opus-4-6"
    fast: "claude-haiku-4-5-20251001"

# Logging
request-log: true            # Log requests
error-logs-max-files: 10     # Max error log files

# Advanced
commercial-mode: false       # Disable middleware for high concurrency
websocket-auth: false        # Require auth for WebSocket connections
```

See config.example.yaml for the complete reference.
Contributions are welcome. For major changes, please open an issue first.
```bash
# Fork and clone
git clone https://github.com/your-username/LogicaProxy.git
cd LogicaProxy

# Create feature branch
git checkout -b feature/your-feature

# Run tests
go test ./...

# Commit and push
git commit -m "feat: your feature"
git push origin feature/your-feature

# Open a Pull Request
```

MIT License — see LICENSE for details.
LogicaProxy by Rovemark
Open-source infrastructure for AI developers.