Turn any AI CLI into an OpenAI-compatible API — no API keys, no billing.
```
npm install -g bridgerapi
bridgerapi
```
bridgerapi runs a local HTTP server that speaks the OpenAI API format (`/v1/chat/completions`, `/v1/models`, `/health`). Any app that supports OpenAI-compatible endpoints (Goose, Cursor, Continue, Open WebUI, and others) can point at it and use whichever AI CLI you have authenticated on your machine.
You keep using the free tier or the subscription you already pay for. No Anthropic API key, no OpenAI API key, no extra cost.
| CLI | Install | Auth |
|---|---|---|
| Claude Code | claude.ai/download | `claude login` |
| Gemini CLI | `npm i -g @google/gemini-cli` | `gemini auth` |
| Codex CLI | `npm i -g @openai/codex` | `codex auth` |
| GitHub Copilot | `gh extension install github/gh-copilot` | `gh auth login` |
| Droid (Factory.ai) | factory.ai | `export FACTORY_API_KEY=fk-...` |
Install any one of these and bridgerapi will pick it up automatically.
```
bridgerapi
```

Running with no arguments detects installed backends, asks for a port, and lets you choose between running in the foreground or installing a background service that auto-starts on login.
```
bridgerapi start             # run in foreground (default port 8082)
bridgerapi start --port 9000 # custom port
bridgerapi install           # install as background service
bridgerapi uninstall         # remove background service
bridgerapi status            # check if running
bridgerapi chat              # interactive chat session in terminal
```
```
bridgerapi chat
```

Opens an interactive REPL: type messages, get streamed responses. Conversation history is kept across turns.
Once running, point any OpenAI-compatible app at:
| Setting | Value |
|---|---|
| Base URL | `http://127.0.0.1:8082/v1` |
| API Key | `local` (any non-empty string) |
| Model | any model your CLI supports |
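For a quick smoke test without any client app, you can hit the endpoint directly. A minimal sketch using Python's standard library (the model name is just an example; use any model your CLI supports):

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request.
payload = {
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "stream": False,
}
req = urllib.request.Request(
    "http://127.0.0.1:8082/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer local",  # any non-empty string is accepted
    },
)
# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     body = json.load(resp)
#     print(body["choices"][0]["message"]["content"])
```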
**Goose** (`~/.config/goose/config.yaml`):

```yaml
GOOSE_PROVIDER: openai
GOOSE_MODEL: claude-sonnet-4-6
OPENAI_HOST: http://127.0.0.1:8082
```

**Continue** (`config.json`):

```json
{
  "models": [{
    "title": "bridgerapi",
    "provider": "openai",
    "model": "claude-sonnet-4-6",
    "apiBase": "http://127.0.0.1:8082/v1",
    "apiKey": "local"
  }]
}
```

**Open WebUI**: set OpenAI API Base URL to `http://127.0.0.1:8082/v1` and API Key to `local`.
```
Your app → POST /v1/chat/completions
        ↓
bridgerapi (local HTTP server)
        ↓
claude / gemini / codex / gh copilot (subprocess)
        ↓
streamed response back to your app
```
bridgerapi converts OpenAI message format to a plain prompt, spawns the appropriate CLI as a subprocess using your existing auth, and streams the output back as SSE — exactly what the OpenAI streaming format expects.
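The conversion step can be pictured with a small sketch (an illustration of the idea, not bridgerapi's actual code — role labels and separators here are assumptions):

```python
def messages_to_prompt(messages):
    """Flatten an OpenAI-style messages array into one plain prompt
    that can be handed to a CLI on stdin or as an argument."""
    parts = []
    for m in messages:
        role = m["role"]
        if role == "system":
            parts.append(f"[system] {m['content']}")
        elif role == "assistant":
            parts.append(f"Assistant: {m['content']}")
        else:
            parts.append(f"User: {m['content']}")
    return "\n\n".join(parts)

prompt = messages_to_prompt([
    {"role": "system", "content": "Be brief."},
    {"role": "user", "content": "What is 2+2?"},
])
print(prompt)
```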
Model routing is automatic by prefix:
| Model prefix | Backend |
|---|---|
| `claude-*` | Claude Code CLI |
| `gemini-*` | Gemini CLI |
| `gpt-*`, `o3`, `o4` | Codex CLI |
| `copilot` | GitHub Copilot CLI |
| `glm-*`, `kimi-*`, `minimax-*`, `droid` | Droid CLI (Factory.ai) |
If the requested backend isn't available, it falls back to the first one that is.
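The prefix routing plus fallback can be sketched like this (illustrative only; the real dispatch lives inside bridgerapi):

```python
# Prefix → backend routing, mirroring the table above.
ROUTES = [
    (("claude-",), "claude"),
    (("gemini-",), "gemini"),
    (("gpt-", "o3", "o4"), "codex"),
    (("copilot",), "copilot"),
    (("glm-", "kimi-", "minimax-", "droid"), "droid"),
]

def pick_backend(model, available):
    for prefixes, backend in ROUTES:
        if model.startswith(prefixes) and backend in available:
            return backend
    # Requested backend missing or unknown prefix:
    # fall back to the first available one.
    return next(iter(available), None)

print(pick_backend("claude-sonnet-4-6", ["claude", "gemini"]))  # claude
print(pick_backend("gpt-4o", ["claude"]))  # codex unavailable → claude
```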
On macOS, `bridgerapi install` creates a LaunchAgent that starts automatically on login and restarts if it crashes. Logs go to `~/.bridgerapi/server.log`.

On Linux, it creates a systemd user service (`systemctl --user`).
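On Linux, the generated unit looks roughly like the following sketch (the file name, `ExecStart` path, and restart policy here are assumptions; inspect the file bridgerapi actually writes for the real values):

```ini
# ~/.config/systemd/user/bridgerapi.service (illustrative)
[Unit]
Description=bridgerapi local OpenAI-compatible server

[Service]
ExecStart=bridgerapi start
Restart=on-failure

[Install]
WantedBy=default.target
```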
```
bridgerapi install   # installs and starts
bridgerapi status    # check pid and port
bridgerapi uninstall # stops and removes
```
- Node.js 18+
- At least one AI CLI installed and authenticated
MIT — teodorwaltervido