bridgerapi

Turn any AI CLI into an OpenAI-compatible API — no API keys, no billing.

npm install -g bridgerapi
bridgerapi

What it does

bridgerapi runs a local HTTP server that speaks the OpenAI API format (/v1/chat/completions, /v1/models, /health). Any app that supports OpenAI-compatible endpoints — Goose, Cursor, Continue, Open WebUI, and others — can point at it and use whichever AI CLI you have authenticated on your machine.

You keep using the free tier or the subscription you already pay for. No Anthropic API key, no OpenAI API key, no extra cost.
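For reference, requests to the server use the standard OpenAI chat-completions body shape. A minimal sketch of what a client app would POST to /v1/chat/completions (the model name and messages are only examples):

```python
import json

# Example body an OpenAI-compatible app POSTs to
# http://127.0.0.1:8082/v1/chat/completions (model name is illustrative).
body = {
    "model": "claude-sonnet-4-6",
    "stream": True,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello in one sentence."},
    ],
}

print(json.dumps(body, indent=2))
```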


Supported backends

CLI                  Install                                   Auth
Claude Code          claude.ai/download                        claude login
Gemini CLI           npm i -g @google/gemini-cli               gemini auth
Codex CLI            npm i -g @openai/codex                    codex auth
GitHub Copilot       gh extension install github/gh-copilot    gh auth login
Droid (Factory.ai)   factory.ai                                export FACTORY_API_KEY=fk-...

Install any one of these and bridgerapi will pick it up automatically.
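Auto-detection can be approximated by probing PATH for each CLI binary. A sketch of the idea, not bridgerapi's actual code (the binary names are assumptions based on the table above):

```python
import shutil

# Binary names are assumptions; the real detection logic may differ.
KNOWN_CLIS = ["claude", "gemini", "codex", "gh", "droid"]

def detect_backends(clis=KNOWN_CLIS):
    """Return the subset of known CLI binaries found on PATH."""
    return [cli for cli in clis if shutil.which(cli) is not None]

print(detect_backends())
```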


Usage

Interactive setup (recommended)

bridgerapi

Detects installed backends, asks for a port, and lets you choose between running in the foreground or installing as a background service that auto-starts on login.

Manual commands

bridgerapi start              # run in foreground (default port 8082)
bridgerapi start --port 9000  # custom port
bridgerapi install            # install as background service
bridgerapi uninstall          # remove background service
bridgerapi status             # check if running
bridgerapi chat               # interactive chat session in terminal

Chat mode

bridgerapi chat

Opens an interactive REPL. Type messages, get streamed responses. Keeps conversation history across turns.
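Keeping history across turns amounts to appending each exchange to one messages list and resending it with every request — a minimal sketch of the idea, not the actual implementation:

```python
# Each REPL turn appends to the same messages list, which is what
# gets sent with the next request so the model sees prior context.
history = []

def add_turn(history, user_text, assistant_text):
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

add_turn(history, "What is 2+2?", "4")
add_turn(history, "And doubled?", "8")
print(len(history))  # 4 messages after two turns
```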


Connecting an app

Once running, point any OpenAI-compatible app at:

Setting    Value
Base URL   http://127.0.0.1:8082/v1
API Key    local (any non-empty string)
Model      any model your CLI supports

Goose

# ~/.config/goose/config.yaml
GOOSE_PROVIDER: openai
GOOSE_MODEL: claude-sonnet-4-6
OPENAI_HOST: http://127.0.0.1:8082

Continue (VS Code / JetBrains)

{
  "models": [{
    "title": "bridgerapi",
    "provider": "openai",
    "model": "claude-sonnet-4-6",
    "apiBase": "http://127.0.0.1:8082/v1",
    "apiKey": "local"
  }]
}

Open WebUI

Set OpenAI API Base URL to http://127.0.0.1:8082/v1 and API Key to local.


How it works

Your app  →  POST /v1/chat/completions
              ↓
          bridgerapi (local HTTP server)
              ↓
          claude / gemini / codex / gh copilot  (subprocess)
              ↓
          streamed response back to your app

bridgerapi converts OpenAI message format to a plain prompt, spawns the appropriate CLI as a subprocess using your existing auth, and streams the output back as SSE — exactly what the OpenAI streaming format expects.
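The conversion and streaming steps can be sketched roughly like this (the prompt layout and chunk fields are illustrative; the real code surely differs):

```python
import json

def messages_to_prompt(messages):
    """Flatten OpenAI-style messages into a single plain-text prompt."""
    return "\n\n".join(f"{m['role']}: {m['content']}" for m in messages)

def sse_chunk(text, model="claude-sonnet-4-6"):
    """Wrap a piece of CLI output as an OpenAI-style streaming SSE event."""
    payload = {
        "object": "chat.completion.chunk",
        "model": model,
        "choices": [{"index": 0, "delta": {"content": text}}],
    }
    return f"data: {json.dumps(payload)}\n\n"

msgs = [{"role": "user", "content": "hi"}]
print(messages_to_prompt(msgs))
print(sse_chunk("Hello"), end="")
```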

Model routing is automatic by prefix:

  • claude-* → Claude Code CLI
  • gemini-* → Gemini CLI
  • gpt-*, o3, o4 → Codex CLI
  • copilot → GitHub Copilot CLI
  • glm-*, kimi-*, minimax-*, droid → Droid CLI (Factory.ai)

If the requested backend isn't available, it falls back to the first one that is.
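The routing rules above boil down to a prefix table plus a fallback. A sketch under those rules (the backend identifiers here are made up for illustration):

```python
# Prefix-to-backend routing, sketched from the list above.
ROUTES = [
    ("claude-", "claude"),
    ("gemini-", "gemini"),
    ("gpt-", "codex"), ("o3", "codex"), ("o4", "codex"),
    ("copilot", "copilot"),
    ("glm-", "droid"), ("kimi-", "droid"),
    ("minimax-", "droid"), ("droid", "droid"),
]

def route(model, available):
    """Pick a backend by model-name prefix; fall back to the first available."""
    for prefix, backend in ROUTES:
        if model.startswith(prefix) and backend in available:
            return backend
    return available[0] if available else None

print(route("claude-sonnet-4-6", ["claude", "gemini"]))  # claude
print(route("gpt-5", ["claude"]))                        # falls back to claude
```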


Background service

On macOS, bridgerapi install creates a LaunchAgent that starts automatically on login and restarts if it crashes. Logs go to ~/.bridgerapi/server.log.

On Linux, it creates a systemd user service (systemctl --user).

bridgerapi install    # installs and starts
bridgerapi status     # check pid and port
bridgerapi uninstall  # stops and removes
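On macOS, the generated LaunchAgent might look roughly like this — the label, binary path, and port below are illustrative, not the exact file bridgerapi writes:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.bridgerapi.server</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/bridgerapi</string>
    <string>start</string>
    <string>--port</string>
    <string>8082</string>
  </array>
  <key>RunAtLoad</key><true/>   <!-- start on login -->
  <key>KeepAlive</key><true/>   <!-- restart on crash -->
  <key>StandardOutPath</key><string>/Users/you/.bridgerapi/server.log</string>
</dict>
</plist>
```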

Requirements

  • Node.js 18+
  • At least one AI CLI installed and authenticated

License

MIT — teodorwaltervido
