Thanks for building and sharing this project.
I added support for newer Opus, GPT, and Gemini models in my fork and verified them with Claude Code:
- claude-opus-4.7 with adaptive thinking (low, medium, high, xhigh, max)
- gpt-5.4
- gpt-5.3-codex
- gpt-5.4-mini
- gemini-3.1-pro
- gemini-3-flash
This also covers the `max_tokens` / `max_completion_tokens` issue that affects gpt-5.3-codex and gpt-5.4-mini.
You can use it with:

```sh
npx betahi-copilot-api@latest start
```
Example Claude Code config:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:4141",
    "ANTHROPIC_AUTH_TOKEN": "dummy",
    "ANTHROPIC_MODEL": "gpt-5.3-codex",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "gpt-5.3-codex",
    "ANTHROPIC_SMALL_FAST_MODEL": "gpt-5.4-mini",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-5.4-mini",
    "COPILOT_REASONING_EFFORT": "medium",
    "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
  }
}
```
Supported reasoning effort:
- claude-opus-4.7: low, medium, high, xhigh, max
- gpt-5.4: low, medium, high, xhigh
- gpt-5.3-codex: low, medium, high, xhigh
- gpt-5.4-mini: none, low, medium
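Since the accepted effort levels differ per model, the `COPILOT_REASONING_EFFORT` value has to match whatever model `ANTHROPIC_MODEL` selects. A hypothetical sketch of overriding both from the shell instead of the settings file (assuming the variables are picked up from the environment the same way as from the `env` block above):

```sh
# Hypothetical override: same variables as the config above, set per
# session. Note the effort value must be valid for the chosen model,
# e.g. xhigh works for gpt-5.3-codex but not for gpt-5.4-mini.
export ANTHROPIC_MODEL=gpt-5.3-codex
export COPILOT_REASONING_EFFORT=xhigh
claude
```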
Repo: https://github.com/betaHi/copilot-api
README: https://github.com/betaHi/copilot-api?tab=readme-ov-file#gpt-and-gemini-support
If anyone tries it, feedback is welcome.