Conversation
…rate-limit (Gitlawb#822) Add GITHUB_MODEL env var support with higher priority than OPENAI_MODEL (backward-compatible fallback). Map default copilot alias to "auto" on the Copilot API for the ~10% discount. Show the resolved model name in the startup screen instead of the raw "github:copilot" alias. Parse x-ratelimit-* headers from GitHub API responses and surface them in the session cost summary at exit. Closes Gitlawb#822
…able and migrate from copilot aliases to explicit models
Description
When using OpenClaude with `CLAUDE_CODE_USE_GITHUB=1`, users previously had no way to specify which model to use (unlike every other provider), couldn't see the resolved model ID at startup, and had no visibility into rate-limit quota consumption. This PR addresses all three issues.

Related Issues
Gitlawb#822
Changes
1. **`GITHUB_MODEL` env var support**
   - New `GITHUB_MODEL` env var with higher priority than `OPENAI_MODEL` (backward-compatible fallback), consistent with `GEMINI_MODEL`, `MISTRAL_MODEL`, etc.
   - `resolveProviderRequest()` now reads `GITHUB_MODEL` → `OPENAI_MODEL` → default when `isGithubMode` is set
   - `createOpenAIShimClient()` propagates `GITHUB_MODEL` → `OPENAI_MODEL` if the latter is unset
2. **Default model → `auto` for the Copilot API (~10% discount)**
   - `normalizeGithubCopilotModel()` now maps `copilot`, empty, and `auto` to the literal `"auto"` model ID, which gets the ~10% discount on the Copilot API
   - `normalizeGithubModelsApiModel()` maps `auto` → `gpt-4o` (since the Models API doesn't support a native `auto`)
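A minimal sketch of the precedence and alias mapping described above. The function name `normalizeGithubCopilotModel` comes from the PR, but its body, the `resolveGithubModel` helper, the `DEFAULT_GITHUB_MODEL` constant, and the plain-object `env` parameter are illustrative assumptions, not the actual implementation:

```typescript
// Assumed default alias before normalization (illustrative only).
const DEFAULT_GITHUB_MODEL = "copilot";

// Precedence sketch: GITHUB_MODEL wins, then the legacy OPENAI_MODEL
// fallback for backward compatibility, then the default alias.
function resolveGithubModel(env: Record<string, string | undefined>): string {
  return env.GITHUB_MODEL ?? env.OPENAI_MODEL ?? DEFAULT_GITHUB_MODEL;
}

// Maps the copilot alias, an empty value, and "auto" to the literal
// "auto" model ID (the one that gets the ~10% Copilot API discount);
// any explicit model name passes through unchanged.
function normalizeGithubCopilotModel(model: string): string {
  const m = model.trim().toLowerCase();
  return m === "" || m === "copilot" || m === "auto" ? "auto" : model;
}
```

For example, under this sketch `GITHUB_MODEL=gpt-4.1` overrides an existing `OPENAI_MODEL`, while an unset `GITHUB_MODEL` falls back to `OPENAI_MODEL` and only then to the default.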
3. **Startup screen shows resolved model**
   - `detectProvider()` in `StartupScreen.ts` now uses `resolveProviderRequest()` to show the actual model ID (e.g., `auto`, `gpt-4.1`) instead of the raw alias `github:copilot`
   - `getGithubProviderModel()` in `ProviderManager.tsx` now checks `GITHUB_MODEL` first
4. **Rate-limit header surfacing**
   - New `githubRateLimit.ts` singleton that parses `x-ratelimit-*` headers from GitHub API responses
   - `openaiShim.ts` updated to capture headers after every successful response in GitHub mode
   - `formatTotalCost()` in `cost-tracker.ts` appends the rate-limit summary when available
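The header parsing above might look roughly like the following sketch. The `x-ratelimit-*` header names are the ones the PR parses, but the `GithubRateLimit` shape, the field names, and the tolerance for partial data are assumptions for illustration:

```typescript
// Hypothetical shape for the parsed quota snapshot (not the PR's actual type).
interface GithubRateLimit {
  limit?: number;
  remaining?: number;
  resetAt?: Date; // derived from x-ratelimit-reset (epoch seconds)
}

// Parse the rate-limit headers from one response, tolerating missing or
// malformed values so a partial snapshot is still usable.
function parseRateLimitHeaders(headers: Record<string, string>): GithubRateLimit {
  const num = (name: string): number | undefined => {
    const raw = headers[name];
    const n = raw === undefined ? NaN : Number(raw);
    return Number.isFinite(n) ? n : undefined;
  };
  const reset = num("x-ratelimit-reset");
  return {
    limit: num("x-ratelimit-limit"),
    remaining: num("x-ratelimit-remaining"),
    resetAt: reset === undefined ? undefined : new Date(reset * 1000),
  };
}
```

A singleton would then overwrite its stored snapshot with this result after each successful response, so the exit-time cost summary reflects the last known quota.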
5. **Documentation**
   - `.env.example` updated with `GITHUB_MODEL` and `GITHUB_BASE_URL` in both the system-wide and provider sections
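The documented variables might appear in `.env.example` as a fragment like this — the variable names come from the PR, but the values and comments shown here are placeholders, not the shipped defaults:

```
# GitHub provider (illustrative values)
CLAUDE_CODE_USE_GITHUB=1
GITHUB_MODEL=auto        # takes priority over OPENAI_MODEL when set
GITHUB_BASE_URL=         # optional override of the GitHub API base URL
```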
Tests

- `providerConfig.github.test.ts`: added 13 new tests for `GITHUB_MODEL`, `normalizeGithubCopilotModel`, and the `auto` mapping
- `githubRateLimit.test.ts`: new file covering parsing, formatting, partial data, and reset

Checklist