🤖 refactor: centralize model constants + fix previousResponseId bug #584
Conversation
- Fix GPT_MINI to use correct model ID (gpt-5.1-codex-mini)
- Remove duplicated string constant exports (SONNET, GPT, etc.); callers now use KNOWN_MODELS.SONNET.id directly
- Export KNOWN_MODELS object for direct access
- Build MODEL_NAMES programmatically from KNOWN_MODELS
  - Groups by provider automatically
  - No manual duplication needed
- Add integration test verifying all known models exist in models.json
  - Tests run outside IPC layer
  - Catches missing/renamed models early

Generated with `mux`
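The commit above can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual `knownModels.ts`: the `KnownModel` shape, the specific entries, and the ID strings are assumptions based on the commit message.

```typescript
// Hypothetical sketch of the centralized constants (shapes and IDs assumed).
type KnownModel = { provider: string; id: string };

const KNOWN_MODELS: Record<string, KnownModel> = {
  SONNET: { provider: "anthropic", id: "anthropic:claude-sonnet-4-5" },
  GPT: { provider: "openai", id: "openai:gpt-5.1-codex" },
  GPT_MINI: { provider: "openai", id: "openai:gpt-5.1-codex-mini" },
};

// MODEL_NAMES is built programmatically from KNOWN_MODELS:
// grouped by provider, with no manually duplicated lists to drift.
const MODEL_NAMES = Object.values(KNOWN_MODELS).reduce<Record<string, string[]>>(
  (groups, model) => {
    if (!groups[model.provider]) groups[model.provider] = [];
    groups[model.provider].push(model.id);
    return groups;
  },
  {}
);
```

Because `MODEL_NAMES` is derived rather than hand-written, adding a model to `KNOWN_MODELS` is the only edit needed; the grouped listing updates automatically.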
Fixes APICallError when switching models or using non-reasoning models.

Previously, we would pass previousResponseId from any previous OpenAI assistant message, causing errors when:
- Switching between models (e.g., gpt-5.1-codex → gpt-5-pro)
- Using models without reasoning support
- Response IDs expired or were invalid

Now we:
1. Only extract previousResponseId when the current model uses reasoning
2. Only use it if the previous message was from the same model
3. Stop searching if we encounter a different model (conversation context changed)

This prevents "Previous response with id 'resp_...' not found" errors when the response ID is invalid for the current model/context.

Generated with `mux`
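The three rules above can be sketched as a single backward scan over the message history. This is an illustrative sketch, not the actual implementation: the `Message` shape and the function name are assumptions.

```typescript
// Hypothetical message shape; the real history type is assumed.
type Message = {
  role: "user" | "assistant";
  model?: string;
  responseId?: string;
};

function findPreviousResponseId(
  history: Message[],
  currentModel: string,
  usesReasoning: boolean
): string | undefined {
  // Rule 1: only extract previousResponseId when the current model uses reasoning.
  if (!usesReasoning) return undefined;

  // Walk backwards to the most recent assistant message.
  for (let i = history.length - 1; i >= 0; i--) {
    const msg = history[i];
    if (msg.role !== "assistant") continue;
    // Rule 3: a different model means the conversation context changed; stop.
    if (msg.model !== currentModel) return undefined;
    // Rule 2: same model, so its response ID is safe to reuse.
    return msg.responseId;
  }
  return undefined;
}
```

Stopping at the first assistant message from a different model, rather than continuing to search, is what prevents reusing a `resp_...` ID that the provider no longer associates with the current model.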
- Add integration test for multi-turn conversations with reasoning models
  - Verifies previousResponseId is passed correctly between turns
  - Tests GPT_CODEX specifically to catch response ID bugs
  - Validates responseId exists in assistant message metadata
- Update PROVIDER_CONFIGS to use KNOWN_MODELS constants
  - Ensures tests use the same model IDs as production code
  - Prevents drift between test models and known models

Generated with `mux`
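The known-models integration check mentioned in these commits can be sketched as a simple diff against the catalog. The constant names and the parsed `models.json` stand-in below are assumptions for illustration.

```typescript
// Hypothetical sketch: verify every known model exists in models.json.
const KNOWN_MODELS: Record<string, { id: string }> = {
  SONNET: { id: "anthropic:claude-sonnet-4-5" },
  GPT_MINI: { id: "openai:gpt-5.1-codex-mini" },
};

// Stand-in for the parsed models.json catalog.
const modelsJson: Record<string, unknown> = {
  "anthropic:claude-sonnet-4-5": {},
  "openai:gpt-5.1-codex-mini": {},
};

// Any model missing from the catalog (e.g. after a rename upstream)
// shows up here, failing the test early and outside the IPC layer.
const missing = Object.values(KNOWN_MODELS)
  .map((m) => m.id)
  .filter((id) => !(id in modelsJson));
```

Running this outside the IPC layer means a renamed or removed model fails fast in CI rather than surfacing as a runtime provider error.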
💡 Codex Review
Here are some automated review suggestions for this pull request.
The E2E test failure was caused by MODEL_ABBREVIATIONS being sorted alphabetically, which made gpt-5.1-codex the first model in the LRU instead of the default anthropic:claude-sonnet-4-5.

Fix: Ensure defaultModel is always first in the DEFAULT_MODELS array.

_Generated with `mux`_
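The ordering fix can be sketched as below. The constant names come from the comment above; the concrete abbreviation entries and the construction of `DEFAULT_MODELS` are assumptions for illustration.

```typescript
// Hypothetical sketch of the ordering fix (entries assumed).
const defaultModel = "anthropic:claude-sonnet-4-5";

const MODEL_ABBREVIATIONS: Record<string, string> = {
  codex: "openai:gpt-5.1-codex",
  sonnet: defaultModel,
};

// Ensure defaultModel is always first in DEFAULT_MODELS, regardless of
// the alphabetical sort applied to the abbreviation values.
const DEFAULT_MODELS = [
  defaultModel,
  ...Object.values(MODEL_ABBREVIATIONS)
    .sort()
    .filter((m) => m !== defaultModel),
];
```

Pinning the default to index 0 (and filtering it out of the sorted tail) makes the LRU seed deterministic instead of depending on whichever model sorts first alphabetically.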
Summary

Centralizes all model metadata into `knownModels.ts` and fixes a bug where `previousResponseId` was incorrectly passed between different models or contexts.

Changes

1. Centralized Model Constants (`knownModels.ts`)
- Fixed `GPT_MINI` model ID: changed to `gpt-5.1-codex-mini` (the correct LiteLLM name)
- Each model entry carries `provider` + `providerModelId`
- Callers use `KNOWN_MODELS.SONNET.id` directly
- `MODEL_NAMES`: auto-groups models by provider via `.reduce()`
- `KnownModelKey` is derived from the object keys, so it can never fall out of sync
- Integration test verifies all known models exist in `models.json`

2. Fixed `previousResponseId` Bug

Problem: We were passing `previousResponseId` from any previous OpenAI message, causing `APICallError: Previous response with id 'resp_...' not found` when switching models.

Solution: Now only pass `previousResponseId` when:
- The current model uses reasoning (`reasoningEffort` is set)
- The previous message was from the same model

3. Test Improvements
- Updated `PROVIDER_CONFIGS` to use `KNOWN_MODELS` constants
- Added coverage for `responseId` persistence across conversation turns

Benefits

Testing

Generated with `mux`