Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
```diff
 _ if slug.starts_with("gpt-5-codex") => Some(ModelInfo {
     context_window: 272_000,
     max_output_tokens: 128_000,
-    auto_compact_token_limit: Some(250_000),
+    auto_compact_token_limit: Some(350_000),
```
Keep auto-compaction below context window
The new `auto_compact_token_limit` for `gpt-5-codex` is set to `350_000` while the same block still declares a `context_window` of `272_000`. Compaction is only triggered once `get_auto_compact_token_limit()` is exceeded (see the check around `token_limit_reached` in `core/src/codex.rs`), so raising the limit above the context window means the code now waits until the conversation already exceeds the model's documented capacity before summarizing. This lets requests hit OpenAI "context length exceeded" errors instead of compacting pre‑emptively. Either keep the threshold below the context window or update the window value consistently.
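The failure mode is easy to see in a minimal sketch; the `ModelInfo` struct and `should_compact` helper below are illustrative stand-ins, not the actual types or trigger logic in `core/src/codex.rs`:

```rust
// Illustrative sketch of how an auto-compact threshold interacts with the
// context window. Names and fields are assumptions, not the codex-rs API.
struct ModelInfo {
    context_window: u64,
    auto_compact_token_limit: Option<u64>,
}

impl ModelInfo {
    /// Compaction fires only once the running total exceeds the limit, so a
    /// limit above `context_window` means the request overflows first.
    fn should_compact(&self, total_tokens: u64) -> bool {
        match self.auto_compact_token_limit {
            Some(limit) => total_tokens > limit,
            None => false,
        }
    }
}

fn main() {
    let info = ModelInfo {
        context_window: 272_000,
        auto_compact_token_limit: Some(350_000),
    };
    // 300k tokens already exceeds the 272k context window, but the 350k
    // threshold has not been reached, so no compaction is triggered.
    let total: u64 = 300_000;
    assert!(total > info.context_window);
    assert!(!info.should_compact(total));
    println!(
        "exceeds window: {}, compacts: {}",
        total > info.context_window,
        info.should_compact(total)
    );
}
```

With a `350_000` threshold, the conversation has to overrun the `272_000`-token window by roughly 78k tokens before compaction can fire, which is exactly the point at which the API starts rejecting requests.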
Set 350k-token auto-compaction limit for gpt-5-codex and update comments for a better description