fix: read max_output_tokens param from config #6591

xiaoxiangmoe wants to merge 2 commits into openai:main

Conversation
etraut-openai left a comment:
Thanks for the contribution. We prefer to keep changes as simple and surgical as possible to reduce regression risk and code churn. Your change makes code modifications that go beyond what is needed to add support for the `model_max_output_tokens` config option.
```rust
let mut model_family =
    find_family_for_model(&model).unwrap_or_else(|| derive_default_model_family(&model));

if let Some(supports_reasoning_summaries) = cfg.model_supports_reasoning_summaries {
```

Most of the changes in this file seem unnecessary to implement this fix. Please simplify and make it more surgical.
```rust
pub sandbox_mode: Option<SandboxMode>,
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
/// Optional verbosity control for GPT-5 models (Responses API `text.verbosity`).
```

Most of the changes in this file seem unnecessary to implement this fix. Please simplify and make it more surgical.
Closing in favor of another PR that addresses the same issue.
@etraut-openai Hi, his implementation has an issue:
@xiaoxiangmoe, we needed to revert the other PR because it broke use cases with the flagship models. So yes, we'll need a different solution.

We've opted to drop support for
@etraut-openai Why did it break use cases with the flagship models?
@xiaoxiangmoe, supporting configuration knobs like this for older or more niche models is becoming a pretty big maintenance burden. We might consider supporting it if we get strong signal from the community. If you'd like, you could open a feature request, and we can see how many upvotes it gets. |
The request param `max_output_tokens` is documented in https://github.com/openai/codex/blob/main/docs/config.md, but nothing in the code actually reads that config item. This commit reads it from config for the GPT Responses API.
See #4138 for the issue report.
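The surgical shape of fix the reviewer asks for can be sketched roughly as follows: read an optional cap from config and include `max_output_tokens` in the request body only when the user set it. This is an illustrative sketch, not the actual codex-rs code; the struct and function names, and the hand-built JSON string, are assumptions for demonstration.

```rust
// Hypothetical sketch of a minimal fix: all identifiers here are
// illustrative, not the real codex-rs types.

#[derive(Default)]
struct Config {
    /// Parsed from the `model_max_output_tokens` key in config.toml (assumed name).
    model_max_output_tokens: Option<u64>,
}

/// Builds a minimal Responses API request body, emitting the
/// `max_output_tokens` field only when the config option is set.
fn request_body(model: &str, cfg: &Config) -> String {
    let mut body = format!("{{\"model\":\"{model}\"");
    if let Some(max) = cfg.model_max_output_tokens {
        body.push_str(&format!(",\"max_output_tokens\":{max}"));
    }
    body.push('}');
    body
}

fn main() {
    let capped = Config { model_max_output_tokens: Some(4096) };
    let uncapped = Config::default();
    println!("{}", request_body("gpt-5", &capped));   // field included
    println!("{}", request_body("gpt-5", &uncapped)); // field omitted
}
```

In the real codebase the request types are serialized with serde, where the same "omit when unset" behavior would come from an `Option` field rather than string building; the point of the sketch is only that the change can be confined to one optional field.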