
fix: read max_output_tokens param from config #6591

Closed

xiaoxiangmoe wants to merge 2 commits into openai:main from xiaoxiangmoe:feat/max_output_tokens

Conversation

@xiaoxiangmoe

The request param max_output_tokens is documented in https://github.com/openai/codex/blob/main/docs/config.md, but nothing in the code reads this config item. This commit reads it from the config for the Responses API.

See #4138 for the issue report.
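The wiring the PR describes might look roughly like the following sketch. The names (`Config`, `ResponsesRequest`, `build_request`) are illustrative, not the actual codex internals; the point is that the parameter is only sent when the user configured it, so the API default applies otherwise.

```rust
// Hypothetical sketch: plumbing a `model_max_output_tokens` config value
// into a Responses API request payload. Names are illustrative.

#[derive(Default)]
struct Config {
    model_max_output_tokens: Option<u64>,
}

#[derive(Debug, PartialEq)]
struct ResponsesRequest {
    model: String,
    max_output_tokens: Option<u64>,
}

fn build_request(cfg: &Config, model: &str) -> ResponsesRequest {
    ResponsesRequest {
        model: model.to_string(),
        // Only set the parameter when configured; `None` means "omit it
        // from the request" so the server-side default stays in effect.
        max_output_tokens: cfg.model_max_output_tokens,
    }
}

fn main() {
    let cfg = Config { model_max_output_tokens: Some(4096) };
    let req = build_request(&cfg, "gpt-5");
    assert_eq!(req.max_output_tokens, Some(4096));

    let default_cfg = Config::default();
    assert_eq!(build_request(&default_cfg, "gpt-5").max_output_tokens, None);
}
```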

Contributor

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

@xiaoxiangmoe xiaoxiangmoe force-pushed the feat/max_output_tokens branch from beb22d1 to 6235b78 on November 13, 2025 at 07:40
Collaborator

@etraut-openai etraut-openai left a comment


Thanks for the contribution. We prefer to keep changes as simple and surgical as possible to reduce regression risk and code churn. Your change makes code modifications that go beyond what is needed to add support for the model_max_output_tokens config option.

let mut model_family =
    find_family_for_model(&model).unwrap_or_else(|| derive_default_model_family(&model));

if let Some(supports_reasoning_summaries) = cfg.model_supports_reasoning_summaries {
Collaborator


Most of the changes in this file seem unnecessary to implement this fix. Please simplify and make it more surgical.

pub sandbox_mode: Option<SandboxMode>,
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
/// Optional verbosity control for GPT-5 models (Responses API `text.verbosity`).
Collaborator


Most of the changes in this file seem unnecessary to implement this fix. Please simplify and make it more surgical.

@etraut-openai etraut-openai added the needs-response Additional information is requested label Nov 18, 2025
@etraut-openai
Collaborator

Closing in favor of another PR that addresses the same issue.

@etraut-openai etraut-openai removed the needs-response Additional information is requested label Nov 21, 2025
@xiaoxiangmoe
Author

@etraut-openai Hi, that implementation has an issue:
max_output_tokens cannot be configured in ConfigProfile, just as fields like model_auto_compact_token_limit and model_reasoning_summary_format currently cannot be configured in ConfigProfile.
Should I submit a separate PR to fix this?
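The profile limitation raised above comes down to override resolution: a value set in the active profile should win over the top-level config value, falling back when the profile leaves it unset. A minimal sketch, assuming illustrative struct names (`ConfigProfile`, `ConfigToml`) rather than the exact codex definitions:

```rust
// Hypothetical sketch of profile-level override resolution for
// `model_max_output_tokens`. Struct and field names are illustrative.

struct ConfigProfile {
    model_max_output_tokens: Option<u64>,
}

struct ConfigToml {
    model_max_output_tokens: Option<u64>,
}

// The profile value, when present, takes precedence over the root value.
fn resolve_max_output_tokens(profile: &ConfigProfile, root: &ConfigToml) -> Option<u64> {
    profile.model_max_output_tokens.or(root.model_max_output_tokens)
}

fn main() {
    let root = ConfigToml { model_max_output_tokens: Some(8192) };

    // Profile override wins.
    let profile = ConfigProfile { model_max_output_tokens: Some(2048) };
    assert_eq!(resolve_max_output_tokens(&profile, &root), Some(2048));

    // Unset profile field falls back to the root config.
    let empty = ConfigProfile { model_max_output_tokens: None };
    assert_eq!(resolve_max_output_tokens(&empty, &root), Some(8192));
}
```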

@etraut-openai
Collaborator

@xiaoxiangmoe, we needed to revert the other PR because it broke use cases with the flagship models. So yes, we'll need a different solution.

@etraut-openai
Collaborator

We've opted to drop support for max_output_tokens: #7100

@xiaoxiangmoe
Author

xiaoxiangmoe commented Nov 22, 2025

@etraut-openai Why did it break use cases with the flagship models?
If I can add the feature again without breaking current use cases, could I resubmit it?

@etraut-openai
Collaborator

@xiaoxiangmoe, supporting configuration knobs like this for older or more niche models is becoming a pretty big maintenance burden. We might consider supporting it if we get strong signal from the community. If you'd like, you could open a feature request, and we can see how many upvotes it gets.

