
fix: read max_output_tokens param from config#4139

Merged
etraut-openai merged 1 commit into openai:main from shallowclouds:fix/responses-api-max-output-tokens
Nov 21, 2025
Conversation

@shallowclouds
Contributor

The request param `max_output_tokens` is documented in https://github.com/openai/codex/blob/main/docs/config.md,
but nothing actually reads it from the config. This commit reads it from the config for the GPT Responses API.

see #4138 for issue report.
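The fix described above amounts to plumbing an optional config value into the outgoing request. The sketch below illustrates the idea in isolation; the type and function names are hypothetical stand-ins, not the actual codex-rs types:

```rust
// Hedged sketch: `Config`, `ResponsesRequest`, and `build_request` are
// illustrative names, not the real codex-rs identifiers.
#[derive(Debug, Clone)]
struct Config {
    /// Optional cap on model output tokens, as documented in docs/config.md.
    max_output_tokens: Option<u64>,
}

#[derive(Debug, PartialEq)]
struct ResponsesRequest {
    model: String,
    max_output_tokens: Option<u64>,
}

fn build_request(model: &str, config: &Config) -> ResponsesRequest {
    ResponsesRequest {
        model: model.to_string(),
        // Before the fix, the documented config value was never consulted;
        // the change forwards it into the Responses API request when set.
        max_output_tokens: config.max_output_tokens,
    }
}

fn main() {
    let cfg = Config { max_output_tokens: Some(4096) };
    let req = build_request("gpt-5", &cfg);
    assert_eq!(req.max_output_tokens, Some(4096));
}
```

When the option is absent from the config, the field stays `None` and the API server falls back to its own default, which preserves the pre-existing behavior for users who never set the key.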

@github-actions
Contributor

github-actions bot commented Sep 24, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

Contributor

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


Codex Review: Here are some suggestions.

About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you open a pull request for review, mark a draft as ready, or comment "@codex review". If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex fix this CI failure" or "@codex address that feedback".

@shallowclouds
Contributor Author

I have read the CLA Document and I hereby sign the CLA

github-actions bot added a commit that referenced this pull request Sep 24, 2025
@shallowclouds shallowclouds force-pushed the fix/responses-api-max-output-tokens branch 4 times, most recently from 7864a8e to 62d7b0e Compare September 25, 2025 13:12
@shallowclouds shallowclouds force-pushed the fix/responses-api-max-output-tokens branch from 62d7b0e to a41e334 Compare October 9, 2025 03:22
@shallowclouds shallowclouds force-pushed the fix/responses-api-max-output-tokens branch from a41e334 to e551c58 Compare October 22, 2025 09:44
@etraut-openai
Collaborator

Thanks for the contribution, and apologies for the slow response. We're trying to catch up on our backlog of PR reviews.

I think the PR has gone stale. Could you fix the CI failures?

@etraut-openai etraut-openai added the needs-response Additional information is requested label Nov 4, 2025
@etraut-openai
Collaborator

Closing because there hasn't been a response from contributor.

@etraut-openai etraut-openai removed the needs-response Additional information is requested label Nov 9, 2025
@github-actions github-actions bot locked and limited conversation to collaborators Nov 9, 2025
@openai openai unlocked this conversation Nov 9, 2025
youta7 added a commit to youta7/ta-codex that referenced this pull request Nov 10, 2025
@shallowclouds
Contributor Author

Closing because there hasn't been a response from contributor.

Apologies for the delayed response. Could we please reopen this PR (#4139) so I can fix the CI failures, update the code, and request a re-review?

@etraut-openai etraut-openai reopened this Nov 20, 2025
@etraut-openai
Collaborator

@shallowclouds, no problem. Reopened.

@etraut-openai
Collaborator

@shallowclouds, looks like there are compiler errors that need to be fixed.

@etraut-openai etraut-openai added the needs-response Additional information is requested label Nov 20, 2025
@shallowclouds shallowclouds force-pushed the fix/responses-api-max-output-tokens branch 2 times, most recently from 1ad3f47 to 00127e6 Compare November 21, 2025 03:17
@shallowclouds
Contributor Author

@shallowclouds, looks like there are compiler errors that need to be fixed.

Thanks a lot. I've fixed the compile errors; could we run the CI again?

@etraut-openai etraut-openai removed the needs-response Additional information is requested label Nov 21, 2025
@etraut-openai
Collaborator

I reran CI, and tests are failing. Can you take a look?

@etraut-openai etraut-openai added needs-response Additional information is requested and removed needs-response Additional information is requested labels Nov 21, 2025
@shallowclouds shallowclouds force-pushed the fix/responses-api-max-output-tokens branch 2 times, most recently from 7d7a383 to ee75cd4 Compare November 21, 2025 05:33
@shallowclouds
Contributor Author

I reran CI, and tests are failing. Can you take a look?

Got it. I've fixed the test failures in the CI.

@etraut-openai
Collaborator

@shallowclouds, tests are still failing.

The request param `max_output_tokens` is documented in
https://github.com/openai/codex/blob/main/docs/config.md,
but nothing reads it; this commit reads it from the
config for the GPT Responses API.

Change-Id: I7e33ea36669249a3a75685e8008df4277ca9bdfb
Signed-off-by: Yorling <shallowcloud@yeah.net>
@shallowclouds shallowclouds force-pushed the fix/responses-api-max-output-tokens branch from ee75cd4 to 67eafeb Compare November 21, 2025 06:17
@shallowclouds
Contributor Author

@shallowclouds, tests are still failing.

Sorry for my mistakes. Could we run the CI again, please?

(By the way, should I be able to run these tests locally? I tried, but got a timeout error instead.)

$ RUST_BACKTRACE=1 cargo test compact_resume_and_fork_preserve_model_history_view

running 1 test
test suite::compact_resume_fork::compact_resume_and_fork_preserve_model_history_view ... FAILED

failures:

---- suite::compact_resume_fork::compact_resume_and_fork_preserve_model_history_view stdout ----

thread 'suite::compact_resume_fork::compact_resume_and_fork_preserve_model_history_view' panicked at /data00/home/mymymy/project/codex-fix/codex-rs/core/tests/common/lib.rs:159:14:
timeout waiting for event: Elapsed(())
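The panic in the log comes from a bounded wait inside the test harness. A minimal sketch of that pattern, assuming a simplified std-only helper (the real harness in `codex-rs/core/tests/common/lib.rs` may use async primitives instead):

```rust
use std::sync::mpsc;
use std::time::Duration;

// Hedged sketch: a helper that waits for a test event with a time limit
// and panics with a message like the one in the CI log if the limit is hit.
fn wait_for_event<T>(rx: &mpsc::Receiver<T>, limit: Duration) -> T {
    rx.recv_timeout(limit)
        .unwrap_or_else(|e| panic!("timeout waiting for event: {e:?}"))
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send("task_complete").unwrap();
    // Succeeds immediately here; on a slow machine, a real event produced by
    // background work may exceed the limit and trigger the timeout panic.
    let ev = wait_for_event(&rx, Duration::from_secs(5));
    println!("{ev}");
}
```

Because the limit is fixed, a machine slower than the CI servers can hit the timeout even when the code under test is correct, which matches the explanation in the next comment.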

@etraut-openai
Collaborator

I'd expect that you would be able to run these locally. Maybe your system is slower than the CI servers and dev machines that we're using. There are time limits in the tests, and it looks like you're hitting a timeout.

@etraut-openai etraut-openai merged commit c9e149f into openai:main Nov 21, 2025
25 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Nov 21, 2025
@shallowclouds shallowclouds deleted the fix/responses-api-max-output-tokens branch November 21, 2025 06:47
@etraut-openai
Collaborator

@shallowclouds, we discovered that this change broke our mainstream use cases with our flagship models. Our unit tests didn't catch this regression because they don't actually run against the cloud models; that would make the CI tests slow and unreliable. We decided to revert this PR for now. It will need to be added back in a way that doesn't cause a regression.

@etraut-openai
Collaborator

We've opted to drop support for `max_output_tokens` entirely: #7100

@shallowclouds shallowclouds restored the fix/responses-api-max-output-tokens branch November 24, 2025 03:01