fix: read max_output_tokens param from config #4139
Conversation
All contributors have signed the CLA ✍️ ✅
Codex Review: Here are some suggestions.
I have read the CLA Document and I hereby sign the CLA
Force-pushed from 7864a8e to 62d7b0e.
Force-pushed from 62d7b0e to a41e334.
Force-pushed from a41e334 to e551c58.
Thanks for the contribution, and apologies for the slow response. We're trying to catch up on our backlog of PR reviews. I think the PR has gone stale. Could you fix the CI failures?
Closing because there hasn't been a response from the contributor.
Apologies for the delayed response. Could we please reopen this PR (#4139) so I can fix the CI failures, update the code, and request a re-review?
@shallowclouds, no problem. Reopened.
@shallowclouds, looks like there are compiler errors that need to be fixed.
Force-pushed from 1ad3f47 to 00127e6.
Thanks a lot! I've fixed the compile errors; could we run CI again?
I reran CI, and tests are failing. Can you take a look?
Force-pushed from 7d7a383 to ee75cd4.
Got it. I've fixed the test errors in CI.
@shallowclouds, tests are still failing.
The request param `max_output_tokens` is documented in https://github.com/openai/codex/blob/main/docs/config.md, but nothing in the code reads it. This commit reads it from the config for the GPT Responses API. Change-Id: I7e33ea36669249a3a75685e8008df4277ca9bdfb Signed-off-by: Yorling <shallowcloud@yeah.net>
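A minimal sketch of what the commit describes: take an optional `max_output_tokens` value from the loaded config and pass it through to the Responses API request when set. Type and field names here (`Config`, `ResponsesRequest`, `model_max_output_tokens`) are illustrative, not the actual codex-rs definitions.

```rust
// Illustrative sketch only; not the real codex-rs types.
#[derive(Default)]
struct Config {
    /// Optional cap on generated tokens, as read from config.toml.
    model_max_output_tokens: Option<u64>,
}

#[derive(Default, Debug)]
struct ResponsesRequest {
    max_output_tokens: Option<u64>,
}

fn build_request(config: &Config) -> ResponsesRequest {
    ResponsesRequest {
        // Only set the field when the user configured it, so the
        // serialized request omits it and the API default applies otherwise.
        max_output_tokens: config.model_max_output_tokens,
    }
}

fn main() {
    let cfg = Config { model_max_output_tokens: Some(4096) };
    assert_eq!(build_request(&cfg).max_output_tokens, Some(4096));

    // Unset in config: the request leaves the parameter out entirely.
    assert_eq!(build_request(&Config::default()).max_output_tokens, None);
    println!("ok");
}
```

Keeping the field `Option<u64>` rather than defaulting it to a number matters here: sending an arbitrary hard cap to the API is exactly the kind of change that could regress mainstream use cases.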
Force-pushed from ee75cd4 to 67eafeb.
Sorry for my mistakes. Could we run CI again, please? (By the way, can I run these tests locally? I tried, but got a timeout error.)
I'd expect that you would be able to run these locally. Maybe your system is slower than the CI servers and dev machines that we're using. There are time limits in the tests, and it looks like you're hitting a timeout.
@shallowclouds, we discovered that this change broke our mainstream use cases with our flagship models. Our unit tests didn't catch this regression because they don't actually run against the cloud models; that would make the CI tests slow and unreliable. We decided to revert this PR for now. It will need to be added back in a way that doesn't cause a regression.
We've opted to drop support for
The request param `max_output_tokens` is documented in https://github.com/openai/codex/blob/main/docs/config.md, but nothing in the code reads it; this commit reads it from the config for the GPT Responses API.
See #4138 for the issue report.