Auto-route /chat/completions to /responses for unsupported chat completion features #233
Merged
Erin McNulty (erin2722) merged 3 commits on May 11, 2026
Conversation
    Some(openai::OutputItemType::Message) => {
        // Extract text content from message output items.
        // `phase` is a message-level field (not content-level); it is stored in
Contributor (Author)
This was a bit of a separate change that I found because of the additional payload tests. We were stripping the `phase` parameter from messages when going through the universal format, and while this doesn't cause an error, it can negatively impact the chat responses -- https://developers.openai.com/api/docs/guides/reasoning#phase-parameter
These changes stick it in `provider_options` for the universal round trip.
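The round trip described above can be sketched roughly as follows. This is a minimal illustration, not Lingua's actual types: `UniversalMessage`, `store_phase`, and `restore_phase` are hypothetical names, and the real universal format is more involved.

```rust
use std::collections::HashMap;

// Hypothetical simplified universal message; field names are assumptions,
// not the actual Lingua format.
#[derive(Debug, Clone)]
struct UniversalMessage {
    content: String,
    provider_options: HashMap<String, String>,
}

// Store the message-level `phase` field in provider_options so it
// survives the universal round trip instead of being stripped.
fn store_phase(phase: Option<&str>, msg: &mut UniversalMessage) {
    if let Some(p) = phase {
        msg.provider_options.insert("phase".to_string(), p.to_string());
    }
}

// Recover `phase` when converting back to the provider format.
fn restore_phase(msg: &UniversalMessage) -> Option<String> {
    msg.provider_options.get("phase").cloned()
}

fn main() {
    let mut msg = UniversalMessage {
        content: "hello".to_string(),
        provider_options: HashMap::new(),
    };
    store_phase(Some("reasoning"), &mut msg);
    // The phase parameter is no longer lost on the round trip.
    assert_eq!(restore_phase(&msg).as_deref(), Some("reasoning"));
    println!("{:?}", msg);
}
```

The point is just that provider-specific, message-level fields get tucked into `provider_options` rather than dropped, so nothing is lost when converting to and from the universal representation.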
Ken Jiang (knjiang) approved these changes on May 11, 2026
This fix addresses an issue where the /chat/completions endpoint doesn't support reasoning with tool calls, as part of OpenAI's push to get everyone to migrate to the /responses endpoint. Lingua was passing these requests straight through to the configured /chat/completions endpoint, and we were encountering a failure because of this. Like we were chatting about in standup last week, there are 2 potential resolutions to this; the second is to route to /responses based on additional specific format detection logic in the router.
I've gone with the second option here -- from my reading, I'd expect us to run into this flavor of problem again as OpenAI pushes everyone to responses. It's not too janky :), and I think it will come in handy in the future too.
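The detection-and-reroute idea can be sketched as below. This is a hedged illustration, assuming a simplified request shape; `ChatRequest`, its fields, and `route` are hypothetical names, not Lingua's actual router code.

```rust
// Hypothetical flattened view of an incoming request; in practice the
// router would inspect the parsed request body.
struct ChatRequest {
    uses_reasoning: bool,
    has_tool_calls: bool,
}

// Sketch of the format detection logic: requests that combine features
// unsupported by /chat/completions (here, reasoning with tool calls)
// are rerouted to /responses; everything else passes through unchanged.
fn route(req: &ChatRequest) -> &'static str {
    if req.uses_reasoning && req.has_tool_calls {
        "/responses"
    } else {
        "/chat/completions"
    }
}

fn main() {
    let needs_reroute = ChatRequest { uses_reasoning: true, has_tool_calls: true };
    let plain = ChatRequest { uses_reasoning: false, has_tool_calls: true };
    assert_eq!(route(&needs_reroute), "/responses");
    assert_eq!(route(&plain), "/chat/completions");
    println!("rerouted to {}", route(&needs_reroute));
}
```

Keeping the check as a single predicate in the router also makes it easy to extend as OpenAI moves more features to /responses: new unsupported-feature combinations just add conditions to the same reroute decision.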