Add OpenAIResponseClient.CreateResponse{Streaming}Async overloads that take RequestOptions #687
Today, if you're using the convenience types and you just want to set a user-agent header on a request (you're handed the instance and thus can't configure it at creation), the only supported option I see is to abandon the convenience methods and adopt the protocol methods, which means formatting all the input JSON and parsing all the response SSE / JSON yourself. That's a big pill to swallow.

Could we add, based on need, a few convenience overloads that take a RequestOptions instead of a CancellationToken? The cost is minimal additional surface area and almost no duplicated code: each new overload accepts a RequestOptions directly, where the existing overload takes a CancellationToken and calls ToRequestOptions on it.

I've demonstrated this in the PR with CreateResponseAsync and CreateResponseStreamingAsync, which are the two methods I currently care about.
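For illustration, here's a sketch of the caller-side scenario that motivates this. The client and option variable names are hypothetical; `RequestOptions` and `SetHeader` come from System.ClientModel, and the `CreateResponseAsync` overload shown is the one this PR proposes, not an existing API:

```csharp
using System.ClientModel;
using System.ClientModel.Primitives;
using OpenAI.Responses;

// Hypothetical: we're handed an already-constructed client, so we can't
// configure a per-client User-Agent pipeline policy at creation time.
async Task<OpenAIResponse> SendWithUserAgentAsync(
    OpenAIResponseClient client,
    IEnumerable<ResponseItem> inputItems,
    CancellationToken cancellationToken)
{
    // RequestOptions lets us attach per-call settings, including headers.
    RequestOptions requestOptions = new()
    {
        CancellationToken = cancellationToken,
    };
    requestOptions.SetHeader("User-Agent", "my-app/1.0");

    // The proposed overload: same convenience-method input/output handling,
    // but taking RequestOptions instead of a bare CancellationToken.
    ClientResult result =
        await client.CreateResponseAsync(inputItems, options: null, requestOptions);

    return (OpenAIResponse)result;
}
```

Without such an overload, the only way to set that header today is the protocol method, which pushes all request serialization and response parsing onto the caller.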