Context
I'm building an LLM agent that runs primarily on the client side and that users interact with through a chat interface.
With OpenAI's new function-calling capabilities, the model decides whether to call a function when functions are provided in the request. An Edge Runtime route handler like the one in the Vercel AI SDK example here should be able to stream back a function call in addition to normal assistant-role chat completion responses. Without this PR, Vercel's streaming utils simply return an empty stream when OpenAI responds with a function call.
Testing
I've only manually tested this change so far, but if maintainers would like to move forward with the approach, I can write a unit test as well.
Postman is a great tool for inspecting how OpenAI breaks up the streamed response chunks.
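For reference, when the model opts for a function call, the streamed deltas carry a `function_call` object instead of `content`: the first chunk holds the function name, and later chunks stream the JSON arguments string in fragments. A sketch of the relevant fields only (not the full chunk schema):

```ts
// Rough shape of the fields that matter in a streamed chunk when OpenAI
// responds with a function call (sketch only, not the complete schema).
interface FunctionCallChunk {
  choices: {
    index: number
    delta: {
      role?: 'assistant'
      // Present instead of `content` when the model calls a function.
      function_call?: {
        name?: string // only on the first chunk
        arguments?: string // JSON fragments, e.g. '{"lo', 'cation": "SF"}'
      }
    }
    // 'function_call' on the final chunk of a function-call response.
    finish_reason: 'function_call' | 'stop' | null
  }[]
}
```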
Since the Vercel AI SDK does not yet support function calling in its `useChat()` hook or any other method, I wrote my own custom logic that calls the Edge route handler and passes functions in the request (see below). I'm also using an updated version of `openai-edge` (PR here) in order to use function calling in the Next Edge Runtime.

Here's the rudimentary code I hacked together to test this. Note that it does not yet render the stream chunk by chunk on the client side (though that would probably be trivial to add).
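A minimal sketch of that client logic (the `/api/chat` path and the `get_current_weather` schema are placeholders for illustration):

```ts
// Placeholder function schema to exercise the function-calling path.
const functions = [
  {
    name: 'get_current_weather',
    description: 'Get the current weather for a location',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string', description: 'City name, e.g. San Francisco' },
      },
      required: ['location'],
    },
  },
]

async function sendMessage(messages: { role: string; content: string }[]) {
  // POST the chat history plus the function definitions to the Edge route.
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages, functions }),
  })

  // Accumulate the entire stream into a string; no chunk-by-chunk
  // rendering on the client yet.
  const reader = res.body!.getReader()
  const decoder = new TextDecoder()
  let text = ''
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    text += decoder.decode(value, { stream: true })
  }
  return text
}
```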
The route handler code I tested is also very similar to the example given in the Vercel AI SDK docs. A sketch of it, assuming the Next.js app router and a function-calling model snapshot, looks something like this:
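```ts
import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream, StreamingTextResponse } from 'ai'

export const runtime = 'edge'

const config = new Configuration({ apiKey: process.env.OPENAI_API_KEY })
const openai = new OpenAIApi(config)

export async function POST(req: Request) {
  // `functions` is forwarded from the client request, per the test setup above.
  const { messages, functions } = await req.json()

  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo-0613',
    stream: true,
    messages,
    functions,
  })

  // With this PR, OpenAIStream forwards function_call chunks as well,
  // instead of returning an empty stream.
  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}
```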