Support tool call round trips in streamUI #1895
Comments
This is crucial if you want to have a real conversation with an AI that can call tools. Right now I get the tool call result back, and then I can't chat anymore.
How do you get around this behavior?
This is possible using frontend tool calls, but tool calls for streamed responses are supposed to be coming soon. #1574 (comment)
While a QoL update to automate round trips would be great, for right now you can recursively pipe tool call responses back into your submitMessage() yourself.
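A minimal sketch of that manual loop (the model call is stubbed; in a real app it would be your `submitMessage()` / server action, and the message shapes are assumptions, not the SDK's types):

```typescript
// Sketch of the manual round-trip loop. The model is a stub here; in a
// real app this would be your submitMessage()/server action, and the
// message shapes are assumptions, not the SDK's types.
type Message =
  | { role: "user" | "assistant"; content: string }
  | { role: "tool"; name: string; content: string };

// Stubbed model: requests the weather tool once, then answers in text.
function callModel(history: Message[]): { toolCall?: { name: string; args: string }; text?: string } {
  const hasToolResult = history.some((m) => m.role === "tool");
  return hasToolResult
    ? { text: "It is sunny in Berlin." }
    : { toolCall: { name: "getWeather", args: "Berlin" } };
}

const tools: Record<string, (args: string) => string> = {
  getWeather: (city) => `sunny in ${city}`,
};

function runConversation(userInput: string): string {
  const history: Message[] = [{ role: "user", content: userInput }];
  for (let round = 0; round < 5; round++) { // cap the number of round trips
    const reply = callModel(history);
    if (reply.text !== undefined) return reply.text; // plain text: done
    const { name, args } = reply.toolCall!;
    // Recursively pipe the tool result back into the history, then ask again.
    history.push({ role: "tool", name, content: tools[name](args) });
  }
  throw new Error("too many round trips");
}
```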
Also interested in this. One thought: manually piping tool output back in. Open to ideas, but something like this would provide a history of previously called tool invocations:

```tsx
tools: {
  getWeather: {
    description: 'Get Weather',
    parameters: z.object({
      city: z.string().describe('The city to get weather for'),
    }),
    generate: async function* ({ city }) {
      const weatherForCity = getWeather(city)
      yield {
        output: weatherForCity,
        widget: <Weather weather={weatherForCity} />,
      }
    },
  },
}
```
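A hedged sketch of how a caller might consume that proposed `{ output, widget }` yield, with strings standing in for real messages and the ReactNode (all names here are illustrative, not part of the SDK):

```typescript
// Illustrative only: `output` goes back into the history for the next
// model round trip, while `widget` is shown to the user immediately.
// Strings stand in for real messages and the ReactNode.
type ToolYield = { output: string; widget: string };

function consumeToolYield(history: string[], y: ToolYield): string {
  history.push(`tool-result: ${y.output}`); // the model sees this next round
  return y.widget;                          // the user sees this right away
}

const history: string[] = ["user: weather in Oslo?"];
const shown = consumeToolYield(history, { output: "5°C, cloudy", widget: "<Weather/>" });
```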
I'm also curious how to persist tool result messages using streamUI. Right now OpenAI requires that every tool call have a corresponding tool result message in the chat history array. Since tools with streamUI return a ReactNode, are we supposed to save that serialized node as a ToolResultPart and feed that back into the history?
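One possible approach (an assumption, not documented SDK behavior): persist the plain tool output as the result part, and re-render the widget from that data when restoring, rather than serializing the ReactNode itself. A sketch:

```typescript
// Sketch: persist the plain data a tool produced, not the rendered node.
// The shape below mirrors a tool-result part, but treat it as an assumption.
type ToolResultPart = {
  type: "tool-result";
  toolCallId: string;
  toolName: string;
  result: unknown; // plain data; the widget is re-rendered from this on restore
};

function toToolResultPart(toolCallId: string, toolName: string, result: unknown): ToolResultPart {
  return { type: "tool-result", toolCallId, toolName, result };
}

// When restoring a saved chat, rebuild the UI from the stored data, e.g.
// <Weather weather={part.result as WeatherData} /> (hypothetical component).
const part = toToolResultPart("call_1", "getWeather", { city: "Berlin", temp: 21 });
```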
I've made a support request to Vercel for this issue as well. Was very surprised there was no existing support or some other suggested workaround/solution.
Big +1 on this.
Any updates from Vercel?
Would be nice. Is there a technical reason why it's not possible? For example, because streamUI wants to start returning data to the user before it has a full response, and it would need to wait and process the full result in case it needs to call another tool (thus defeating the purpose of streaming in the first place)? Just curious; knowing the path we need to take one way or the other would help with development.
In the latest version they added support for tool round trips in the text stream, but not in streamUI: https://vercel.com/blog/introducing-vercel-ai-sdk-3-2
Did anyone find a good workaround for this yet? I also have some tool calling to get some data, then have generic rendering for that data, but would love to get some kind of summary or answer depending on the actual user question. |
I was able to get this working by creating a custom streamUI implementation powered by streamText; here is the idea of how it generally works.
Then, in the caller, you can do this.
With this strategy the UI can properly display multiple round-trip / parallel tool calls, with text before and after. The end result looks pretty similar to what ChatGPT can get you these days.
A lot of the boilerplate code is not needed; it's only there because I forked from streamUI with very minimal modification. I imagine the official implementation of this could be a lot cleaner, with a better interface.
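A rough sketch of the loop that strategy implies, with the streamText source stubbed out (the part shapes and names are assumptions): stream text deltas to the UI as they arrive, note any tool call, feed its result back, and loop until a round finishes without calling a tool.

```typescript
// Sketch of a custom round-trip loop over a streamText-like source.
// The stream is faked; part shapes are assumptions, not the SDK's.
type StreamPart =
  | { type: "text-delta"; text: string }
  | { type: "tool-call"; name: string; args: string }
  | { type: "finish" };

// Stubbed stream: round 0 emits a tool call, round 1 emits the answer.
function* fakeStream(round: number): Generator<StreamPart> {
  if (round === 0) {
    yield { type: "text-delta", text: "Checking the weather... " };
    yield { type: "tool-call", name: "getWeather", args: "Paris" };
  } else {
    yield { type: "text-delta", text: "It is 18°C in Paris." };
  }
  yield { type: "finish" };
}

function render(): string {
  let ui = "";
  for (let round = 0; round < 3; round++) {
    let calledTool = false;
    for (const part of fakeStream(round)) {
      if (part.type === "text-delta") ui += part.text; // stream text immediately
      if (part.type === "tool-call") calledTool = true; // run tool, append result, loop
    }
    if (!calledTool) break; // no tool call this round: the conversation is done
  }
  return ui;
}
```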
Feature Description
Tool round trips are currently only supported in generateText: https://sdk.vercel.ai/docs/ai-sdk-core/tools-and-tool-calling#tool-roundtrips
Support in streamUI would be great. Currently I'm working around this by returning a hidden component at the completion of the tool call that makes a new user request asking the model to continue. This isn't great because I have to account for this workaround in other areas (restoring saved chats, etc.).
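One cost of that workaround, sketched below: the synthetic "continue" messages it injects have to be filtered out when restoring saved chats (the marker string and message shape here are assumptions for illustration):

```typescript
// Sketch: strip the hidden auto-continue messages the workaround injected
// before replaying a saved chat. The marker string is an assumption.
const CONTINUE_MARKER = "__auto_continue__";

type Msg = { role: "user" | "assistant"; content: string };

function restoreChat(saved: Msg[]): Msg[] {
  // Drop the synthetic "continue" requests so the user never sees them.
  return saved.filter((m) => !(m.role === "user" && m.content === CONTINUE_MARKER));
}

const cleaned = restoreChat([
  { role: "user", content: "What's the weather?" },
  { role: "user", content: "__auto_continue__" },
  { role: "assistant", content: "It is sunny." },
]);
```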
Use Case
Any tool call that doesn't require the user to confirm. For example, looking up information to use in a response.
Additional context
No response