-
Alright, running this again.
So I'm guessing it has to do with the API I'm using for my model. I'm going to keep this open and see if I can get any clarity on the API side about what might need to be changed.
-
Hey everyone, I'm trying to use `ai` without using the frontend code (`useChat`). The examples that follow are Node.js/backend only. Also, I'm using an `openai`-compatible API, but the model is not an OpenAI model. Not sure if that matters or not, and "compatible" might be the problem.

What I want to do: stream a full response that includes tool calls and have the text + tool calls correctly handled. When I execute the sample code below, the text stream is handled correctly as expected, but then when the LLM goes to call my function, the arguments themselves are streamed to the client. Each argument chunk is only a few bytes, and so each one fails the schema validation.
Output:
The LLM is trying to call the `getWeather` function and is providing the correct arguments, but because each chunk only contains a partial piece of the full arguments object, they all fail the schema validation. I've tried various options in `streamText` but can't get it to work as expected.

I'm looking for the obvious thing I'm missing. I feel like streaming the response and expecting tool calls within it is table stakes for tool functionality, but my minimal example isn't working.
This does work as expected if I use `generateText` instead.

Sample Code:
Thanks!