Helpers (or best practices) for non-streaming API response? #60
Comments
I'm also checking in here because of this. Two things I found:
To your question regarding JSON closing tags: a month ago I implemented a streaming, plugin-aware chatbot. What you are looking for is basically a parser that buffers tokens whenever they syntactically look like JSON, until you have a complete, valid JSON document, and then yields that whole thing at once. To make it easier, I used prompt engineering to wrap any JSON in |START| and |END| tags, so I know exactly when to start buffering. In the end you get a stream of plain text interleaved with |START|…|END|-wrapped JSON blocks. That is a completely different implementation than OpenAI suggests, since their function calls cannot come along with text from the AI, but I still thought it worth sharing.
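The buffering approach above can be sketched roughly as follows. This is a minimal illustration, not code from the thread: the `|START|`/`|END|` markers come from the comment, but the event shape and the `createMarkerParser` name are assumptions.

```typescript
// Sketch of a marker-delimited JSON buffering parser (illustrative only).
type ParserEvent =
  | { type: "text"; value: string }
  | { type: "json"; value: unknown };

function createMarkerParser() {
  let buffer = "";
  let inJson = false;
  return function feed(chunk: string): ParserEvent[] {
    const events: ParserEvent[] = [];
    buffer += chunk;
    while (true) {
      if (!inJson) {
        const start = buffer.indexOf("|START|");
        if (start === -1) {
          // Hold back a small tail in case a marker is split across chunks.
          const safe = buffer.length - "|START|".length + 1;
          if (safe > 0) {
            events.push({ type: "text", value: buffer.slice(0, safe) });
            buffer = buffer.slice(safe);
          }
          break;
        }
        if (start > 0) events.push({ type: "text", value: buffer.slice(0, start) });
        buffer = buffer.slice(start + "|START|".length);
        inJson = true;
      } else {
        const end = buffer.indexOf("|END|");
        if (end === -1) break; // still buffering the JSON payload
        events.push({ type: "json", value: JSON.parse(buffer.slice(0, end)) });
        buffer = buffer.slice(end + "|END|".length);
        inJson = false;
      }
    }
    return events;
  };
}
```

Plain text is passed through (slightly delayed, to handle markers split across chunk boundaries), while anything between the markers is buffered and only emitted once it parses as complete JSON.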
I got it working with a custom AIStream; I basically just copied their OpenAIStream and adapted it slightly, so the function calls are now prompted out as text. Note that this would of course require further parsing.
Thanks! It looks like streaming function calls is relatively easy, but it's still challenging to parse partial arguments.
I looked at the results and they handle it quite cleverly. I like that each chunk is always valid JSON on its own, so while the function call is streaming, the parser can emit some annotation, and the pieces can be merged afterwards.
Looking at what I got out of the stream, all you need to do is merge the chunks together; for the `arguments` field that just means concatenating the strings.
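The merging step described above can be sketched like this, assuming the delta shape of OpenAI's chat completions streaming API, where the function name arrives first and the JSON `arguments` string trickles in as fragments. The helper name is illustrative, not an SDK API:

```typescript
// Sketch: accumulate streamed function-call deltas into one complete call.
interface FunctionCallDelta {
  name?: string;
  arguments?: string; // a fragment of the JSON arguments string
}

function mergeFunctionCall(deltas: FunctionCallDelta[]) {
  const call = { name: "", arguments: "" };
  for (const delta of deltas) {
    if (delta.name) call.name += delta.name;
    // Concatenating the fragments yields the full, parseable JSON string.
    if (delta.arguments) call.arguments += delta.arguments;
  }
  return { ...call, parsedArguments: JSON.parse(call.arguments || "{}") };
}
```

Only once all chunks have arrived is the concatenated `arguments` string guaranteed to be valid JSON, which is why partial parsing mid-stream is the hard part.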
Here is an updated version I just hacked together; it might help.
Worth noting, unfortunately: this parser has side effects, since it holds a buffer outside the function. There are better solutions.
I just put up a PR that allows function responses to be streamed back to clients (who can then parse the JSON once the response is finished). #154
Thanks for creating this great tool! I started to create something similar last night, and was glad to see this today.
I'm using the new Function Calling API to generate improvement recommendations that are then highlighted in a block of text. (Similar to Grammarly.) I'm not doing a full agent-style loop or anything, just using the function to ensure the results are in the right schema.
With JSON's required closing tags, I can't parse or render a partial (streamed) response. (I'm not even sure if function calling can be streamed.) Is there a best practice for returning these non-streaming results to the client in a way that is compatible with `useCompletion` and `useChat`? I'm currently using the following, and it seems to work well enough. Is there an equivalent of `OpenAIStream` and `StreamingTextResponse` to similarly abstract non-streaming responses? Is that somewhere you'd be open to a contribution?
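One conceivable shape for such a non-streaming helper, by analogy with `StreamingTextResponse`, is sketched below. The `JSONResponse` name and signature are entirely hypothetical and not part of the SDK:

```typescript
// Hypothetical helper: return a complete (non-streamed) JSON result
// with the appropriate content type, analogous to StreamingTextResponse.
function JSONResponse(data: unknown, init?: ResponseInit): Response {
  return new Response(JSON.stringify(data), {
    status: 200,
    ...init,
    headers: {
      "Content-Type": "application/json",
      ...(init?.headers ?? {}),
    },
  });
}
```

The client would then read the body once with `await res.json()` instead of consuming a token stream.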