[BUG]: Unable to decode chunks from my OpenAI server #1475
Comments
PS: I tested the same endpoint with Ollama's Open WebUI and it works without problems.
That is to say: in Open WebUI you see the text appear, and it is exactly what was streamed.
What connector are you specifically using?
Yes, …
@odrobnik Ah, I think I see what is going on here. Your intermediate chunks do not contain a finish_reason.
Who is the provider behind the connector you are connecting with? They are mostly OpenAI-compatible, but not exactly 1:1.
@timothycarambat It's my own provider. I am working on an agent framework. I'll try to add the finish_reason.
@timothycarambat I think you have one more problem here. When passing the option to include usage information you get a chunk like this:
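Per OpenAI's streaming API, that extra final chunk (sent when `stream_options.include_usage` is enabled) carries an empty `choices` array plus a `usage` object, roughly like this (illustrative values, not the exact payload from this report):

```
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1715000000,"model":"gpt-4o","choices":[],"usage":{"prompt_tokens":9,"completion_tokens":10,"total_tokens":19}}
```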
There will be an empty choices array. Anyway, I'm adding null when there is no finish reason, and I saw the text begin to appear, but then it got replaced by an error.
PS: And once you get into this state and try to send again, there is some sort of endless loop where the user message and this error appear, disappear, appear, disappear, and so on ad infinitum. A parsing error shouldn't leave the app in an unusable state.
ChatGPT found your issue: you always access choices[0], which is bad style because it fails when choices is empty. It suggests this change:
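The suggested patch isn't reproduced verbatim here; a minimal sketch of that kind of guard, assuming a handler that pulls the delta text out of each parsed chunk (the `extractDelta` name is hypothetical, not AnythingLLM's actual code):

```js
function extractDelta(chunk) {
  // The final usage-only chunk arrives with an empty choices array,
  // so guard before indexing into it instead of crashing on undefined.
  if (!Array.isArray(chunk.choices) || chunk.choices.length === 0) {
    return ""; // nothing to append for this chunk
  }
  return chunk.choices[0]?.delta?.content ?? "";
}
```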
With that patch, if you wrap the entire function in the …
Sorry, don't get hung up on ChatGPT's attempt. My point was that an empty choices array is a valid scenario which needs to be handled, or at the very least ignored, without putting the app into an unusable state.
The problem is that encountering …
How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
I am working on my own OpenAI-compatible local server; for now I decode the chunks from OpenAI and re-encode them. That changes the order of the fields somewhat, but otherwise the JSON is identical. AnythingLLM is unable to properly decode the actual message; it shows an empty message.
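For context, a minimal sketch of such a decode/re-encode step (function and field choices here are hypothetical, not the reporter's actual server); rebuilding the object by hand is what reorders the JSON fields relative to OpenAI's original payload:

```js
// Parse an incoming OpenAI SSE line and re-emit the same data,
// but rebuilt field by field, which changes the JSON key order.
function reencodeLine(line) {
  if (!line.startsWith("data: ") || line === "data: [DONE]") return line;
  const chunk = JSON.parse(line.slice("data: ".length));
  const reordered = {
    choices: chunk.choices, // choices first instead of id first
    model: chunk.model,
    id: chunk.id,
    object: chunk.object,
    created: chunk.created,
  };
  return `data: ${JSON.stringify(reordered)}`;
}
```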
Are there known steps to reproduce?
These are the streamed lines that it should be able to decode:
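Chunks from such a re-encoding server would look roughly like this (made-up values, fields in a non-OpenAI order and, per the comment above, no finish_reason on the intermediate chunk):

```
data: {"choices":[{"index":0,"delta":{"content":"Hello"}}],"model":"gpt-4o","id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1715000000}

data: {"choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"model":"gpt-4o","id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1715000000}

data: [DONE]
```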
This is how the streamed lines from OpenAI look; you can see that the order of the JSON fields is different. But your decoder should be robust enough not to care about that.
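For comparison, the same content in OpenAI's usual field order looks roughly like this (again illustrative values, not the exact lines from the report):

```
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1715000000,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1715000000,"model":"gpt-4o","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```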
The message should appear as "Hello! How can I assist you today?", but you see only empty messages.