JSON Mode + Streaming + OpenAI API + Llama3 = never sends STOP, and a lot of whitespace after the JSON #4446
The following issues are related to whitespace in JSON: #2577, #2623, PR #3784, PR #3785. But it seems to me (though I'm unfamiliar with the Ollama codebase) that the junk issue @odrobnik is seeing, where the model goes haywire, might be worth investigating on its own. Perhaps something else is going wrong here.

These PRs are not the correct solution. HINT: To fix this issue, look in this complex function (
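The trailing-whitespace fixes in the linked PRs hinge on knowing when JSON-mode output is structurally complete. A minimal sketch of one such completeness check — a brace/bracket-balance scan that ignores delimiters inside strings. This is an assumption about the general approach, not Ollama's actual code; the function name is mine:

```python
def json_done(text: str) -> bool:
    """Return True once every opened brace/bracket has been closed
    outside of a string literal (hypothetical completeness check)."""
    depth = 0
    in_string = False
    escaped = False
    started = False
    for ch in text:
        if in_string:
            if escaped:
                escaped = False      # the escaped character is consumed
            elif ch == "\\":
                escaped = True       # next character is escaped
            elif ch == '"':
                in_string = False    # closing quote ends the string
            continue
        if ch == '"':
            in_string = True
        elif ch in "{[":
            depth += 1
            started = True
        elif ch in "}]":
            depth -= 1
    return started and depth == 0

print(json_done('{"a": "b}"'))     # → False (the } is inside a string)
print(json_done('{"a": [1, 2]}'))  # → True
```

A generation loop could use such a check to cut the stream off at the closing brace instead of letting the model run on into whitespace.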
What is the issue?
Without JSON Mode, the last few lines of the stream of Chunk objects are:
If I enable JSON Mode, the last few lines of the stream look like this:
JSON Mode appears to be at least partially supported, because the streamed output begins correctly with
{
but the model fails to detect that it is done, goes into a whitespace-junk output mode, and then, after some internal limit, just stops generating without ever sending finish_reason: "stop". So there are three bugs:
OS: macOS
GPU: Apple
CPU: Apple
Ollama version: 0.1.32
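The failure mode described above (valid JSON, then whitespace junk, then silence with no finish_reason: "stop") suggests a client-side workaround: stop consuming the stream as soon as the accumulated text parses as complete JSON. A hedged sketch — the function name and the simulated chunks are mine, not part of Ollama's or OpenAI's API:

```python
import json

def stream_until_json_complete(chunks):
    """Accumulate streamed text chunks and return as soon as the buffer
    parses as complete JSON, ignoring any trailing whitespace the model
    keeps emitting (hypothetical client-side workaround)."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        stripped = buffer.strip()
        if not stripped:
            continue
        try:
            json.loads(stripped)
            return stripped  # complete JSON: stop consuming the stream
        except json.JSONDecodeError:
            continue         # not complete yet; keep reading
    return buffer.strip()    # stream ended without a parseable object

# Simulated chunks like the issue describes: valid JSON followed by
# runs of whitespace and no "stop" finish_reason.
chunks = ['{"na', 'me": "llama', '3"}', '\n\n', '   ', '\n']
print(stream_until_json_complete(chunks))  # → {"name": "llama3"}
```

This does not fix the server-side bug, but it lets a client terminate cleanly instead of waiting out the whitespace until the internal limit kicks in.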