Ollama streaming output is garbled #1088
Comments
Interesting... I suspect there's an error with how we're parsing the stream. Going to add this to the top of my bugs list because this is a nasty one for sure.
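(For context: Ollama's `/api/chat` endpoint streams newline-delimited JSON, one object per line, each carrying a `message.content` fragment. A common cause of the kind of garbling described here is treating every network chunk as a complete JSON object, when a chunk can end mid-line. Below is a minimal sketch of a chunk-safe reader, assuming Ollama's documented NDJSON format; this is not the actual chatbot-ui parser, just an illustration of the buffering it would need.)

```ts
// Minimal sketch: read Ollama's /api/chat NDJSON stream without assuming
// that each network chunk contains exactly one complete JSON object.
async function readOllamaChat(url: string, body: object): Promise<string> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body)
  })
  if (!res.ok || !res.body) {
    throw new Error(`Ollama request failed: ${res.status}`)
  }

  const reader = res.body.getReader()
  const decoder = new TextDecoder()
  let buffer = ""
  let text = ""

  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    // { stream: true } keeps decoder state across multibyte characters
    // that happen to be split between chunks.
    buffer += decoder.decode(value, { stream: true })
    const lines = buffer.split("\n")
    buffer = lines.pop() ?? "" // hold the trailing partial line for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue
      const chunk = JSON.parse(line)
      if (chunk.message?.content) text += chunk.message.content
      if (chunk.done) return text // final object carries done: true
    }
  }
  return text
}
```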
@mckaywrigley do you have any updates on this? It is still happening for me.
Hello, I am having the same issue here. System: Ubuntu 22.04, latest version of ollama and latest pull of chatbot-ui as of 4 pm EDT. Running a prompt in the ollama CLI, I get a different response than in chatbot-ui. Attached are text files comparing the Python code output. Hopefully this is a simple fix!
Using local LLMs with Ollama produces output in which words are occasionally cut off or garbled. Using the Ollama server outside of Chatbot-UI does not exhibit this issue.
I was able to work around this by adding a line to the request options to disable streaming:
```ts
const response = await fetchChatResponse(
  process.env.NEXT_PUBLIC_OLLAMA_URL + "/api/chat",
  {
    model: "mistral", // chatSettings.model,
    messages: formattedMessages,
    stream: false,
    options: {
      temperature: payload.chatSettings.temperature
    }
  },
  false,
  newAbortController,
  setIsGenerating,
  setChatMessages
)
```
Here is the output without streaming. It would be nice to be able to stream though.
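For anyone who wants to verify the workaround outside of chatbot-ui: with `stream: false`, Ollama's `/api/chat` returns a single JSON object instead of newline-delimited chunks, so there is nothing to reassemble. A minimal sketch using plain `fetch` (the `fetchChatResponse` wrapper above is chatbot-ui's own helper; `formattedMessages` is assumed to be the same `{ role, content }` array used there):

```ts
// With stream: false, the full reply arrives as one JSON object whose
// field names follow Ollama's /api/chat docs.
const res = await fetch(process.env.NEXT_PUBLIC_OLLAMA_URL + "/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "mistral",
    messages: formattedMessages,
    stream: false
  })
})
const data = await res.json()
console.log(data.message.content) // the complete reply in one piece
```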