[Bug]: Ollama chat tool calls are not parsed correctly #3333
Comments
I have the same issue but haven't gotten around to looking at it yet; it's possible that #1526 addresses this.
Merged the relevant PR in - should be live in the next litellm release - v
@krrishdholakia Regular responses are now parsing correctly, thanks.
What happened?
`ollama_chat/llama2` returns a tool call where the function name and arguments appear as a single object in the `arguments` field, and `name` is incorrectly set to an empty string. This object should be parsed into the separate `arguments` and `name` fields.

Separately, when using `ollama_chat/llama2` with `stream=True`, the tool call JSON appears in the `content` field, but it should be parsed into the `tool_calls` field.

Here's the streamed tool call response using `gpt-3.5-turbo-1106`, which is correct.

Fixing this would allow more features of https://github.com/jackmpcollins/magentic to be used with ollama.
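As a minimal sketch of the first bug, the snippet below shows how the malformed tool call described above could be normalized: it assumes the shape reported here (`name` is an empty string and `arguments` is a JSON string that contains both the real function name and its arguments). The helper `normalize_ollama_tool_call` is hypothetical, not part of the litellm API, and the `get_weather` example payload is invented for illustration.

```python
import json


def normalize_ollama_tool_call(tool_call: dict) -> dict:
    """Split a fused Ollama tool call into separate name/arguments fields.

    Hypothetical helper illustrating the fix: when ``function.name`` is
    empty and ``function.arguments`` holds a JSON object with both keys,
    lift the inner name/arguments out into their proper fields.
    """
    func = tool_call.get("function", {})
    if func.get("name"):
        return tool_call  # already well-formed, nothing to do

    fused = json.loads(func.get("arguments", "{}"))
    if isinstance(fused, dict) and "name" in fused:
        func["name"] = fused["name"]
        func["arguments"] = json.dumps(fused.get("arguments", {}))
    return tool_call


# Example using the malformed shape described in this issue:
raw = {
    "function": {
        "name": "",
        "arguments": json.dumps(
            {"name": "get_weather", "arguments": {"city": "Dublin"}}
        ),
    }
}
fixed = normalize_ollama_tool_call(raw)
```

After normalization, `fixed["function"]["name"]` holds the function name and `fixed["function"]["arguments"]` holds only the JSON-encoded arguments, matching the shape returned for regular (non-Ollama) providers.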
Related issue: jackmpcollins/magentic#194
Code to reproduce this
Relevant log output
No response
Twitter / LinkedIn details
@jackmpcollins / https://www.linkedin.com/in/jackmpcollins/