
Function call with Ollama and LlamaIndex #1729

Open
sandangel opened this issue Dec 27, 2023 · 14 comments
@sandangel

Hi, I'm looking for a way to make function calling work with Ollama and LlamaIndex.

From my research, Ollama already supports `format: json`, so theoretically there are two ways we could support function calling:

  1. Enforce the LLM to output JSON following a schema, and call the function based on the JSON output (see the sketch below).
  2. Add an API in Ollama itself to support function calling directly, similar to OpenAI.
  • I'm not sure how this would work, especially since OpenAI is not open source. Do you think it's possible to implement the function-calling feature directly in Ollama?
    • I'm not sure whether we would need a specific model that supports function calling, so that we can feed { role: "tool", content: "tool output" } into the LLM,
    • or whether it's simply a feature we can add at the API level.

Please let me know what you think and what the right approach for this issue should be going forward.
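
For illustration, a minimal sketch of option 1 against the Ollama HTTP API might look like this (the model name, the `get_weather` tool, and the schema are placeholders for illustration, not an existing function-calling API):

```python
import json
import requests

# Hypothetical local "tool" the model should be able to trigger.
def get_weather(city: str) -> str:
    return f"It is sunny in {city}."

TOOLS = {"get_weather": get_weather}

SYSTEM = (
    "You can call one tool. Respond ONLY with JSON matching this schema: "
    '{"tool": "get_weather", "arguments": {"city": "<city name>"}}'
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",
        "system": SYSTEM,
        "prompt": "What's the weather like in Berlin?",
        "format": "json",   # ask Ollama to constrain the output to valid JSON
        "stream": False,
    },
)
call = json.loads(resp.json()["response"])  # e.g. {"tool": "get_weather", "arguments": {"city": "Berlin"}}
print(TOOLS[call["tool"]](**call["arguments"]))
```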

@xprnio

xprnio commented Dec 27, 2023

From personal experience, enforcing the schema is somewhat hit-or-miss, especially depending on the complexity of the schema. I've gotten the best results by being highly explicit in describing the schema (explaining each property in detail and specifying which properties are required), instructing the model to only follow the schema (e.g. "only include properties defined in the schema"), and giving some examples.

For my own project I'm currently using a different approach: I defined a custom "line-based protocol" for the model to use, which allows both "sending messages" and "running commands". This not only reduces the overall response size (JSON is quite verbose and thus increases the number of tokens per response quite a lot), but also lets my application make use of streaming. The specifics are somewhat particular to my application, but the general gist is this:

Every response line is either a message or a command.
Empty lines are skipped during processing.

A response line is processed as a command if it is prefixed with `{command}:`.
Calling the `a` (action block) command takes in an action and parameters
Calling the `d` (data) command takes in a JSON object to be passed into the current action block
Calling the `e` (end) command ends the current action block
a: insert tasks
d: { "name": "Task name", "completed": false }
e:

Actions can also take in multiple parameters, for example to update a collection we can do
a: update tasks { "name": "Task name" }
d: { "completed": true }
e:

Response lines which are not prefixed with a command are processed as regular messages
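
As a rough illustration only (not the actual implementation), a response in this kind of protocol could be parsed along these lines:

```python
import json

def parse_response(text: str):
    """Split a model response into plain messages and action blocks."""
    messages, actions, current = [], [], None
    for line in text.splitlines():
        line = line.strip()
        if not line:                      # empty lines are skipped
            continue
        if line.startswith("a:"):         # start an action block: "a: <action> <params...>"
            current = {"action": line[2:].strip(), "data": []}
        elif line.startswith("d:"):       # JSON payload for the current action block
            if current is not None:
                current["data"].append(json.loads(line[2:].strip()))
        elif line.startswith("e:"):       # end of the current action block
            if current is not None:
                actions.append(current)
                current = None
        else:                             # anything else is a regular message
            messages.append(line)
    return messages, actions

example = 'Sure, adding that now.\na: insert tasks\nd: { "name": "Task name", "completed": false }\ne:'
print(parse_response(example))
```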

My application explains the protocol, the various actions available, and the collections to the model in the system prompt, and with some examples for each of them it does its job quite well (at least with the Mistral and Mixtral models; I haven't tested others yet).

@sandangel
Author

Hi @xprnio ,
Thanks a lot for sharing your experience and the detailed write-up. I wonder which schema you use. Does it follow the OpenAI function-call schema, or is it a custom schema we define ourselves?

@xprnio

xprnio commented Dec 28, 2023

@sandangel
You need to define your own schema, which means that the world is your oyster in that regard. Make the schema as complex or as simple as you want, explain it however you want, etc.

For more detail on how I used it, have a look at this gist. It's quite big (in terms of tokens) and mainly explains things in natural language rather than code, but it also incorporates quite a lot of examples to help the LLM understand.

I've also heard that another good way of describing JSON is to use TypeScript (I haven't tested this, but I think it might be a pretty good approach as well).
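
For example (untested, just to show the idea), the expected JSON could be described with a TypeScript type embedded in the system prompt; the interface and field names below are made up:

```python
# Sketch of describing the expected JSON with a TypeScript type inside the
# system prompt (the interface and field names are invented for illustration).
SYSTEM_PROMPT = """
You must answer with a single JSON object that satisfies this TypeScript type:

interface CreateTask {
  name: string;          // short task title
  completed: boolean;    // always false for new tasks
  due_date?: string;     // ISO 8601 date, omit if the user gave none
}

Only include properties defined in the type. Do not add commentary.
"""
```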

@jukofyork

jukofyork commented Jan 1, 2024

> My application explains the protocol, the various actions available, and the collections to the model in the system prompt, and with some examples for each of them it does its job quite well (at least with the Mistral and Mixtral models; I haven't tested others yet).

I'll have to try the Mistral and Mixtral models. I've been adapting the Eclipse IDE plug-in called "AI Assist" to work with the Ollama API instead of the OpenAI API, but so far I've found it excruciatingly hard to get any of the coding-specific LLMs to use function calls:

  • The Deepseek models seem to have been actively fine-tuned to refuse to run any functions, even though they seem to understand what you are asking them to do!
  • The CodeLlama models and their derivatives seem to have much more trouble understanding what you are asking them to do; they will eventually call a function if you totally spell it out and ask them "please call function X", but otherwise they just won't use them.

I'll be interested to see what you are using to prompt them into using functions in your code. I agree that showing examples of how to call the functions is important. The most success I've had was just adding the functions to the system prompt in OpenAI API format (with the parameter descriptions, which parameters are optional, etc.), with some examples below them of how to use them.
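
As a rough sketch of what I mean (the `open_file` function definition here is invented for illustration, not taken from the actual plug-in):

```python
import json

# OpenAI-style function definition (invented example), injected into the system prompt.
functions = [{
    "name": "open_file",
    "description": "Open a file in the editor",
    "parameters": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Workspace-relative path of the file"},
            "line": {"type": "integer", "description": "Optional line number to jump to"},
        },
        "required": ["path"],
    },
}]

system_prompt = (
    "You can call the following functions. To call one, reply with JSON only:\n"
    f"{json.dumps(functions, indent=2)}\n\n"
    "Example:\n"
    "User: open main.c at line 40\n"
    'Assistant: {"function": "open_file", "arguments": {"path": "main.c", "line": 40}}\n'
)
```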

I also found that trying to get chat/instruct fine-tuned models to call functions right at the start of their reply (because of the way AI Assist handles streaming and function calls) was near impossible. I've had so many hilarious chats along the lines of "No!!! Please use the function at the start of the message!", followed by them apologising before trying to call the function again - doh.

Overall it's been a huge fail so far.

@technovangelist
Contributor

Hi @sandangel , @xprnio , @jukofyork , thanks for contributing to this issue. For function calling, I have found the best results come from doing a few things:

First, include `format: json`. Then specify in the system prompt that the model needs to output JSON. This gets you most of the way there. What makes it work nearly perfectly in most cases I have tried is a few-shot prompt. This is easiest with the chat endpoint: include your system prompt, then an example question, and then the example answer in your schema, and repeat that one or two more times. That has worked well for me.
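
A minimal sketch of that setup against the Ollama chat endpoint (the model name, schema, and examples are placeholders):

```python
import requests

messages = [
    {
        "role": "system",
        "content": 'Answer ONLY with JSON of the form {"city": <string>, "unit": "celsius" or "fahrenheit"}.',
    },
    # Few-shot examples: an example question followed by the answer in the target schema.
    {"role": "user", "content": "What's the weather in Paris, in celsius?"},
    {"role": "assistant", "content": '{"city": "Paris", "unit": "celsius"}'},
    {"role": "user", "content": "How hot is it in Austin, in fahrenheit?"},
    {"role": "assistant", "content": '{"city": "Austin", "unit": "fahrenheit"}'},
    # The real question.
    {"role": "user", "content": "Is it raining in Tokyo? Use celsius."},
]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={"model": "mistral", "messages": messages, "format": "json", "stream": False},
)
print(resp.json()["message"]["content"])  # expected: {"city": "Tokyo", "unit": "celsius"}
```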

@xprnio

xprnio commented Jan 2, 2024

You're right @technovangelist: the way I used to do it was by putting all of the examples into the system prompt, instead of "simulating" the examples through the chat interface itself with pre-made messages showing the expected path.

@sampriti026

sampriti026 commented Jan 9, 2024

@xprnio can you please share an example of your code? I want to build a bot that asks the necessary questions and, once the requisite information is received, calls the API (imagine a shopping bot). My first version is to have the LLM ask the user whether all the necessary information has been furnished, and when the user responds with yes, the LLM makes the API call.

@xprnio

xprnio commented Jan 9, 2024

@sampriti026 what part of the code do you mean exactly? In all honesty, the application I've been using this approach in has been put "into the drawer" for a bit and isn't really that good in terms of quality. But I do plan on open-sourcing the project as soon as I get time to clean up the code a bit, although I guess there's nothing really stopping me from just throwing it all up here and cleaning it up whenever I have the time.

But yeah, let me know what exactly you want an example of. I'll try to get that project up here on GitHub some time this week, and I'll give you a ping with the appropriate part of it. For context, the project itself is written in Go, just so you know.

@johndpope

I read on Twitter that one user was getting good mileage from making two calls: rather than forcing ChatGPT 3.5 to return JSON in addition to answering the prompt, just get the results first, then ask the API to format the result into a JSON response. It was a 100% hit rate.
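
A rough sketch of that two-pass idea using a local Ollama model instead (the model name and schema are placeholders):

```python
import requests

def generate(prompt, fmt=None):
    """One non-streaming call to a local Ollama model."""
    body = {"model": "mistral", "prompt": prompt, "stream": False}
    if fmt:
        body["format"] = fmt
    return requests.post("http://localhost:11434/api/generate", json=body).json()["response"]

# Pass 1: answer the question in plain text, with no formatting constraints.
answer = generate("List three European capital cities and their countries.")

# Pass 2: ask the model to reformat its own answer into JSON.
structured = generate(
    "Convert the following answer into a JSON array of objects "
    '{"city": ..., "country": ...}. Answer:\n' + answer,
    fmt="json",
)
print(structured)
```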

@tolas92

tolas92 commented Feb 11, 2024

For function calling you can try this model: https://huggingface.co/TheBloke/gorilla-openfunctions-v1-GGUF and then format the prompt template (Modelfile) as follows:

FROM ./gorilla-openfunctions-v1.Q4_K_M.gguf


TEMPLATE """

### User:
{{.Prompt }}
### System:
{{.System}}

### Response:
"""

SYSTEM """<<function>> functions = [
    {
        "name": "Uber Carpool",
        "api_name": "uber.ride",
        "description": "Find suitable ride for customers given the location, type of ride, and the a>
        "parameters":  [
            {"name": "loc", "description": "Location of the starting place of the Uber ride"},
            {"name": "type", "enum": ["plus", "comfort", "black"], "description": "Types of Uber rid>
            {"name": "time", "description": "The amount of time in minutes the customer is willing t>
        ]
    }
]\n
ASSISTANT:"""

PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
PARAMETER stop "</s>"


./ollama run gorilla_test "USER: <<question>> Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes"
uber.ride(USER="plus", LOC="94704", TIME=10)

and append "USER: <" before the user request.

@jerryan999

How about this blog: https://www.lepton.ai/blog/structural-decoding-function-calling-for-all-open-llms?

@RachelShalom

> How about this blog: https://www.lepton.ai/blog/structural-decoding-function-calling-for-all-open-llms?

I get a 404 for this URL.

@jerryan999

jerryan999 commented Apr 14, 2024

@RachelShalom
Sorry about that, here is the link: https://www.lepton.ai/blog/structural-decoding-function-calling-for-all-open-llms

@icetech233

> For function calling you can try this model: https://huggingface.co/TheBloke/gorilla-openfunctions-v1-GGUF and then format the prompt template (Modelfile) as follows: […]

I can't understand this.
