
Integrate Ollama #687

Open
aniketmaurya opened this issue Mar 10, 2024 · 5 comments

aniketmaurya commented Mar 10, 2024
aniketmaurya commented Mar 10, 2024

Is your feature request related to a problem? Please describe.
Ollama provides fast local LLM inference, and it would be great to integrate it with Guidance.


PS: I would love to contribute to this.

Warlord-K commented

Ollama is already supported via LiteLLM; you can use it like this:

from guidance import models, gen, select

# point guidance's LiteLLM wrapper at a local Ollama server
llama2 = models.LiteLLMCompletion(
    model="ollama/llama2",
    api_base="http://localhost:11434"
)
# capture our selection under the name 'answer'
lm = llama2 + f"Do you want a joke or a poem? A {select(['joke', 'poem'], name='answer')}.\n"

# make a choice based on the model's previous selection
if lm["answer"] == "joke":
    lm += "Here is a one-line joke about cats: " + gen('output', stop='\n')
else:
    lm += "Here is a one-line poem about dogs: " + gen('output', stop='\n')

zvxayr commented Mar 24, 2024

I got this error:
TypeError: LiteLLM.__init__() got an unexpected keyword argument 'api_base'

Is there a specific version I should roll back to in order to make this work?

Edit: Apparently, the keyword api_base no longer appears anywhere in the code base.
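
For what it's worth, litellm itself still documents api_base for the ollama provider, so the endpoint can be sanity-checked outside guidance's wrapper. A minimal sketch, assuming litellm is installed and Ollama is serving llama2 locally:

import litellm

# Call litellm directly, bypassing guidance, to confirm the Ollama
# endpoint is reachable; completion() accepts api_base for ollama models.
response = litellm.completion(
    model="ollama/llama2",
    api_base="http://localhost:11434",  # Ollama's default address
    messages=[{"role": "user", "content": "Reply with one word: ping"}],
)
print(response.choices[0].message.content)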

eliranwong commented

May I ask if there are any updates? I would like to use Ollama as a backend. Thanks.

nurena24 commented

I would also like to use Ollama as a backend. Is there any work underway to build native support for ollama/llama3?

rcarmo commented May 26, 2024

OK, so how do we set api_base now?

EDIT: I looked at #648 and the rest of the code base, and the fact that the litellm tests appear to have been stubbed out suggests that ollama/litellm support is not a priority here, and that this project is heading down the happy path where mainstream hosted APIs are the only real test targets. Otherwise a fix based on #648 would have been merged by now.
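
As far as I can tell, part of the difficulty is that guidance's constrained decoding (select, regex patterns) needs token-level control over generation, which a remote HTTP API like Ollama's does not expose, so this is not something an api_base fix alone can restore. For fully local, fully constrained generation, guidance's native LlamaCpp backend can load GGUF weights directly. A minimal sketch, with a hypothetical model path (substitute any GGUF file you have):

from guidance import models, select

# Load GGUF weights with guidance's llama.cpp backend. The path below is
# illustrative only -- point it at a real GGUF file on disk.
llama2 = models.LlamaCpp("path/to/llama-2-7b.Q4_K_M.gguf", n_gpu_layers=-1)

# select() works here because guidance controls decoding token by token.
lm = llama2 + f"Do you want a joke or a poem? A {select(['joke', 'poem'], name='answer')}.\n"
print(lm["answer"])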
