[Request]: Ollama Support #17

Open
ericrallen opened this issue Aug 29, 2023 · 3 comments
Labels
enhancement New feature or request

Comments

@ericrallen
Member

Is your feature request related to a problem? Please describe.

Add support for Ollama models.

Describe the solution you'd like

If Ollama is running, populate the models dropdown with the locally available models.

If an Ollama model is selected, submit the request to that model via Ollama's completion endpoint.
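
For reference, a minimal sketch of both steps, assuming Ollama's default local API at http://localhost:11434 (GET /api/tags to list installed models, and a non-streaming POST /api/generate for completions); the function names here are illustrative, not the plugin's actual code:

// Illustrative sketch: populate the dropdown from the local Ollama instance.
async function listOllamaModels(baseUrl = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) return []; // Ollama isn't running: leave the dropdown as-is
  const data = await res.json();
  return data.models.map((m: { name: string }) => m.name);
}

// Illustrative sketch: send the prompt to the selected model via the completion endpoint.
async function ollamaComplete(model: string, prompt: string, baseUrl = "http://localhost:11434"): Promise<string> {
  const res = await fetch(`${baseUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = await res.json();
  return data.response;
}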

Additional context

We'll need to add a new ollama service with a generic model definition and adapter configuration.

We'll also need a formatting utility similar to the formatChat utility for the existing openai service.
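
As a rough starting point, the shapes below are hypothetical (the plugin's real service and adapter interfaces aren't shown in this issue): a formatChat-style utility that flattens chat messages into a single prompt string, since Ollama's completion endpoint takes a prompt rather than a message array, plus a generic model/adapter configuration for the new service:

// Hypothetical shapes; the plugin's actual interfaces may differ.
interface ChatMessage { role: "system" | "user" | "assistant"; content: string; }

// formatChat-style utility: flatten chat messages into a single prompt string
// for Ollama's completion endpoint.
function formatOllamaPrompt(messages: ChatMessage[]): string {
  return messages.map((m) => `${m.role}: ${m.content}`).join("\n") + "\nassistant:";
}

// Generic model definition and adapter configuration for the new ollama service.
const ollamaService = {
  id: "ollama",
  baseUrl: "http://localhost:11434",
  formatRequest: (model: string, messages: ChatMessage[]) => ({
    model,
    prompt: formatOllamaPrompt(messages),
    stream: false,
  }),
};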

@ericrallen ericrallen added the enhancement New feature or request label Aug 29, 2023
@ishaan-jaff

Hi @ericrallen, I'm the maintainer of LiteLLM. It lets you create a proxy server to call 100+ LLMs, and I think it can solve your problem (I'd love your feedback if it does not).

Try it here: https://docs.litellm.ai/docs/proxy_server

Using LiteLLM Proxy Server

import openai
openai.api_base = "http://0.0.0.0:8000/"  # point the client at the LiteLLM proxy
# note: this uses the legacy (pre-1.0) openai SDK ChatCompletion interface
print(openai.ChatCompletion.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))

Creating a proxy server

Ollama models

$ litellm --model ollama/llama2 --api_base http://localhost:11434

Hugging Face Models

$ export HUGGINGFACE_API_KEY=my-api-key #[OPTIONAL]
$ litellm --model huggingface/bigcode/starcoder

Anthropic

$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1

PaLM

$ export PALM_API_KEY=my-palm-key
$ litellm --model palm/chat-bison

@ericrallen
Member Author

Hey there, @ishaan-jaff!

While I think LiteLLM introduces some interesting functionality, I can't really see how it would be practical to integrate with this plugin, which runs inside an Electron app, but I might be missing how it could be easily integrated.

@ericrallen
Member Author

Would love to get jmorganca/ollama#751 merged to make it easier for Obsidian plugins to find Ollama hosts on the local network without needing to enter IP addresses manually.
