ollama support #167
This is something I'd be interested to see. Perhaps I could help with its implementation. Heck, I'd be interested to see an Ollama + Qdrant approach, as that is along the lines of what I'm using now with LangChain. Interesting project, and I'm open to helping out with such features (depending on the learning curve, I guess) |
I totally agree we should do this. Below is an excerpt from the docs, which use ChromaDB + "Other LLM": https://vanna.ai/docs/postgres-other-llm-chromadb.html So we just need someone to implement these functions for Ollama:

class MyCustomLLM(VannaBase):
    def __init__(self, config=None):
        pass

    def generate_plotly_code(self, question: str = None, sql: str = None, df_metadata: str = None, **kwargs) -> str:
        # Implement here
        pass

    def generate_question(self, sql: str, **kwargs) -> str:
        # Implement here
        pass

    def get_followup_questions_prompt(self, question: str, question_sql_list: list, ddl_list: list, doc_list: list, **kwargs):
        # Implement here
        pass

    def get_sql_prompt(self, question: str, question_sql_list: list, ddl_list: list, doc_list: list, **kwargs):
        # Implement here
        pass

    def submit_prompt(self, prompt, **kwargs) -> str:
        # Implement here
        pass

class MyVanna(ChromaDB_VectorStore, MyCustomLLM):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        MyCustomLLM.__init__(self, config=config)
vn = MyVanna()

The implementation would look similar to the existing one for OpenAI, or the one for Mistral. If you @seanmavley or @spiazzi would like to take a shot at this, that would be awesome! If not, I think I might be able to prioritize this after January 31. |
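Purely as an illustration (not the eventual vanna implementation), the pieces of MyCustomLLM that actually talk to Ollama could be filled in roughly like this, assuming Ollama is serving on its default http://localhost:11434, that prompt arrives as a plain string, and using made-up config keys; the other stub methods from the template are omitted:

import requests

class MyCustomLLM(VannaBase):
    def __init__(self, config=None):
        self.config = config or {}
        self.model = self.config.get("model", "mistral")  # placeholder config key
        self.host = self.config.get("ollama_host", "http://localhost:11434")

    def submit_prompt(self, prompt, **kwargs) -> str:
        # POST the constructed prompt to Ollama's /api/generate endpoint;
        # with stream set to False the whole completion comes back as one JSON payload.
        resp = requests.post(
            f"{self.host}/api/generate",
            json={"model": self.model, "prompt": str(prompt), "stream": False},
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["response"]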
@zainhoda I'll take a shot at it this week and see how it goes. Quickly: I see both examples you included above need some form of API key. Is there any example so far using 100% everything local? |
@seanmavley awesome!
So far not yet, but most of the code is related to prompt construction. Really the major change will be to this function, which would take the constructed prompt and send it to ollama:

def submit_prompt(self, prompt, **kwargs) -> str:

I personally wasn't able to get ollama working on my M1 MacBook Pro. My fans started spinning up and then my computer overheated and died :-/ |
Noted. Thanks for the heads up. Will get to it as soon as possible. |
Hi all,
I have started and run Ollama on a Mac mini M2. It is intensive with llama2 and Mistral; I can suggest phi as an LLM to test it. Let's look at the OpenAI implementation and maybe I can help.
|
Based on https://github.com/jmorganca/ollama-python, the Mistral client could be replaced with the Ollama one, e.g. from ollama import Client. It might be a simple swap. |
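A minimal sketch of that swap, assuming the ollama package from the repository above is installed, that Ollama is serving on its default port, and that prompt is already a list of role/content message dicts (as in the Mistral-style implementations); the model and host values here are placeholders:

from ollama import Client

def submit_prompt(self, prompt, **kwargs) -> str:
    # Connect to the local Ollama server and run a chat completion
    # over the messages Vanna has already assembled.
    client = Client(host="http://localhost:11434")
    response = client.chat(model="mistral", messages=prompt)
    return response["message"]["content"]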
I have something kind of working but I was again having issues running Ollama with my computer overheating. Would you mind helping me test this? Could you install and run from this branch:

Install

Run

from vanna.vannadb.vannadb_vector import VannaDB_VectorStore
from vanna.ollama import Ollama
class MyVanna(VannaDB_VectorStore, Ollama):
    def __init__(self, config=None):
        VannaDB_VectorStore.__init__(self, vanna_model='chinook', vanna_api_key=MY_VANNA_API_KEY, config=config)
        Ollama.__init__(self, config=config)

vn = MyVanna(config={'model': 'mistral'})
vn.ask("What are the top 5 customers by sales?") |
In a shell, run ollama run phi, then try with the phi model instead of mistral. That way you can at least test with a small model. |
@zainhoda Running the script above gives me this response in my terminal, using phi (the model I have at the moment).
I'm downloading llama2 to try with that model and see. |
@seanmavley thank you! this is pretty encouraging -- so it appears that it got the correct answer:

SELECT c.CustomerId, c.FirstName, c.LastName, SUM(i.Total) AS TotalSales
FROM Customer c
INNER JOIN Invoice i ON c.CustomerId = i.CustomerId
GROUP BY c.CustomerId, c.FirstName, c.LastName
ORDER BY TotalSales DESC
LIMIT 5;

However, after the answer it just kept continuing. This might be ok for now if we just parse out the first SELECT statement that we find in the answer. Thanks again! Will continue to work on this |
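One way that "parse out the first SELECT statement" step could be done, sketched here purely for illustration (this is not necessarily how Vanna ends up doing it, and it assumes the statement ends at the first semicolon):

import re

def extract_first_select(llm_output: str):
    # Grab everything from the first SELECT up to the first terminating semicolon,
    # ignoring case and letting the match span newlines.
    match = re.search(r"SELECT\b.*?;", llm_output, re.IGNORECASE | re.DOTALL)
    return match.group(0) if match else None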
I do not know if that's expected behavior. I'm trying to use
Yes, I'm learning the However, the llama2 model just keeps going on and on without ending. Not sure what the reason may be |
@seanmavley well right now there's not really an "expected" behavior since we're still trying this out -- you're learning what the expectations are lol! have you tried llama2 with other prompts? how long does that take? |
cc @zainhoda
And using a custom vanna like this (inspired by: https://vanna.ai/docs/getting-started.html):
I get this:
Using Llama 2 |
Running Ollama appears to work well (on my machine so far). The issue I had above (#167 (comment)) no longer exists. Since you started with the
At the moment, I'm not clear on the guide provided here: https://github.com/vanna-ai/vanna/blob/main/CONTRIBUTING.md#do-this-before-you-submit-a-pr about submitting PRs |
I am very interested to see Ollama support and am following this issue. I see the above example has been merged into the major-refactor branch. I am going to test with the lighter models provided by Ollama. |
With the release of Ollama 0.1.24, an OpenAI-compatible API is available, so the code is quite clean (assuming Ollama is running on the standard port 11434):

from openai import OpenAI

ollamaclient = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # required, but unused
)

class MyVanna(ChromaDB_VectorStore, OpenAI_Chat):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        OpenAI_Chat.__init__(self, client=ollamaclient, config=config)

vn = MyVanna(config={'model': 'openhermes'}) |
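If that works as described, usage should be the same as with any other Vanna LLM backend, for example (reusing the question from the earlier test, and assuming the vector store has already been trained):

vn.ask("What are the top 5 customers by sales?")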
I want to implement vanna with LangChain and Ollama and do not want to use any kind of API keys or Vanna models. Is it possible? |
Hi all, is there no Ollama support yet? It is quite common to have a local LLM. With Chroma and Ollama the setup could be completely local (and keeping confidential information local can be very important sometimes).
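Putting together the pieces from earlier in this thread, a fully local setup (no API keys and no hosted vector store) might look roughly like this, combining the Ollama class from the test branch with the Chroma-backed store; exact import paths and config keys may differ in the released package, and the model must already be pulled locally with ollama pull:

from vanna.chromadb.chromadb_vector import ChromaDB_VectorStore
from vanna.ollama import Ollama

class MyVanna(ChromaDB_VectorStore, Ollama):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        Ollama.__init__(self, config=config)

vn = MyVanna(config={'model': 'mistral'})  # any locally available Ollama model
vn.ask("What are the top 5 customers by sales?")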