Local LLM support via Ollama #53
Conversation
Signed-off-by: Sidney Nimako <snibo13@gmail.com>
this looks great @snibo13! I'll give it a test and let you know if it's ready. Are you running Ollama via WSL on Windows?
Yeah, running it with WSL.
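(For reference, a quick way to confirm an Ollama server running under WSL is reachable from the host: query its model-list endpoint. A minimal sketch, assuming the default port 11434:)

```ts
// Check that Ollama is reachable; /api/tags lists locally available models.
const res = await fetch('http://localhost:11434/api/tags');
console.log(res.ok ? await res.json() : `Ollama not reachable: ${res.status}`);
```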
Made a couple of changes; going to use this for a bit to see if there's anything else to address, but it's looking good!
pile-ai.mov
Cool, how do I use this? Could you offer a tutorial? Thanks.
Any traction on this?
Ollama has finally released libraries for calling its API from within apps; I hope this is useful. By the way, something about dmg-license is causing errors: "Cannot find module 'dmg-license'" is the error message.
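(For anyone curious, this refers to the official `ollama` npm package. A minimal sketch of using it; the model name `llama2` is just an example and must already be pulled locally:)

```ts
import ollama from 'ollama';

// Send a single chat message to a locally running Ollama server
// and print the model's reply.
const response = await ollama.chat({
  model: 'llama2',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
});
console.log(response.message.content);
```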
I know everyone might be busy with personal matters, but could you please check if there has been any progress on this? |
local AI support with ollama shipped in v0.9.8. |
I set up a system using Llama 2 via Ollama and modified the settings page so you can switch between Ollama and OpenAI and select which model to use when running locally. I'm running on a Windows PC, so I haven't been able to validate it on macOS, but I imagine everything should work.
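(A minimal sketch of how such provider switching could look; the `generateText` helper, the `Provider` type, and the routing logic are illustrative assumptions, not the PR's actual code. Both HTTP endpoints are the public Ollama and OpenAI APIs:)

```ts
type Provider = 'ollama' | 'openai';

// Route a prompt to either a local Ollama server or the OpenAI API,
// depending on the user's settings.
async function generateText(
  provider: Provider,
  model: string,
  prompt: string,
  openaiKey?: string,
): Promise<string> {
  if (provider === 'ollama') {
    // Ollama's local HTTP API listens on port 11434 by default.
    const res = await fetch('http://localhost:11434/api/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, prompt, stream: false }),
    });
    const data = await res.json();
    return data.response; // non-streaming responses carry the text here
  }
  // Otherwise fall back to OpenAI's chat completions endpoint.
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${openaiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```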
One note: it isn't set up to work with LlamaIndex as it stands. Ollama does expose an API endpoint for generating embeddings, so it might be possible to use a different vector database system and compute the embeddings explicitly.
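(For concreteness, a sketch of computing an embedding through Ollama's `/api/embeddings` endpoint; the `embed` helper name is an assumption, and the model passed in must support embeddings:)

```ts
// Request an embedding vector for a piece of text from a local Ollama server.
async function embed(model: string, prompt: string): Promise<number[]> {
  const res = await fetch('http://localhost:11434/api/embeddings', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt }),
  });
  const data = await res.json();
  return data.embedding; // array of floats
}
```

The resulting vectors could then be inserted into whatever vector store the app chooses, sidestepping the LlamaIndex integration.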