Feature: Google Gemini and Local LLM Support #308
@sumant1122 You're welcome to check out this issue: #277
Since we are currently working hard on improving the end-to-end user experience, we may not have enough time at the moment to support other LLMs and do prompt engineering for them as well. If you have thoughts on how to make the process easily scalable, we definitely welcome your feedback and contributions :) For customizing your LLM or document store, please also refer to the guide: https://docs.getwren.ai/installation/custom_llm Thank you
Thank you. Will check out the docs and try to contribute.
All, Ollama has been integrated in this branch, and you can also use any OpenAI API-compatible LLM. Related PR: #376
All, we now support Ollama and OpenAI API-compatible LLMs with the latest release: https://github.com/Canner/WrenAI/releases/tag/0.6.0 Setup instructions for running Wren AI with custom LLMs: https://docs.getwren.ai/installation/custom_llm#running-wren-ai-with-your-custom-llm-or-document-store Currently, there is one obvious limitation for custom LLMs: you must use the same provider (such as OpenAI or Ollama) for both the LLM and the embedding model. We'll fix that and release a new version soon. Stay tuned 🙂 I'll close this issue as completed now.
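As a rough sketch of the custom-LLM setup described above, a configuration pointing Wren AI at a local Ollama instance might look like the fragment below. The variable and model names here are illustrative assumptions, not the actual keys; the linked custom_llm guide is the authoritative reference.

```shell
# Hypothetical .env fragment for running Wren AI against a local Ollama
# instance. Exact variable names are assumptions -- consult
# https://docs.getwren.ai/installation/custom_llm for the real keys.

# Use Ollama for both the LLM and the embedding model, per the
# same-provider limitation noted above.
LLM_PROVIDER=ollama
EMBEDDER_PROVIDER=ollama

# Ollama's default local endpoint; it also exposes an
# OpenAI-compatible API under /v1.
OLLAMA_URL=http://localhost:11434

# Example model choices (must be pulled into Ollama beforehand,
# e.g. `ollama pull llama3`).
GENERATION_MODEL=llama3
EMBEDDING_MODEL=nomic-embed-text
```

Because Ollama also serves an OpenAI-compatible API, the same endpoint can be used by any client that accepts a custom OpenAI base URL.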
I see that OpenAI is being used for LLM tasks, but not everyone has access to a paid OpenAI account. Google Gemini support would be great, and a way to connect to a local LLM would be great too.