
Please add LiteLLM to this project #339

Open
Greatz08 opened this issue Apr 4, 2024 · 7 comments

Comments

@Greatz08

Greatz08 commented Apr 4, 2024

This project is pretty great, BUT we need more options for using different LLMs. You don't have to worry about building a solution that supports 100+ LLMs yourself: LiteLLM is another FOSS project that can handle this task for you.

Project LiteLLM link - https://github.com/BerriAI/litellm

Adding LiteLLM would be a big win for the project: many more LLMs would become usable with little effort, which is something everyone wants. The project would only need three main parameters from the user (base URL, model name, and API key), and with the general OpenAI API structure it can send a query and return the result. Many big projects have started adding support for LiteLLM to gain advanced capability the easy way, so study it; if you have any questions, the maintainers are pretty responsive. And if you want to know more about my personal experience using it with other great projects like Flowise, I can tell you about that in detail too.
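For illustration, here is a minimal sketch of that three-parameter flow, assuming the Python `litellm` package; the URL, model name, and key below are placeholders, not anything this project currently defines:

```python
# Sketch only: assumes `pip install litellm`; all values are placeholders.
from litellm import completion

# The three parameters the user would supply:
api_base = "http://localhost:11434"  # e.g. a local Ollama server (hypothetical)
model = "ollama/llama2"              # provider-prefixed model name
api_key = "sk-placeholder"           # provider key; local backends may ignore it

# LiteLLM exposes an OpenAI-style chat-completion call across its providers.
response = completion(
    model=model,
    api_base=api_base,
    api_key=api_key,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```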

@RohitX0X

RohitX0X commented Apr 4, 2024

Sounds great

@phalexo

phalexo commented Apr 4, 2024

Ollama already provides an OpenAI-compatible API. Why bother with LiteLLM?

@Greatz08
Author

Greatz08 commented Apr 5, 2024

@phalexo Remember, LiteLLM can use Ollama, not the other way around. The problem with Ollama alone is that you have to download and run a heavy model on your own system before you can point a project like this one at its base URL as the LLM source. With LiteLLM we can use any kind of model: closed-source ones like OpenAI, Claude, or Gemini; open-source models behind a public API like Groq (which serves Mixtral, Gemma, and Llama); or models running locally via Ollama. It solves the problem of driving many types of LLMs through a single API structure, which is very convenient and easy to use, and that's why this project needs it.
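To make the "single API structure" point concrete, a hedged sketch (again assuming the `litellm` package; the model identifiers are examples and would need valid credentials or a running Ollama instance):

```python
# Sketch: one call shape, many backends. Model names are examples only.
from litellm import completion

messages = [{"role": "user", "content": "Summarize this repository."}]

# Closed-source (OpenAI), hosted open-source (Groq), and local (Ollama)
# backends all answer the same OpenAI-style call; only the model string
# and credentials differ.
for model in ("gpt-4", "groq/mixtral-8x7b-32768", "ollama/llama2"):
    reply = completion(model=model, messages=messages)
    print(f"{model}: {reply.choices[0].message.content[:80]}")
```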

@yf007

yf007 commented Apr 8, 2024

We need LiteLLM in this project.

@cwallace

This would really take Devika to the next level. Unlocking so many available models would be a huge gain in capability.

@lehcode

lehcode commented Apr 30, 2024

I support this. The lack of LiteLLM support is a big minus.

@OSH212

OSH212 commented May 15, 2024

check #563
