Please add support for LiteLLM to this project #71

Closed
Greatz08 opened this issue Apr 1, 2024 · 4 comments
Labels: enhancement (New feature or request)

Comments

Greatz08 commented Apr 1, 2024

This project is pretty great, BUT we need more options to use different LLMs. You don't have to worry about building a solution that supports 100+ LLMs yourself, because LiteLLM is another FOSS project that can do that task for you.
Project LiteLLM link - https://github.com/BerriAI/litellm
You can study their project and see how it could be implemented here, similar to how you implemented support for Ollama. I believe you can do the same for LiteLLM, which would be a big win for the project, since many more people would easily be able to use many more LLMs, which is what everyone wants. The project would only need three major parameters from the user: base URL, model name, and API key. With the general OpenAI API structure it can then send the query and return the result (see the sketch below). Many big projects have started adding support for LiteLLM to make things more capable in an easier way, so study it, and if you have any questions you can ask the maintainers, they are pretty responsive. If you want to know more about my personal experience of using it with other great projects like Flowise, I can tell you that in detail too.
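For illustration, a minimal sketch of what that call looks like through LiteLLM's Python API, using the three parameters mentioned above (the model name, base URL, and API key below are placeholders, not values from this project):

```python
import litellm

# Placeholder values: point these at whichever provider you want to use.
response = litellm.completion(
    model="gpt-3.5-turbo",                  # model name
    api_base="https://api.openai.com/v1",   # base URL
    api_key="sk-...",                       # API key
    messages=[{"role": "user", "content": "Hello, world"}],
)

# LiteLLM returns an OpenAI-style response object.
print(response.choices[0].message.content)
```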

apocas (Owner) commented Apr 1, 2024

We already support any LLM that Ollama supports; you don't need any code to add an LLM. You can do it via the browser in this case :)
Besides Ollama for local LLMs, any public LLM supported by LlamaIndex is also easily supported.

So I believe we are well served regarding LLMs for now :)

apocas closed this as completed Apr 1, 2024
apocas (Owner) commented Apr 1, 2024

Per https://docs.llamaindex.ai/en/stable/examples/llm/litellm/, it's very easy to add support for LiteLLM in RestAI.
I will add it in the next release :)
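For reference, a hedged sketch of what the LlamaIndex integration in the linked docs looks like (this assumes the split-package layout where llama-index-llms-litellm is installed; the exact import path depends on the LlamaIndex version):

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.litellm import LiteLLM

# Any model string LiteLLM understands can be used here; "gpt-3.5-turbo" is
# just an example and assumes OPENAI_API_KEY is set in the environment.
llm = LiteLLM(model="gpt-3.5-turbo")
response = llm.chat([ChatMessage(role="user", content="Hey! How's it going?")])
print(response)
```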

apocas reopened this Apr 1, 2024
apocas added the enhancement (New feature or request) label Apr 1, 2024
apocas (Owner) commented Apr 1, 2024

Et voilà, you can now use LiteLLM :)
If you use a user with admin privileges, just add a new LLM using the "LiteLLM" class and specify the parameters you want for this LLM.


In master, published in the next release.

apocas closed this as completed Apr 1, 2024
Greatz08 (Author) commented Apr 1, 2024

@apocas thanks, will test it later for sure :-)). The problem with Ollama alone was that you have to download and then run the heavy model on your own system before its base URL can be used in another project like this one as the LLM source for generating responses. With LiteLLM we can use any kind of model: closed source like OpenAI, Claude, or Gemini; open models behind a public API like Groq (which serves Mixtral, Gemma, Llama); or models running locally via Ollama. It solves the issue of running multiple types of LLMs behind a single API structure, which is very convenient and easy to use, and that's why I wanted it in this project.
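To illustrate that single API structure, a minimal sketch (the model strings are only examples; each provider's API key is assumed to be set in the environment, and the Ollama entry assumes a local Ollama server is running):

```python
import litellm

# One call shape for a hosted closed-source model, a public inference API
# (Groq), and a local Ollama model; only the model string changes.
for model in ["gpt-4", "groq/mixtral-8x7b-32768", "ollama/llama2"]:
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "ping"}],
    )
    print(model, "->", response.choices[0].message.content)
```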
