Error Fetching Models #114
I think the most complete LLM backend is Azure OpenAI.
So is it a complete waste of time to use this unless I have Azure OpenAI?
Hi @Webslug @hchen2020, I'm the maintainer of LiteLLM (an abstraction to call 100+ LLMs). We let you create a proxy server to call 100+ LLMs, and I think it can solve your problem (I'd love your feedback if it does not).

Try it here: https://docs.litellm.ai/docs/proxy_server

Using the LiteLLM proxy server:

```python
import openai

openai.api_base = "http://0.0.0.0:8000/"  # proxy url
print(openai.ChatCompletion.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))
```

Creating a proxy server:

Ollama models:

```shell
$ litellm --model ollama/llama2 --api_base http://localhost:11434
```

Hugging Face models:

```shell
$ export HUGGINGFACE_API_KEY=my-api-key  # [OPTIONAL]
$ litellm --model claude-instant-1
```

Anthropic:

```shell
$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1
```

PaLM:

```shell
$ export PALM_API_KEY=my-palm-key
$ litellm --model palm/chat-bison
```
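For anyone who prefers not to depend on the `openai` client package, here is a minimal sketch of talking to such a proxy with only the standard library. It assumes the proxy is running at `http://0.0.0.0:8000` and accepts the OpenAI-style chat-completions request body; the helper names (`build_chat_payload`, `chat`) are my own, not part of LiteLLM.

```python
import json
from urllib import request

# Assumed proxy address, matching the example above.
PROXY_URL = "http://0.0.0.0:8000"

def build_chat_payload(model, content):
    """Build an OpenAI-style chat-completion request body (pure, no network)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }

def chat(model, content):
    """POST a chat completion to the proxy; requires the proxy to be running."""
    body = json.dumps(build_chat_payload(model, content)).encode("utf-8")
    req = request.Request(
        f"{PROXY_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Because the proxy speaks the same wire format as OpenAI's API, any OpenAI-compatible client can be pointed at it by changing only the base URL.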
Sorry, I gave up on this as it was getting too confusing, and in the end I just built my own console Discord AI bot. Thanks for the help and for the project.
Hi, I managed to build BotSharp and got the UI running. Can it load many of the Hugging Face models? I was hoping I could run local models on my system. I get this error, and I could not figure out how to change the path to the models. I don't want to use OpenAI.