[Feature]: Too few models are supported #1297
Comments
Oh, I see gemini and claude in the doc now, my bad. Edited.
what doc did you see that made it seem we don't support enough providers @2catycm ?
Hi @2catycm, please feel free to add any specific models / providers you'd like support for, in this thread - #1294. Closing this issue in favor of tracking it on that thread. Similar to @ishaan-jaff, curious what made you feel we didn't support a lot of providers?
@krrishdholakia |
what made you feel we didn't support a lot of providers?
Because most Chinese users can't access models from other countries (they don't have VPNs), we need second, third, and fourth options.
There are relevant docs: |
Yes, some models are illegal in China because the data is stored on servers outside China (which lawmakers may consider dangerous), and the algorithms have not been registered with regulators. It's just like why TikTok is now becoming illegal in America. With this context, a lack of support for LLMs that are legal in China would seem "too few" to some users.
So maybe, for the same reason, the models on that list might not be legal in America or other countries; I'm not sure about that.
But open-source models would definitely not be a legal problem.
Hoping for support for Alibaba's Qwen.
It seems Kimi and ChatGLM now use a format similar to OpenAI's, which means that with just a little work you can use them via LiteLLM. I have not tested Qwen, but it seems plausible.
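For anyone who wants to try this, something like the following should work, since LiteLLM can route any OpenAI-compatible endpoint through its `openai/` prefix. A minimal sketch; the Moonshot (Kimi) base URL and model name here are my assumptions, so check the provider's docs:

```python
# Minimal sketch: calling an OpenAI-compatible endpoint (e.g. Kimi) via LiteLLM.
# The base URL and model name are assumptions; verify them in the provider docs.
import litellm

response = litellm.completion(
    model="openai/moonshot-v1-8k",          # "openai/" prefix = route via the OpenAI-compatible client
    api_base="https://api.moonshot.cn/v1",  # assumed Kimi endpoint
    api_key="YOUR_PROVIDER_KEY",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```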
Qwen's request format is quite different;
Yeah, after taking a look into it, I noticed that too. Seems DashScope is a must. Will look into it, but ChatGLM and Kimi normally work smoothly.
Hey @dl942702882 @NeverOccurs, why use litellm to proxy openai-compatible models?
At first I used it to proxy Ollama in order to use function calling. Then I tried using it to proxy some OpenAI-compatible LLMs to see if I could achieve easier management of LLMs. Mostly it's because I'm new to this, so I'm just trying things out.
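For reference, the Ollama setup I mean looks roughly like this; a minimal sketch assuming a local Ollama server on the default port with a `llama2` model pulled:

```python
# Minimal sketch: proxying a local Ollama model through LiteLLM.
# Assumes Ollama is running on its default port with "llama2" pulled.
import litellm

response = litellm.completion(
    model="ollama/llama2",              # "ollama/" prefix selects the Ollama provider
    api_base="http://localhost:11434",  # default Ollama server address
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```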
Because Qwen is not OpenAI-compatible; the chat request format is different. See the Qwen API doc here.
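To illustrate the difference, a Qwen call goes through the dashscope SDK rather than an OpenAI-style chat/completions request. A sketch based on the public SDK; parameter names should be double-checked against the Qwen API doc linked above:

```python
# Sketch of a Qwen call via the DashScope SDK; note it is not an
# OpenAI-style chat/completions request. Verify details against the Qwen docs.
import dashscope

dashscope.api_key = "YOUR_DASHSCOPE_KEY"

response = dashscope.Generation.call(
    model="qwen-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    result_format="message",  # request an OpenAI-like message structure in the output
)
print(response.output.choices[0].message.content)
```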
LangChain already has corresponding examples, and I hope these models can be supported as soon as possible. Here are some links I hope you find helpful.
The Feature
Many well-known LLM models are not included, while other similar GitHub repos like https://github.com/songquanpeng/one-api and https://github.com/llmapi-io/llmapi-server support them.
For example, this repo doesn't include:
and these models are of high interest to me.
Motivation, pitch
This repo seems elegant and simple, which is good compared to others. But it doesn't support some models that are popular in certain communities.
Twitter / LinkedIn details
No response