[Bug]: Fix docker image for multiple deployment Router #696
Labels: bug
Support for routing between multiple deployments has been added to the openai-proxy endpoint. It also supports accepting the model list either via a config.yaml (line 26 in commit 751e9f0) or from the model_list param in the request body (line 114 in commit e118ce5).
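For illustration, a config.yaml with two deployments behind one alias might look like the sketch below; the field names (model_name, litellm_params) are assumptions based on litellm's router conventions, not copied from router_config_template.yaml:

```yaml
# Hypothetical router config sketch -- field names are assumptions,
# not taken from router_config_template.yaml.
model_list:
  - model_name: gpt-3.5-turbo        # alias the proxy exposes
    litellm_params:                  # params forwarded to the underlying call
      model: azure/chatgpt-v-2
      api_key: <azure-api-key>
      api_base: <azure-api-base>
  - model_name: gpt-3.5-turbo        # second deployment, same alias
    litellm_params:
      model: gpt-3.5-turbo
      api_key: <openai-api-key>
```

The model_list param in the request body would presumably carry the same structure, passed as JSON.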
Remaining Work:
This is live now. How to deploy the router:
```shell
git clone https://github.com/BerriAI/litellm
cp ./router_config_template.yaml ./router_config.yaml
docker build -t litellm-proxy . --build-arg CONFIG_FILE=./router_config.yaml
docker run --name litellm-proxy -e PORT=8000 -p 8000:8000 litellm-proxy
```

Test:
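A minimal smoke test could be a curl against the running proxy. The /v1/chat/completions route, host, and model alias below are assumptions for illustration, not taken from the litellm docs:

```shell
# Hypothetical smoke test -- endpoint path and model alias are assumptions.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Which deployment am I routed to?"}]
  }'
```

If routing works, repeated calls should be load-balanced across the deployments listed in router_config.yaml.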
Docs updated as well. @alabrashJr can you let me know if this solves your problem?
What happened?
Our openai-proxy docker image no longer supports multiple model deployments. This is what we're currently directing users to: https://docs.litellm.ai/docs/routing#handle-multiple-azure-deployments-via-openai-proxy-server.
Relevant log output
No response
Twitter / LinkedIn details
No response