Closed
Labels
bug: Something isn't working
Description
What happened?
Our team recently noticed a bug when switching from sourcing models only from the litellm configuration file to using database models. After setting STORE_MODEL_IN_DB='True', (1) requests to models already configured in the config file fail with 400 HTTP Bad Request errors, and (2) those models intermittently appear and disappear in the admin UI. Multiple members of my team have confirmed both issues.
It appears that litellm cannot serve config-file models and database models simultaneously.
To reproduce the issue:
- Add a config file with some initial models (e.g. azure gpt-4.1)
- Test to verify that requests to those models succeed and the models display in the admin UI
- Set STORE_MODEL_IN_DB='True'
- Add a different model from the same LLM provider (e.g. azure gpt-4.1-mini) as a database model
- Test to verify that requests to the initial model (e.g. gpt-4.1) now fail and the admin UI glitches as described above
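For reference, the repro setup looks roughly like this. This is a sketch, not our exact deployment: the Azure deployment name and environment-variable references are placeholders.

```yaml
# config.yaml passed to the proxy, e.g. `litellm --config config.yaml`
model_list:
  - model_name: gpt-4.1              # the initial config-file model
    litellm_params:
      model: azure/gpt-4.1           # placeholder Azure deployment name
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
```

With the proxy already serving this model, we then set STORE_MODEL_IN_DB='True' (and a DATABASE_URL for the proxy database) and added gpt-4.1-mini as a database model.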
Workaround:
- Duplicate the config-file model configuration in the database, so the router finds the model among the database models
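Concretely, we re-added the config-file model as a database model through the proxy admin API (we used the model-creation endpoint the admin UI calls; the body below is a sketch with placeholder credentials, not our real values):

```json
{
  "model_name": "gpt-4.1",
  "litellm_params": {
    "model": "azure/gpt-4.1",
    "api_base": "https://example.openai.azure.com",
    "api_key": "os.environ/AZURE_API_KEY"
  }
}
```

Once this duplicate database entry exists, requests to gpt-4.1 succeed again and the model stops disappearing from the admin UI.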
Relevant log output
Level: High
Timestamp: 17:25:40
Message: LLM API call failed: `litellm.BadRequestError: You passed in model=gpt-4.1. There is no 'model_name' with this string . Received Model Group=gpt-4.1
Available Model Group Fallbacks=None`
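The error message suggests the router's model-group lookup only sees one source of models at a time: once database models are loaded, the config-file model name is no longer in the index. The following is a minimal, hypothetical sketch of that failure mode, not litellm's actual code; all names are illustrative.

```python
# Hypothetical sketch: a router that indexes deployments by their public
# model_name ("model group"). If a refresh rebuilds the index from the
# database alone instead of merging both sources, config-file models vanish.

class BadRequestError(Exception):
    pass

def build_model_index(deployments):
    """Index deployments by model_name (the model group clients request)."""
    index = {}
    for d in deployments:
        index.setdefault(d["model_name"], []).append(d)
    return index

def route(index, model):
    """Return a deployment for the requested model group, or raise."""
    if model not in index:
        raise BadRequestError(
            f"You passed in model={model}. There is no 'model_name' with "
            f"this string. Received Model Group={model} "
            f"Available Model Group Fallbacks=None"
        )
    return index[model][0]

# Config-file models, indexed at startup:
config_models = [{"model_name": "gpt-4.1",
                  "litellm_params": {"model": "azure/gpt-4.1"}}]
# Database models, added after STORE_MODEL_IN_DB is enabled:
db_models = [{"model_name": "gpt-4.1-mini",
              "litellm_params": {"model": "azure/gpt-4.1-mini"}}]

# If a refresh replaces the index with database models only,
# the config-file model gpt-4.1 is lost and requests to it fail:
index = build_model_index(db_models)
try:
    route(index, "gpt-4.1")
except BadRequestError as e:
    print("lookup failed:", e)
```

Merging both sources into one index (config_models + db_models) makes the lookup succeed, which matches the behavior of our duplicate-the-model workaround.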
Are you a ML Ops Team?
Yes
What LiteLLM version are you on ?
v1.72.0
Twitter / LinkedIn details
No response