[Bug]: errors when using database and config models #11623

@lucinvitae

Description

What happened?

Our team recently noticed a bug when switching from sourcing models only from the litellm configuration file to also using database models. After setting STORE_MODEL_IN_DB='True', (1) requests to the models already configured in the config file fail with 400 HTTP Bad Request errors, and (2) those models intermittently appear and disappear in the admin UI. Multiple members of my team can confirm both issues.

It seems like litellm cannot handle both database and config models simultaneously.

To reproduce the issue:

  1. Add a config file with some initial models (e.g. azure gpt-4.1); a minimal sketch follows this list.
  2. Test to verify that model requests succeed and the models display in the admin UI.
  3. Set STORE_MODEL_IN_DB='True'.
  4. Add a different model from the same LLM provider (e.g. azure gpt-4.1-mini).
  5. Test to verify that requests to the initial model (e.g. gpt-4.1) now fail and the admin UI glitches as described above.
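For reference, a minimal sketch of the setup in steps 1–3, assuming an Azure deployment; the deployment name, api_base, and master key shown are illustrative placeholders, not our actual values:

```shell
# config.yaml -- minimal sketch; deployment name, api_base, and key are placeholders
cat > config.yaml <<'EOF'
model_list:
  - model_name: gpt-4.1
    litellm_params:
      model: azure/gpt-4.1
      api_base: https://example-resource.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY
EOF

# step 1: start the proxy against the config
litellm --config config.yaml

# step 2: verify a request to the config-defined model succeeds
curl -s http://0.0.0.0:4000/v1/chat/completions \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4.1", "messages": [{"role": "user", "content": "ping"}]}'

# step 3: enable database models, then restart the proxy
export STORE_MODEL_IN_DB='True'
```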

Workaround:

  • duplicate the model configuration in the database (see the sketch below)
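Concretely, re-adding the same config-file models through the proxy's database-backed model management makes routing resolve them again. A hedged sketch using the proxy's /model/new admin endpoint (the payload shape is our best understanding; the key and api_base are placeholders):

```shell
# workaround sketch: duplicate the config-defined model into the database
# via the /model/new admin endpoint (assumes master key sk-1234)
curl -s http://0.0.0.0:4000/model/new \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
        "model_name": "gpt-4.1",
        "litellm_params": {
          "model": "azure/gpt-4.1",
          "api_base": "https://example-resource.openai.azure.com/",
          "api_key": "os.environ/AZURE_API_KEY"
        }
      }'
```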

Relevant log output

Level: High
Timestamp: 17:25:40

Message: LLM API call failed: `litellm.BadRequestError: You passed in model=gpt-4.1. There is no 'model_name' with this string . Received Model Group=gpt-4.1
Available Model Group Fallbacks=None`

Are you a ML Ops Team?

Yes

What LiteLLM version are you on?

v1.72.0

Twitter / LinkedIn details

No response

Labels: bug (Something isn't working)