Closed
Labels: Priority, bug, good first issue
Description
Based on a Slack thread; all relevant info is duplicated here.
I think litellm is dropping too many model_options parameters, which makes it difficult to work with. I believe this comes from our use of `get_supported_openai_params`, which only returns the supported OpenAI params; params that are supported by the provider/endpoint but are not OpenAI params get rejected. The `drop_params=True` setting may also be contributing.
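To illustrate, here is a minimal sketch of the kind of filtering I suspect is happening. The helper function is made up for illustration, but `get_supported_openai_params` and `drop_params` are real litellm knobs:

```python
import litellm

def filter_model_options(model_id: str, model_options: dict) -> dict:
    """Hypothetical filter: keep only params litellm reports as OpenAI-supported.

    Anything provider-specific (e.g. api_key or extra_headers for RITS)
    would be silently dropped here.
    """
    supported = litellm.get_supported_openai_params(model=model_id) or []
    return {k: v for k, v in model_options.items() if k in supported}

# drop_params=True tells litellm itself to strip unsupported params
# instead of raising, which compounds the silent-dropping problem.
litellm.drop_params = True
```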
For example, when I try to target RITS with LiteLLM and set the appropriate model options, my request still fails because these keys are dropped:
"api_key": rits_apikey, "extra_headers": {"RITS_API_KEY": rits_apikey}
I think there are also some improvements that could be made to the backend init (a rough sketch follows the list):
- allow an api_key to be passed in; even though litellm handles these as model options, it's unintuitive compared to how the other backends operate
- I believe the default model_id should actually use `ollama_chat`
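A rough sketch of what that init could look like; the class name and default value here are hypothetical, not the current backend code:

```python
class LiteLLMBackend:
    def __init__(
        self,
        model_id: str = "ollama_chat/granite3.3:8b",  # hypothetical default using the ollama_chat prefix
        api_key: str | None = None,
        model_options: dict | None = None,
    ):
        self.model_id = model_id
        self.model_options = dict(model_options or {})
        # Accept api_key directly, like the other backends, instead of
        # requiring callers to tuck it into model_options.
        if api_key is not None:
            self.model_options.setdefault("api_key", api_key)
```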