fix: watsonx and litellm parameter filtering #187

Conversation
```python
if len(unsupported_openai_params) > 0:
    FancyLogger.get_logger().warning(
        f"litellm will automatically drop the following openai keys that aren't supported by the current model/provider: {', '.join(unsupported_openai_params)}"
    )
```
Is there a way to force LiteLLM to accept parameters that we should expose through our API as well? I'm thinking of an analogy to the `--force` flag of some Linux commands.
We can set `drop_params=False` in the call to the model, but I think this change already accomplishes what you're asking for with respect to accepting parameters that we should expose through our API.
Previously, we dropped all params that weren't "known" and "basic" OpenAI parameters. Now, we let LiteLLM drop "known" but "unsupported" OpenAI params; all other params get passed through transparently.
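As a rough illustration of that split (a minimal sketch, not code from this PR; the model string is a placeholder and the sketch assumes LiteLLM's public `get_supported_openai_params` and `completion` APIs):

```python
import litellm

# LiteLLM knows which OpenAI params a given model/provider supports.
supported = litellm.get_supported_openai_params(model="watsonx/ibm/granite-13b-chat-v2")
print(supported)

# With drop_params=True, a "known" OpenAI param that this provider does not
# support is dropped by LiteLLM itself; everything else (provider-specific
# extras such as api_base or api_key) is forwarded untouched.
response = litellm.completion(
    model="watsonx/ibm/granite-13b-chat-v2",  # placeholder model
    messages=[{"role": "user", "content": "Hello"}],
    drop_params=True,   # let LiteLLM drop known-but-unsupported OpenAI params
    temperature=0.2,    # standard OpenAI param, forwarded when supported
)
```

With `drop_params=False`, LiteLLM would instead raise an error for an unsupported known param rather than dropping it silently.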
And there are no false negatives, i.e. cases where LiteLLM filters out a parameter that the model actually understands but LiteLLM assumes it doesn't?
It's possible; I searched the LiteLLM GitHub and the only issues I could find with drop_params are that it is too permissive (i.e. it keeps parameters around that it shouldn't). We can disable drop_params if you'd prefer to just pass through all parameters.
ok.. false positives are ok IMHO.
* fix: watsonx param filter
* fix: litellm model options filtering and tests
* fix: change conftest to skip instead of fail qual tests on github
* fix: remove comment
* fix: test defaults
* test: fixes to litellm test
* test: fix test defaults
Closes #151
Made a small change to conftest.py to make it skip tests instead of xfailing them.
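A minimal sketch of what skip-instead-of-xfail can look like in a `conftest.py` (the `qualitative` marker name and the GitHub detection are assumptions, not the exact code in this PR):

```python
# conftest.py (sketch): skip qualitative tests on GitHub CI instead of xfailing them.
import os

import pytest


def pytest_collection_modifyitems(config, items):
    # GitHub Actions sets GITHUB_ACTIONS=true in its environment.
    if os.environ.get("GITHUB_ACTIONS") != "true":
        return
    skip_qual = pytest.mark.skip(reason="qualitative tests are skipped on GitHub CI")
    for item in items:
        if "qualitative" in item.keywords:  # assumed marker name
            item.add_marker(skip_qual)
```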
Watsonx:

* Stop using `get_sample_params` to filter out acceptable parameters; it doesn't list all of them

LiteLLM:

* Use `drop_params=True` to drop parameters passed to LiteLLM, i.e. LiteLLM does all the dropping internally

The Watsonx changes were made because we were incorrectly filtering out `MAX_NEW_TOKENS` even though it is supported.

The LiteLLM changes were made for similar reasons. Also, we were filtering out all non-standard parameters. Prior to the change, I couldn't target an arbitrary `hosted_vllm` API endpoint; now I can by passing in the `api_key`, `base_url`, and `api_base` parameters (see the sketch below).
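For instance, targeting an OpenAI-compatible vLLM server now works roughly like this (a hedged sketch with placeholder endpoint, key, and model name, assuming LiteLLM's `hosted_vllm/` provider prefix):

```python
import litellm

# Placeholder endpoint/key/model: the point is that api_base and api_key are
# forwarded to the hosted_vllm provider instead of being filtered out.
response = litellm.completion(
    model="hosted_vllm/meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello."}],
    api_base="http://localhost:8000/v1",
    api_key="sk-placeholder",
    drop_params=True,  # LiteLLM drops known-but-unsupported OpenAI params itself
)
print(response.choices[0].message.content)
```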