Conversation

Contributor

@jakelorocco jakelorocco commented Oct 8, 2025

Closes #151 (litellm backend params / model options improvements).
Made a small change to conftest.py so that it skips tests instead of xfailing them.

Watsonx:

  • no longer uses get_sample_params to filter out acceptable parameters; it does not list all of them

LiteLLM:

  • the default for ollama now uses the "ollama_chat/" prefix, which targets Ollama's chat endpoint
  • changed parameter filtering
    • we now rely solely on drop_params=True, i.e., LiteLLM does all of the dropping internally (see the sketch after this list)
    • added provider-specific remapping capabilities; this is necessary when LiteLLM doesn't recognize OpenAI-compatible parameters for a provider
    • added logging to show which keys were unknown but still passed to LiteLLM, and which OpenAI-compatible keys LiteLLM believes it will drop for this call (there are false positives here)
  • added new tests
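
For context, here is a minimal sketch of the drop_params behavior at the LiteLLM level; the model name and sampling values are illustrative, not the backend's actual defaults:

```python
import litellm

# "ollama_chat/<model>" routes requests through Ollama's chat endpoint; the
# model name and sampling values here are just examples.
response = litellm.completion(
    model="ollama_chat/llama3.2",
    messages=[{"role": "user", "content": "Hello!"}],
    # drop_params=True tells LiteLLM to silently drop OpenAI-compatible
    # parameters it knows the target provider does not support, instead of
    # raising an error.
    drop_params=True,
    temperature=0.2,
    max_tokens=64,
)
print(response.choices[0].message.content)
```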

The Watsonx changes were made because we were incorrectly filtering out MAX_NEW_TOKENS even though it is supported.

The LiteLLM changes were made for similar reasons. We were also filtering out all non-standard parameters. Prior to this change, I couldn't target an arbitrary hosted_vllm API endpoint; now I can by passing in the api_key, base_url, and api_base parameters.
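
As an illustration of that hosted_vllm case, the options now flow through to LiteLLM roughly as below; the endpoint, key, and model name are placeholders, not values from this PR:

```python
import litellm

# Placeholder endpoint, key, and model for an arbitrary vLLM server exposing
# an OpenAI-compatible API; before this change these options were filtered
# out instead of being forwarded.
response = litellm.completion(
    model="hosted_vllm/meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "ping"}],
    api_base="http://localhost:8000/v1",  # LiteLLM also accepts base_url
    api_key="EMPTY",
    drop_params=True,
)
```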


mergify bot commented Oct 8, 2025

Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🟢 Enforce conventional commit

Wonderful, this rule succeeded.

Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/

  • title ~= ^(fix|feat|docs|style|refactor|perf|test|build|ci|chore|revert|release)(?:\(.+\))?:

@jakelorocco jakelorocco marked this pull request as ready for review October 15, 2025 23:40

if len(unsupported_openai_params) > 0:
    FancyLogger.get_logger().warning(
        f"litellm will automatically drop the following openai keys that aren't supported by the current model/provider: {', '.join(unsupported_openai_params)}"
    )
Contributor

Is there a way to force LiteLLM to accept parameters that we should expose through our API as well?

I'm thinking of an analogy to the --force flag in some Linux commands.

Contributor Author

We can set drop_params=False in the call to the model. But I think this change already accomplishes what you're asking for with respect to accepting parameters that we should expose through our API.

Previously, we dropped all params that weren't "known" and "basic" OpenAI parameters. Now, we let LiteLLM drop "known" but "unsupported" OpenAI params. All other params get passed through transparently.
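
To make the dropped-keys warning concrete, here is a rough sketch of how the known-but-unsupported OpenAI params can be computed with LiteLLM; the function name, the parameter-name set, and the plain print are illustrative and not the exact code in this PR:

```python
import litellm

# Illustrative subset of OpenAI-compatible parameter names.
OPENAI_PARAMS = {"temperature", "max_tokens", "top_p", "seed", "stop", "tools"}

def log_params_litellm_will_drop(model: str, model_options: dict) -> None:
    # Parameters LiteLLM reports as supported for this model/provider.
    supported = set(litellm.get_supported_openai_params(model=model) or [])
    # Known OpenAI-compatible keys the caller passed that LiteLLM believes the
    # provider does not support; with drop_params=True these will be dropped.
    unsupported = [
        k for k in model_options if k in OPENAI_PARAMS and k not in supported
    ]
    if unsupported:
        print(
            "litellm will automatically drop the following openai keys: "
            + ", ".join(unsupported)
        )
```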

Contributor

And there are no false negatives, i.e., cases where LiteLLM filters out a parameter that the model understands but LiteLLM assumes it doesn't?

Contributor Author

It's possible; I searched the LiteLLM GitHub, and the only issues I could find with drop_params were that it is too permissive (i.e., it keeps parameters around that it shouldn't). We can disable drop_params if you'd prefer to just pass through all parameters.

Contributor

ok.. false positives are ok IMHO.

@jakelorocco jakelorocco merged commit 793844c into main Oct 20, 2025
7 of 10 checks passed
@jakelorocco jakelorocco deleted the jal/fix-watsonx-params branch October 20, 2025 13:05
tuliocoppola pushed a commit to tuliocoppola/mellea that referenced this pull request Nov 5, 2025
* fix: watsonx param filter

* fix: litellm model options filtering and tests

* fix: change conftest to skip instead of fail qual tests on github

* fix: remove comment

* fix: test defaults

* test: fixes to litellm test

* test: fix test defaults
