
Conversation

Contributor

@Ratish1 Ratish1 commented Sep 23, 2025

Description

Previously, LiteLLMModel.structured_output rejected any model that litellm.utils.supports_response_schema reported as unsupported. This produced false failures when the user had configured a LiteLLM proxy, because the proxy may itself support response_format or negotiate support differently.

This PR:

  • Allows client_args["use_litellm_proxy"] (or use_proxy) to bypass the supports_response_schema() check.

  • Continues to enforce supports_response_schema() for non-proxy usage.

  • Calls litellm.acompletion(...) with the response_format argument and parses the returned tool_calls content into the Pydantic model.

  • Adds a unit test verifying the proxy-bypass behavior.
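The bypass described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the helper name is hypothetical, and the real change inlines this logic inside structured_output.

```python
from typing import Any, Mapping

def should_skip_schema_check(client_args: Mapping[str, Any]) -> bool:
    """Return True when a LiteLLM proxy is configured; in that case the
    litellm.utils.supports_response_schema() guard is bypassed.
    (Hypothetical helper; the PR inlines this logic in structured_output.)"""
    return bool(client_args.get("use_litellm_proxy") or client_args.get("use_proxy"))

print(should_skip_schema_check({"use_litellm_proxy": True}))  # True: proxy bypasses the guard
print(should_skip_schema_check({"use_proxy": True}))          # True: alternate key also accepted
print(should_skip_schema_check({}))                           # False: non-proxy keeps the guard
```

Non-proxy callers still go through supports_response_schema() as before; only explicitly proxy-configured clients skip it.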

Key Changes

src/strands/models/litellm.py

  • Update structured_output to skip the supports_response_schema guard when using a LiteLLM proxy.
  • Add a defensive check for the presence of model_id.

tests/strands/models/test_litellm.py

  • Add test_structured_output_with_proxy_bypasses_support_check.
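A self-contained sketch of what such a test might look like (the class, method names, and fixtures here are stand-ins for illustration; the real test lives in tests/strands/models/test_litellm.py and exercises the actual LiteLLMModel):

```python
from unittest import mock

class FakeLiteLLMModel:
    """Stand-in for LiteLLMModel, for illustration only."""
    def __init__(self, model_id: str, client_args: dict):
        self.model_id = model_id
        self.client_args = client_args

    def schema_check_passes(self, supports_response_schema) -> bool:
        # Proxy usage bypasses the capability probe entirely.
        if self.client_args.get("use_litellm_proxy") or self.client_args.get("use_proxy"):
            return True
        return supports_response_schema(self.model_id)

def test_structured_output_with_proxy_bypasses_support_check():
    model = FakeLiteLLMModel("openai/some-model", {"use_litellm_proxy": True})
    # Even when the capability probe would say "unsupported", the proxy path proceeds.
    probe = mock.Mock(return_value=False)
    assert model.schema_check_passes(probe) is True
    probe.assert_not_called()

test_structured_output_with_proxy_bypasses_support_check()
```

The key assertion is that the probe is never even consulted when a proxy is configured.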

Related Issues

Closes #862

Documentation PR

N/A

Type of Change

Bug fix

Testing

How have you tested the change? Verify that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli

  • I ran hatch run prepare
  • I ran hatch fmt --formatter locally.
  • I ran hatch fmt --linter and pre-commit run --all-files.
  • I ran unit tests locally:

pytest tests/strands/models/test_litellm.py::test_structured_output → passed

pytest tests/strands/models/test_litellm.py::test_structured_output_with_proxy_bypasses_support_check → passed

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@dbschmigelski
Member

Hi, thanks for raising this. I think I want to go in a different direction, though (I'm working on this currently, actually).

I believe we should never fail: use the native response_format when supports_response_schema() returns True, and otherwise switch to the tool-based approach we use in the Bedrock provider.

As for proxy usage, I will create an issue and a tracking issue for us regarding litellm. I think this is actually a bug on their side: the method does not propagate the proxy fields.
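The never-fail fallback proposed above could be sketched like this (function name and return values are illustrative only, not the eventual implementation):

```python
def choose_structured_output_path(model_id: str, supports_response_schema) -> str:
    """Never raise: prefer the native response_format path when the capability
    probe reports support, otherwise emulate structured output via a forced
    tool call, as the Bedrock provider does. Names here are illustrative."""
    if supports_response_schema(model_id):
        return "native_response_format"
    return "tool_call_fallback"

print(choose_structured_output_path("some/model", lambda m: True))   # native_response_format
print(choose_structured_output_path("some/model", lambda m: False))  # tool_call_fallback
```

Under this design the proxy question becomes moot for correctness: an unsupported (or unreported) model simply takes the tool-call path instead of raising ValueError.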

@Ratish1
Contributor Author

Ratish1 commented Sep 24, 2025

> Hi, thanks for raising this. I think I want to go in a different direction though (I'm working on this currently actually)
>
> I believe we should never fail. We use the native response_format if we return True, else we switch to the tool approach we see in the Bedrock provider.
>
> As for proxy usage, I will create an issue and a tracking issue for us regarding litellm. I think this is actually a bug in their side that the method does not propagate the proxy fields.

Thanks for letting me know. Let me know if you need any help with anything else.

Successfully merging this pull request may close these issues.

[BUG] structured_output with LiteLLM Proxy raises ValueError: Model does not support response_format