### Please read this first
- I have read the custom model provider docs, including the Common issues section.
- I searched for related issues and did not find a matching report.
### Describe the question
`LitellmModel` sends an OpenAI-style dict-valued `reasoning_effort` to non-OpenAI providers when `model_settings.reasoning.summary` is set.
In `src/agents/extensions/models/litellm_model.py`, the current logic builds `{"effort": "low", "summary": "auto"}` whenever `reasoning.summary` is present.
That shape is fine for OpenAI-style reasoning controls, but it leaks into all LiteLLM providers. For Anthropic via LiteLLM, this breaks behavior downstream: LiteLLM's Anthropic transformer only handles string-valued `reasoning_effort`, so the value is silently ignored instead of being mapped to `thinking.budget_tokens`.
I originally filed this in LiteLLM, but after tracing both sides, the primary ownership seems to be here because the Agents SDK is the component deciding to send the OpenAI-only dict shape to non-OpenAI providers.
Related LiteLLM issue for context: BerriAI/litellm#24599
### Debug information
- Agents SDK version: `0.9.3`
- Python version: `3.12.4`
- Also verified the same code path still exists on `main` at `8fdb45da`
### Repro steps
Minimal source-level repro:
- Configure a LiteLLM-backed non-OpenAI model, for example Anthropic.
- Set `model_settings=ModelSettings(reasoning=Reasoning(effort="low", summary="auto"))`.
- Use `LitellmModel`.
- Observe that `LitellmModel` constructs a dict-valued `reasoning_effort` for the provider-agnostic LiteLLM call:
```python
# src/agents/extensions/models/litellm_model.py
if model_settings.reasoning.summary is not None:
    reasoning_effort = {
        "effort": model_settings.reasoning.effort,
        "summary": model_settings.reasoning.summary,
    }
```

Current upstream location: `src/agents/extensions/models/litellm_model.py:446-455` on `main`.
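To see exactly what shape reaches the provider-agnostic call, the branch above can be mirrored in a standalone sketch (the `Reasoning` dataclass below is a stub standing in for the SDK's settings type, and `build_reasoning_effort` is a hypothetical extraction of the logic, not the SDK's actual function):

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Reasoning:
    # Stub mirroring the fields the SDK logic reads; not the real import.
    effort: Optional[str] = None
    summary: Optional[str] = None

def build_reasoning_effort(reasoning: Optional[Reasoning]) -> Union[str, dict, None]:
    """Mirrors the current branching: a dict leaks out whenever summary is set."""
    if reasoning is None:
        return None
    if reasoning.summary is not None:
        return {"effort": reasoning.effort, "summary": reasoning.summary}
    return reasoning.effort

# With summary set, the provider-agnostic call receives an OpenAI-only dict:
print(build_reasoning_effort(Reasoning(effort="low", summary="auto")))
# Without summary, it receives the portable string form:
print(build_reasoning_effort(Reasoning(effort="low")))
```

Running this shows the two shapes side by side; only the second one is safe to hand to an arbitrary LiteLLM provider.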
Relevant downstream effect in LiteLLM Anthropic for context:
- LiteLLM Anthropic only maps `reasoning_effort` when it is a string, in `litellm/llms/anthropic/chat/transformation.py`:

```python
elif param == "reasoning_effort" and isinstance(value, str):
    optional_params["thinking"] = AnthropicConfig._map_reasoning_effort(...)
```

So the dict shape emitted here is not portable across providers.
### Expected behavior
One of these should happen:

- `LitellmModel` should only send dict-valued `reasoning_effort` for providers/models that explicitly support that OpenAI-style shape.
- For non-OpenAI LiteLLM providers, `LitellmModel` should degrade gracefully by sending only the string effort and dropping the unsupported `summary`.
- If neither is possible, the SDK should raise a clear validation error when `reasoning.summary` is set for a provider path that cannot support it.
The current behavior silently creates a cross-provider mismatch.
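The graceful-degradation option could look roughly like the following (a sketch under stated assumptions, not a tested patch; the `Reasoning` stub and the `provider_supports_openai_reasoning` flag are hypothetical names for illustration):

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Reasoning:
    # Stub standing in for the SDK's settings type.
    effort: Optional[str] = None
    summary: Optional[str] = None

def build_reasoning_effort(
    reasoning: Optional[Reasoning],
    provider_supports_openai_reasoning: bool,
) -> Union[str, dict, None]:
    """Only emit the OpenAI-style dict when the provider is known to accept it;
    otherwise fall back to the portable string effort and drop summary."""
    if reasoning is None:
        return None
    if provider_supports_openai_reasoning and reasoning.summary is not None:
        return {"effort": reasoning.effort, "summary": reasoning.summary}
    return reasoning.effort

# Non-OpenAI path degrades to the string form that providers like Anthropic accept:
print(build_reasoning_effort(Reasoning(effort="low", summary="auto"), False))
# OpenAI-style path keeps the richer dict:
print(build_reasoning_effort(Reasoning(effort="low", summary="auto"), True))
```

Whether the capability flag lives in the SDK or is derived from LiteLLM's provider metadata is an open design question; the point is only that the dict shape should be gated rather than sent unconditionally.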