
LitellmModel sends dict-valued reasoning_effort to non-OpenAI providers when reasoning.summary is set #2778

@bonk1t

Description


Please read this first

  • I have read the custom model provider docs, including the Common issues section.
  • I searched for related issues and did not find a matching report.

Describe the question

LitellmModel sends an OpenAI-style dict-valued reasoning_effort to non-OpenAI providers when model_settings.reasoning.summary is set.

In src/agents/extensions/models/litellm_model.py, the current logic builds:

{"effort": "low", "summary": "auto"}

whenever reasoning.summary is present.

That shape is fine for OpenAI-style reasoning controls, but it leaks into all LiteLLM providers. For Anthropic via LiteLLM, this leads to broken behavior downstream: LiteLLM's Anthropic transformer only handles string-valued reasoning_effort, so the value gets ignored instead of becoming thinking.budget_tokens.

I originally filed this in LiteLLM, but after tracing both sides, the primary ownership seems to be here because the Agents SDK is the component deciding to send the OpenAI-only dict shape to non-OpenAI providers.

Related LiteLLM issue for context: BerriAI/litellm#24599

Debug information

  • Agents SDK version: 0.9.3
  • Python version: 3.12.4
  • Also verified the same code path still exists on main at 8fdb45da

Repro steps

Minimal source-level repro:

  1. Configure a LiteLLM-backed non-OpenAI model, for example Anthropic.
  2. Set model_settings=ModelSettings(reasoning=Reasoning(effort="low", summary="auto")).
  3. Use LitellmModel.
  4. Observe that LitellmModel constructs dict-valued reasoning_effort for the provider-agnostic LiteLLM call:
# src/agents/extensions/models/litellm_model.py
if model_settings.reasoning.summary is not None:
    reasoning_effort = {
        "effort": model_settings.reasoning.effort,
        "summary": model_settings.reasoning.summary,
    }

Current upstream location:

  • src/agents/extensions/models/litellm_model.py:446-455 on main

Relevant downstream effect in LiteLLM Anthropic for context:

  • LiteLLM Anthropic only maps reasoning_effort when it is a string in litellm/llms/anthropic/chat/transformation.py
elif param == "reasoning_effort" and isinstance(value, str):
    optional_params["thinking"] = AnthropicConfig._map_reasoning_effort(...)

So the dict shape emitted here is not portable across providers.
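A toy simplification of that transformer branch shows why the drop is silent: the dict fails the isinstance check and the parameter mapping returns nothing (the thinking value below is a placeholder, not LiteLLM's real mapping output):

```python
# Toy simplification of LiteLLM's Anthropic parameter mapping
# (hypothetical; the real mapping computes a thinking budget).
def map_anthropic_params(param: str, value) -> dict:
    optional_params: dict = {}
    if param == "reasoning_effort" and isinstance(value, str):
        # Only the string form is translated into a thinking config.
        optional_params["thinking"] = {"type": "enabled"}  # placeholder
    return optional_params

print(map_anthropic_params("reasoning_effort", "low"))
# {'thinking': {'type': 'enabled'}}
print(map_anthropic_params("reasoning_effort", {"effort": "low", "summary": "auto"}))
# {}  <- dict value is silently ignored, no error raised
```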

Expected behavior

One of these should happen:

  1. LitellmModel should only send dict-valued reasoning_effort for providers/models that explicitly support that OpenAI-style shape.
  2. For non-OpenAI LiteLLM providers, LitellmModel should degrade gracefully by sending only the string effort and dropping the unsupported summary, ideally with a warning.
  3. If neither is possible, the SDK should raise a clear validation error when reasoning.summary is set for a provider path that cannot support it.

The current behavior silently creates a cross-provider mismatch.
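As a rough illustration of option 2 in the list above, the fix could gate the dict shape on the provider. Everything here is a sketch: is_openai_model and its prefix check are illustrative assumptions, not the SDK's actual helpers or detection logic:

```python
# Hedged sketch of option 2: fall back to the portable string form for
# non-OpenAI providers. `is_openai_model` is a hypothetical helper.
import warnings

def select_reasoning_effort(effort, summary, model: str):
    is_openai_model = model.startswith(("openai/", "gpt-"))  # assumed check
    if summary is not None:
        if is_openai_model:
            # OpenAI-style providers accept the dict shape.
            return {"effort": effort, "summary": summary}
        # Degrade gracefully instead of silently sending an unsupported shape.
        warnings.warn(
            f"reasoning.summary is not supported for {model!r}; "
            "sending string-valued reasoning_effort only."
        )
    return effort

print(select_reasoning_effort("low", "auto", "anthropic/claude-sonnet-4"))
# low  (plus a UserWarning)
```

The warning keeps the failure visible, which also covers the spirit of option 3 without hard-failing existing configurations.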
