fix: preserve Gemini thought_signature in LiteLLM multi-turn tool calls#2129

Merged
opieter-aws merged 1 commit into strands-agents:main from opieter-aws:pr-1982 on Apr 16, 2026

Conversation

@opieter-aws
Contributor

Closes #1764

Co-authored-by: giulio-leone giulio-leone@users.noreply.github.com

Description

Supersedes #1982, which was accidentally closed after a bad rebase wiped the commit history.

Preserves Gemini thought_signature through multi-turn tool calls when using the LiteLLM model provider. LiteLLM encodes the signature into the tool call ID using a __thought__ separator; this PR extracts it into reasoningSignature on inbound chunks and re-encodes it on outbound request messages.

Changes to src/strands/models/litellm.py:

  • Override format_chunk to extract thought signatures from tool call content_start events
  • Override format_request_message_tool_call to re-encode reasoningSignature back into the tool call ID
  • Add _extract_thought_signature static helper

Related Issues

Documentation PR

N/A — bugfix with no public API changes.

Type of Change

Bug fix

Testing

  • 8 new unit tests covering signature extraction, encoding, double-encode prevention, and full round-trip

  • 1 new integration test for Gemini thinking model tool calls (requires GOOGLE_API_KEY)

  • All 48 existing litellm unit tests pass

  • I ran hatch run prepare

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@codecov

codecov bot commented Apr 15, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@github-actions

Assessment: Comment

Clean, well-structured bugfix that correctly preserves Gemini thought signatures through the LiteLLM multi-turn tool call pipeline. The dual extraction strategy (structured field + ID-based fallback) is a pragmatic approach given the dependency on LiteLLM internals.

Review Details
  • Consistency: Minor **kwargs forwarding gap on the fallback super().format_chunk() call — pre-existing but worth fixing while in the area.
  • Maintainability: The hardcoded _THOUGHT_SIGNATURE_SEPARATOR mirrors a LiteLLM internal constant; adding a reference to the source constant would help future maintenance.
  • Integration test robustness: The assertion checks for specific tool output strings in the model's natural language response, which could be fragile; asserting successful completion may be more resilient.

Well-documented code with thorough unit tests covering extraction, encoding, double-encode prevention, and round-trip scenarios.

@github-actions

Assessment: Comment

Well-scoped bugfix that correctly preserves Gemini thought_signature through the LiteLLM multi-turn tool call pipeline. The dual extraction strategy (structured provider_specific_fields → ID-based fallback) is a pragmatic design, and the round-trip is well integrated with the existing reasoningSignature plumbing in the streaming layer.

Review Details
  • Bug: New code on line 305 drops **kwargs when calling super().format_chunk(event). Since the method signature accepts **kwargs for extensibility, these should be forwarded.
  • Maintainability: The hardcoded _THOUGHT_SIGNATURE_SEPARATOR mirrors a LiteLLM internal constant. Adding a specific source reference would help future maintenance when LiteLLM evolves.
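The `**kwargs` forwarding gap flagged in the first bullet can be illustrated with a minimal stand-in (class shapes simplified for the example, not the real strands classes):

```python
# Minimal illustration of the **kwargs forwarding bug flagged above.
# BaseModel / LiteLLMModel here are simplified stand-ins.

class BaseModel:
    def format_chunk(self, event: dict, **kwargs) -> dict:
        # The base implementation may consume extension kwargs.
        return {"event": event, "extras": kwargs}

class LiteLLMModel(BaseModel):
    def format_chunk(self, event: dict, **kwargs) -> dict:
        # Buggy fallback: super().format_chunk(event) silently drops kwargs.
        # Fixed fallback forwards them so extensibility is preserved:
        return super().format_chunk(event, **kwargs)
```

With the fix, any keyword arguments passed by callers reach the base implementation instead of being silently discarded at the override boundary.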

Test coverage is thorough — 8 unit tests covering extraction, encoding, double-encode prevention, and full round-trip, plus a well-guarded integration test.

Unshure previously approved these changes Apr 15, 2026
@github-actions

Assessment: Comment

Well-scoped bugfix with solid test coverage and clean integration with the existing reasoningSignature pipeline. The latest revision correctly addresses Unshure's CI feedback by removing skipif and adding GOOGLE_API_KEY to required_providers.

Review Details
  • ID consistency: When _extract_thought_signature extracts the signature from provider_specific_fields only (ID has no separator), format_request_message_tool_call encodes the signature into the assistant message's tool call ID, but the inherited format_request_tool_message still uses the original un-encoded toolUseId for the tool result. This creates a potential mismatch. In practice LiteLLM always sets both, so this is a latent risk rather than a current bug — but worth documenting or guarding against.
  • Naming convention: Confirmed _ prefix on @staticmethod is consistent with repo conventions (_validate_gemini_tools, _format_tool_choice, _get_default_model_with_warning).

The dual extraction strategy, double-encode prevention, and round-trip test give good confidence in the fix.

@opieter-aws opieter-aws enabled auto-merge (squash) April 16, 2026 16:30
@opieter-aws opieter-aws merged commit 6697d12 into strands-agents:main Apr 16, 2026
20 of 21 checks passed

Successfully merging this pull request may close these issues.

[BUG] missing thought signature preservation for thinking models with LiteLLM Model Provider

3 participants