
Conversation

@thesynapses

Fixes #3676

MCP tool responses arrive as JSON strings but were being double-serialized by _safe_json_serialize(), creating triple-nested JSON that prevented Claude and GPT from parsing tool results.

Example of the bug:
'{"content": [{"type": "text", "text": "{\n \"type\"..."}]}'

This fix adds an isinstance(str) check before serialization. If the response is already a string (from MCP or other sources), it's used directly. Otherwise, it's serialized normally.

Impact: Without this fix, agents using LiteLLM with MCP tools would successfully call tools but fail to present results to users, appearing to hang or produce incomplete responses.

Tested with Claude Sonnet 4.5 and GPT-5 via Azure OpenAI with MCP tools (Google Drive, HubSpot CRM) in a production multi-agent system.


Link to Issue or Description of Change

  1. Link to an existing issue: #3676

Testing Plan

Problem:
MCP tool responses arrive as JSON strings. The code called _safe_json_serialize() on these already-serialized strings, producing triple-nested JSON like '{"content": [{"type": "text", "text": "{\n \"type\"..."}]}', which Claude and GPT could not parse as tool results.
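The failure mode can be reproduced with the standard library alone: applying json.dumps to a string that is already serialized JSON wraps it in another layer of quoting, so the model receives a JSON string literal instead of an object (plain json.dumps stands in for _safe_json_serialize here):

```python
import json

# An MCP-style tool response: already a JSON string.
mcp_response = json.dumps({"content": [{"type": "text", "text": "hello"}]})

# Serializing it again quotes and escapes the whole thing, so a
# single json.loads now yields a str rather than the original dict.
double = json.dumps(mcp_response)

parsed_once = json.loads(double)
print(type(parsed_once))  # <class 'str'>, not dict
```

A consumer would have to call json.loads twice to recover the object, which is exactly what the downstream models do not do.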

Solution:
Added an isinstance(..., str) check before serialization in _content_to_message_param() (line 369).
If the response is already a string, it is used directly; otherwise it is serialized normally.

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.
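The guard itself is easy to pin down in a standalone test. The sketch below uses a json.dumps-based stand-in for _safe_json_serialize (an assumption, not the actual ADK helper) and asserts both branches:

```python
import json
from typing import Any


def _safe_json_serialize(obj: Any) -> str:
    # Stand-in for the ADK helper (assumed behavior): serialize,
    # falling back to str() for non-JSON-serializable objects.
    try:
        return json.dumps(obj, ensure_ascii=False)
    except (TypeError, ValueError):
        return str(obj)


def to_tool_content(response: Any) -> str:
    # The fix under test: pass strings through untouched,
    # serialize everything else exactly once.
    return response if isinstance(response, str) else _safe_json_serialize(response)


# Pre-serialized (MCP-style) responses are passed through unchanged.
assert to_tool_content('{"ok": true}') == '{"ok": true}'
# Non-string responses are still serialized, preserving backward compatibility.
assert json.loads(to_tool_content({"ok": True})) == {"ok": True}
```

to_tool_content is a hypothetical name for illustration; in the actual change the expression lives inline in _content_to_message_param().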

Manual End-to-End (E2E) Tests:

Setup:

  • Multi-agent system with ADK 1.19.0 + LiteLLM wrapper
  • Claude Sonnet 4.5 via Vertex AI (vertex_ai/claude-sonnet-4-5@20250929)
  • GPT-5 via Azure OpenAI (azure/gpt-5-openai-latest)
  • MCP tools: Google Drive agent, HubSpot CRM agent
  • Gluon Link (quanutmzero) secure MCP gateway for intent-based governance

Test Cases:

  1. Google Drive: List files, search queries
  2. HubSpot CRM: Company listing

Before Fix:

  • Log line 3355: Triple-nested JSON '{"content": [{"type": "text", "text": "{\\n..."}]}'
  • Tools executed successfully, but results were never displayed to the user
  • Agents appeared to hang after tool calls

After Fix:

  • Clean single-level JSON sent to LiteLLM
  • Tool results are properly parsed and displayed
  • Complete formatted responses (markdown tables) rendered correctly
  • Both Claude and GPT-5 successfully present tool outputs

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

Additional context

This fix is critical for production systems using MCP tools with LiteLLM models. The bug affects any pre-serialized JSON responses, not just MCP. The fix maintains backward compatibility with non-string responses while properly handling already-serialized strings.

@gemini-code-assist
Contributor

Summary of Changes

Hello @thesynapses, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug where tool responses, especially from MCP tools, were being double-serialized into triple-nested JSON structures. This prevented large language models from correctly parsing the tool results, leading to agents appearing unresponsive or failing to display output. The fix introduces a conditional check to ensure that only non-string responses are serialized, thereby guaranteeing proper parsing and display of tool outputs for a smoother agent experience.

Highlights

  • Prevented double JSON serialization: Introduced a check to prevent _safe_json_serialize() from double-serializing already-stringified JSON responses, which previously led to triple-nested JSON.
  • Improved LLM tool parsing: Ensures that large language models like Claude and GPT can correctly parse and display results from tool calls, particularly those from MCP tools, resolving issues where agents appeared to hang.
  • Conditional serialization logic: Implemented an isinstance(str) check for part.function_response.response; if it's already a string, it's used directly, otherwise it undergoes normal serialization.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request effectively resolves a critical issue where MCP tool responses were being double-serialized, leading to parsing failures in downstream models. The fix is direct and correct, adding a check to see if the response is already a string before attempting serialization. The pull request description is excellent, clearly detailing the problem, the fix, and the testing performed. I have one suggestion to make the code slightly more concise, but the logic is sound.

Comment on lines +370 to 386

      # FIX: Check if response is already a string before serializing.
      # MCP tool responses come as JSON strings, but _safe_json_serialize was
      # double-serializing them (json.dumps on already-JSON strings), causing
      # triple-nested JSON like: '{"content": [{"type": "text", "text": "{\n \"type\"..."}]}'
      # This prevented Claude/GPT from parsing tool results correctly.
      response_content = (
          part.function_response.response
          if isinstance(part.function_response.response, str)
          else _safe_json_serialize(part.function_response.response)
      )
      tool_messages.append(
          ChatCompletionToolMessage(
              role="tool",
              tool_call_id=part.function_response.id,
    -         content=_safe_json_serialize(part.function_response.response),
    +         content=response_content,
          )
      )

Severity: medium

While the logic is correct, this implementation can be made more concise. The temporary response_content variable can be inlined directly into the ChatCompletionToolMessage constructor. Additionally, the detailed comment is great for a pull request description but could be summarized for better long-term readability within the code.

      # MCP tool responses can be pre-serialized JSON strings. Avoid
      # double-serializing them to prevent parsing issues by downstream models.
      tool_messages.append(
          ChatCompletionToolMessage(
              role="tool",
              tool_call_id=part.function_response.id,
              content=(
                  part.function_response.response
                  if isinstance(part.function_response.response, str)
                  else _safe_json_serialize(part.function_response.response)
              ),
          )
      )

@adk-bot added the models [Component] Issues related to model support label on Nov 23, 2025
@adk-bot
Collaborator

adk-bot commented Nov 23, 2025

Response from ADK Triaging Agent

Hello @thesynapses, thank you for creating this PR!

This is a great contribution. Could you please add unit tests for this change? This will help to ensure the quality of the code and prevent regressions.

This information will help reviewers to review your PR more efficiently. Thanks!

@ryanaiagent self-assigned this on Nov 25, 2025

Labels

models [Component] Issues related to model support


Development

Successfully merging this pull request may close these issues.

Double JSON Serialization of MCP Tool Responses in LiteLLM
