[BUG] JSON schema issue between CrewAI MCPServerAdapter and Bedrock LLM Claude / Gemini when using Tool Inputs #4472

@FrancisPrakash

Description

I have a CrewAI script that uses MCP tools. When running it with a Bedrock LLM (Claude) or with Gemini, I get an error related to the tools' JSON schema.

Steps to Reproduce

Run the code snippet shared below.

Expected behavior

None

Screenshots/Code snippets

from crewai import Agent, Task, Crew, Process, LLM
from crewai_tools import MCPServerAdapter
from dotenv import load_dotenv
import os

load_dotenv()

'''
llm = LLM(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    temperature=0,
)
'''

llm = LLM(
    model="gemini/gemini-2.5-flash",  # or gemini/gemini-2.0-flash
    api_key=os.getenv("GEMINI_API_KEY"),
)

server_params = {
    "url": "http://10.10.10.10/mcp",
    "transport": "streamable-http",
}

mcp_server_adapter = MCPServerAdapter(server_params)
mcp_tools = mcp_server_adapter.tools

my_agent = Agent(
    role="Firewall Configuration Analyst",
    goal="Use MCP tools to analyze and optimize firewall configurations.",
    backstory="An AI agent that specializes in firewall management using MCP tools.",
    tools=mcp_tools,
    verbose=True,
    llm=llm,
    max_iter=3,  # Limit to 3 iterations
)

my_task = Task(
    name="Firewall Analysis Task",
    description=(
        "{input} - From the user's input, gather info about the firewall hostname/IP address, "
        "identify what is required from the firewall, get the appropriate FortiGate CLI commands, "
        "and execute them using the most relevant tool. "
        "You MUST call the appropriate MCP tool with:\n"
        "- hostname: the firewall IP address\n"
        "- command: the CLI command string (optional)\n"
        "Show what is being sent to the MCP tool.\n"
        "Do not respond without calling the tool.\n"
    ),
    expected_output="Detailed analysis of the firewall configuration based on the user's input and the results from the MCP tools.",
    agent=my_agent,
)

crew = Crew(
    name="Firewall Configuration Crew",
    agents=[my_agent],
    tasks=[my_task],
    verbose=True,
    process=Process.sequential,
)

try:
    result = crew.kickoff(
        inputs={
            "input": "Firewall IP: 10.10.10.10. I want to check the current firewall status."
        }
    )
    print(result)
except Exception as e:
    print(f"Error during crew kickoff: {e}")

Operating System

Ubuntu 20.04

Python Version

3.12

crewAI Version

1.9.3

crewAI Tools Version

1.9.3

Virtual Environment

Venv

Evidence

Errors I observed:

When using Gemini LLM

ERROR:root:Google Gemini API error: 400 - value at properties.command must be a list
An unknown error occurred. Please check the details below.
Error details: 400 INVALID_ARGUMENT. {'error': {'code': 400, 'message': 'value at properties.command must be a list', 'status': 'INVALID_ARGUMENT'}}

When using an AWS Bedrock Claude LLM

ERROR:root:AWS Bedrock ClientError (ValidationException): The model returned the following errors: tools.0.input_schema: JSON schema is invalid - please consult https://json-schema.org or our documentation at https://docs.anthropic.com/en/docs/tool-use
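Both errors suggest the MCP server's tool input schema contains a construct the provider-side validators reject, such as a union type like `"type": ["string", "null"]` on the `command` property. As a temporary workaround (a minimal sketch, assuming that is indeed the offending construct; `sanitize_schema` is a hypothetical helper, not a CrewAI API), one could normalize each tool's schema dict before the agent is created, collapsing union types to a single type:

```python
def sanitize_schema(schema):
    """Recursively normalize a JSON schema dict so that every 'type'
    is a single string, collapsing unions like ["string", "null"]."""
    if isinstance(schema, list):
        return [sanitize_schema(item) for item in schema]
    if not isinstance(schema, dict):
        return schema
    out = {}
    for key, value in schema.items():
        if key == "type" and isinstance(value, list):
            # Keep the first non-"null" entry; fall back to "string".
            non_null = [t for t in value if t != "null"]
            out["type"] = non_null[0] if non_null else "string"
        else:
            out[key] = sanitize_schema(value)
    return out


# Example: a schema shaped like the one the errors point at.
raw = {
    "type": "object",
    "properties": {
        "hostname": {"type": "string"},
        "command": {"type": ["string", "null"]},
    },
    "required": ["hostname"],
}
print(sanitize_schema(raw)["properties"]["command"])  # {'type': 'string'}
```

Where exactly to apply this depends on how MCPServerAdapter exposes each tool's schema internally, so this only illustrates the transformation itself; the real fix likely belongs in the adapter's schema translation layer.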

Possible Solution

None

Additional context

None
