[BUG] An error occurred (ValidationException) when calling the Converse operation: This model doesn't support the toolConfig.toolChoice.any field. Remove toolConfig.toolChoice.any and try again #1241

@azaylamba

Description

Checks

  • I have updated to the latest minor and patch version of Strands
  • I have checked the documentation and this is not expected behavior
  • I have searched ./issues and there are no duplicates of my issue

Strands Version

1.18.0

Python Version

3.13

Operating System

macOS

Installation Method

pip

Steps to Reproduce

  1. Request structured output by passing a structured output model to the agent
  2. Use a Llama model via the BedrockModel class
  3. When the model does not return output in the structured format on the first attempt, this error is thrown
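
Based on the error message and the `api_params` shown in the traceback below, the Converse request appears to include a `toolConfig.toolChoice.any` field, which Bedrock rejects for Meta Llama models. A minimal sketch of the offending request shape (the model ID matches the traceback; the message text and the empty tool list are illustrative placeholders):

```python
# Illustrative shape of the Converse request that Bedrock rejects.
# Only modelId and the toolChoice field are taken from the traceback;
# everything else is a placeholder.
request = {
    "modelId": "us.meta.llama4-maverick-17b-instruct-v1:0",
    "messages": [
        {"role": "user", "content": [{"text": "Calculate 2 + 2 using the calculator tool"}]}
    ],
    "inferenceConfig": {"maxTokens": 2048},
    "toolConfig": {
        "tools": [],  # tool specs omitted for brevity
        # Bedrock raises ValidationException for this field on Llama models:
        "toolChoice": {"any": {}},
    },
}
print("toolChoice" in request["toolConfig"])  # → True
```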

Expected Behavior

A validation error should not be thrown when structured output is requested.

Actual Behavior

/Users/ajay/Documents/GitHub/azaylamba/strands-sdk-python/tests_integ/test_structured_output_bedrock_llama_models.py::TestBedrockLlamaModelsToolUsageWithStructuredOutput::test_multi_turn_calculator_tool_use_with_structured_output failed: agent = <strands.agent.agent.Agent object at 0x1095882f0>
invocation_state = {'agent': <strands.agent.agent.Agent object at 0x1095882f0>, 'event_loop_cycle_id': UUID('3a436be6-4599-4bf4-8f3a-d525...ce_state=[], is_remote=False)), 'event_loop_cycle_trace': <strands.telemetry.metrics.Trace object at 0x108de5fd0>, ...}
structured_output_context = <strands.tools.structured_output._structured_output_context.StructuredOutputContext object at 0x10958a270>

    async def event_loop_cycle(
        agent: "Agent",
        invocation_state: dict[str, Any],
        structured_output_context: StructuredOutputContext | None = None,
    ) -> AsyncGenerator[TypedEvent, None]:
        """Execute a single cycle of the event loop.
    
        This core function processes a single conversation turn, handling model inference, tool execution, and error
        recovery. It manages the entire lifecycle of a conversation turn, including:
    
        1. Initializing cycle state and metrics
        2. Checking execution limits
        3. Processing messages with the model
        4. Handling tool execution requests
        5. Managing recursive calls for multi-turn tool interactions
        6. Collecting and reporting metrics
        7. Error handling and recovery
    
        Args:
            agent: The agent for which the cycle is being executed.
            invocation_state: Additional arguments including:
    
                - request_state: State maintained across cycles
                - event_loop_cycle_id: Unique ID for this cycle
                - event_loop_cycle_span: Current tracing Span for this cycle
            structured_output_context: Optional context for structured output management.
    
        Yields:
            Model and tool stream events. The last event is a tuple containing:
    
                - StopReason: Reason the model stopped generating (e.g., "tool_use")
                - Message: The generated message from the model
                - EventLoopMetrics: Updated metrics for the event loop
                - Any: Updated request state
    
        Raises:
            EventLoopException: If an error occurs during execution
            ContextWindowOverflowException: If the input is too large for the model
        """
        structured_output_context = structured_output_context or StructuredOutputContext()
    
        # Initialize cycle state
        invocation_state["event_loop_cycle_id"] = uuid.uuid4()
    
        # Initialize state and get cycle trace
        if "request_state" not in invocation_state:
            invocation_state["request_state"] = {}
        attributes = {"event_loop_cycle_id": str(invocation_state.get("event_loop_cycle_id"))}
        cycle_start_time, cycle_trace = agent.event_loop_metrics.start_cycle(attributes=attributes)
        invocation_state["event_loop_cycle_trace"] = cycle_trace
    
        yield StartEvent()
        yield StartEventLoopEvent()
    
        # Create tracer span for this event loop cycle
        tracer = get_tracer()
        cycle_span = tracer.start_event_loop_cycle_span(
            invocation_state=invocation_state, messages=agent.messages, parent_span=agent.trace_span
        )
        invocation_state["event_loop_cycle_span"] = cycle_span
    
        # Skipping model invocation if in interrupt state as interrupts are currently only supported for tool calls.
        if agent._interrupt_state.activated:
            stop_reason: StopReason = "tool_use"
            message = agent._interrupt_state.context["tool_use_message"]
        # Skip model invocation if the latest message contains ToolUse
        elif _has_tool_use_in_latest_message(agent.messages):
            stop_reason = "tool_use"
            message = agent.messages[-1]
        else:
            model_events = _handle_model_execution(
                agent, cycle_span, cycle_trace, invocation_state, tracer, structured_output_context
            )
            async for model_event in model_events:
                if not isinstance(model_event, ModelStopReason):
                    yield model_event
    
            stop_reason, message, *_ = model_event["stop"]
            yield ModelMessageEvent(message=message)
    
        try:
            if stop_reason == "max_tokens":
                """
                Handle max_tokens limit reached by the model.
    
                When the model reaches its maximum token limit, this represents a potentially unrecoverable
                state where the model's response was truncated. By default, Strands fails hard with an
                MaxTokensReachedException to maintain consistency with other failure types.
                """
                raise MaxTokensReachedException(
                    message=(
                        "Agent has reached an unrecoverable state due to max_tokens limit. "
                        "For more information see: "
                        "https://strandsagents.com/latest/user-guide/concepts/agents/agent-loop/#maxtokensreachedexception"
                    )
                )
    
            if stop_reason == "tool_use":
                # Handle tool execution
                tool_events = _handle_tool_execution(
                    stop_reason,
                    message,
                    agent=agent,
                    cycle_trace=cycle_trace,
                    cycle_span=cycle_span,
                    cycle_start_time=cycle_start_time,
                    invocation_state=invocation_state,
                    tracer=tracer,
                    structured_output_context=structured_output_context,
                )
>               async for tool_event in tool_events:

src/strands/event_loop/event_loop.py:189: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
src/strands/event_loop/event_loop.py:532: in _handle_tool_execution
    async for event in events:
src/strands/event_loop/event_loop.py:277: in recurse_event_loop
    async for event in events:
src/strands/event_loop/event_loop.py:237: in event_loop_cycle
    async for typed_event in events:
src/strands/event_loop/event_loop.py:277: in recurse_event_loop
    async for event in events:
src/strands/event_loop/event_loop.py:152: in event_loop_cycle
    async for model_event in model_events:
src/strands/event_loop/event_loop.py:396: in _handle_model_execution
    raise e
src/strands/event_loop/event_loop.py:337: in _handle_model_execution
    async for event in stream_messages(
src/strands/event_loop/streaming.py:457: in stream_messages
    async for event in process_stream(chunks, start_time):
src/strands/event_loop/streaming.py:391: in process_stream
    async for chunk in chunks:
src/strands/models/bedrock.py:665: in stream
    await task
/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/threads.py:25: in to_thread
    return await loop.run_in_executor(None, func_call)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/concurrent/futures/thread.py:59: in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/Users/ajay/Library/Application Support/hatch/env/virtual/strands-agents/OioNP-Ga/strands-agents/lib/python3.13/site-packages/opentelemetry/instrumentation/threading/__init__.py:171: in wrapped_func
    return original_func(*func_args, **func_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
src/strands/models/bedrock.py:786: in _stream
    raise e
src/strands/models/bedrock.py:735: in _stream
    response = self.client.converse(**request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/Users/ajay/Library/Application Support/hatch/env/virtual/strands-agents/OioNP-Ga/strands-agents/lib/python3.13/site-packages/botocore/client.py:602: in _api_call
    return self._make_api_call(operation_name, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/Users/ajay/Library/Application Support/hatch/env/virtual/strands-agents/OioNP-Ga/strands-agents/lib/python3.13/site-packages/botocore/context.py:123: in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <botocore.client.BedrockRuntime object at 0x1095881a0>
operation_name = 'Converse'
api_params = {'inferenceConfig': {'maxTokens': 2048}, 'messages': [{'content': [{'text': 'Calculate 2 + 2 using the calculator tool...":2,"b":2}}.'}], 'role': 'assistant'}, ...], 'modelId': 'us.meta.llama4-maverick-17b-instruct-v1:0', 'system': [], ...}

    @with_current_context()
    def _make_api_call(self, operation_name, api_params):
        operation_model = self._service_model.operation_model(operation_name)
        service_name = self._service_model.service_name
        history_recorder.record(
            'API_CALL',
            {
                'service': service_name,
                'operation': operation_name,
                'params': api_params,
            },
        )
        if operation_model.deprecated:
            logger.debug(
                'Warning: %s.%s() is deprecated', service_name, operation_name
            )
        request_context = {
            'client_region': self.meta.region_name,
            'client_config': self.meta.config,
            'has_streaming_input': operation_model.has_streaming_input,
            'auth_type': operation_model.resolved_auth_type,
            'unsigned_payload': operation_model.unsigned_payload,
            'auth_options': self._service_model.metadata.get('auth'),
        }
    
        api_params = self._emit_api_params(
            api_params=api_params,
            operation_model=operation_model,
            context=request_context,
        )
        (
            endpoint_url,
            additional_headers,
            properties,
        ) = self._resolve_endpoint_ruleset(
            operation_model, api_params, request_context
        )
        if properties:
            # Pass arbitrary endpoint info with the Request
            # for use during construction.
            request_context['endpoint_properties'] = properties
        request_dict = self._convert_to_request_dict(
            api_params=api_params,
            operation_model=operation_model,
            endpoint_url=endpoint_url,
            context=request_context,
            headers=additional_headers,
        )
        resolve_checksum_context(request_dict, operation_model, api_params)
    
        service_id = self._service_model.service_id.hyphenize()
        handler, event_response = self.meta.events.emit_until_response(
            f'before-call.{service_id}.{operation_name}',
            model=operation_model,
            params=request_dict,
            request_signer=self._request_signer,
            context=request_context,
        )
    
        if event_response is not None:
            http, parsed_response = event_response
        else:
            maybe_compress_request(
                self.meta.config, request_dict, operation_model
            )
            apply_request_checksum(request_dict)
            http, parsed_response = self._make_request(
                operation_model, request_dict, request_context
            )
    
        self.meta.events.emit(
            f'after-call.{service_id}.{operation_name}',
            http_response=http,
            parsed=parsed_response,
            model=operation_model,
            context=request_context,
        )
    
        if http.status_code >= 300:
            error_info = parsed_response.get("Error", {})
            error_code = request_context.get(
                'error_code_override'
            ) or error_info.get("Code")
            error_class = self.exceptions.from_code(error_code)
>           raise error_class(parsed_response, operation_name)
E           botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the Converse operation: This model doesn't support the toolConfig.toolChoice.any field. Remove toolConfig.toolChoice.any and try again.
E           └ Bedrock region: us-east-1
E           └ Model id: us.meta.llama4-maverick-17b-instruct-v1:0

/Users/ajay/Library/Application Support/hatch/env/virtual/strands-agents/OioNP-Ga/strands-agents/lib/python3.13/site-packages/botocore/client.py:1078: ValidationException

The above exception was the direct cause of the following exception:

self = <tests_integ.test_structured_output_bedrock_llama_models.TestBedrockLlamaModelsToolUsageWithStructuredOutput object at 0x108dd8550>

    def test_multi_turn_calculator_tool_use_with_structured_output(self):
        """Test tool usage with structured output."""
        model = BedrockModel(
            model_id="us.meta.llama4-maverick-17b-instruct-v1:0",
            region_name="us-east-1",
            max_tokens=2048,
            streaming=False,
        )
        agent = Agent(model=model, tools=[calculator])
    
>       result = agent("Calculate 2 + 2 using the calculator tool", structured_output_model=MathResult)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

tests_integ/test_structured_output_bedrock_llama_models.py:55: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
src/strands/agent/agent.py:349: in __call__
    return run_async(
src/strands/_async.py:33: in run_async
    return future.result()
           ^^^^^^^^^^^^^^^
/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/concurrent/futures/_base.py:456: in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/concurrent/futures/_base.py:401: in __get_result
    raise self._exception
/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/concurrent/futures/thread.py:59: in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/Users/ajay/Library/Application Support/hatch/env/virtual/strands-agents/OioNP-Ga/strands-agents/lib/python3.13/site-packages/opentelemetry/instrumentation/threading/__init__.py:171: in wrapped_func
    return original_func(*func_args, **func_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
src/strands/_async.py:28: in execute
    return asyncio.run(execute_async())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py:195: in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py:118: in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py:719: in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
src/strands/_async.py:25: in execute_async
    return await async_func()
           ^^^^^^^^^^^^^^^^^^
src/strands/agent/agent.py:392: in invoke_async
    async for event in events:
src/strands/agent/agent.py:588: in stream_async
    async for event in events:
src/strands/agent/agent.py:636: in _run_loop
    async for event in events:
src/strands/agent/agent.py:684: in _execute_event_loop_cycle
    async for event in events:
src/strands/event_loop/event_loop.py:189: in event_loop_cycle
    async for tool_event in tool_events:
src/strands/event_loop/event_loop.py:532: in _handle_tool_execution
    async for event in events:
src/strands/event_loop/event_loop.py:277: in recurse_event_loop
    async for event in events:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

agent = <strands.agent.agent.Agent object at 0x1095882f0>
invocation_state = {'agent': <strands.agent.agent.Agent object at 0x1095882f0>, 'event_loop_cycle_id': UUID('3a436be6-4599-4bf4-8f3a-d525...ce_state=[], is_remote=False)), 'event_loop_cycle_trace': <strands.telemetry.metrics.Trace object at 0x108de5fd0>, ...}
structured_output_context = <strands.tools.structured_output._structured_output_context.StructuredOutputContext object at 0x10958a270>

    async def event_loop_cycle(
        agent: "Agent",
        invocation_state: dict[str, Any],
        structured_output_context: StructuredOutputContext | None = None,
    ) -> AsyncGenerator[TypedEvent, None]:
        """Execute a single cycle of the event loop.
    
        This core function processes a single conversation turn, handling model inference, tool execution, and error
        recovery. It manages the entire lifecycle of a conversation turn, including:
    
        1. Initializing cycle state and metrics
        2. Checking execution limits
        3. Processing messages with the model
        4. Handling tool execution requests
        5. Managing recursive calls for multi-turn tool interactions
        6. Collecting and reporting metrics
        7. Error handling and recovery
    
        Args:
            agent: The agent for which the cycle is being executed.
            invocation_state: Additional arguments including:
    
                - request_state: State maintained across cycles
                - event_loop_cycle_id: Unique ID for this cycle
                - event_loop_cycle_span: Current tracing Span for this cycle
            structured_output_context: Optional context for structured output management.
    
        Yields:
            Model and tool stream events. The last event is a tuple containing:
    
                - StopReason: Reason the model stopped generating (e.g., "tool_use")
                - Message: The generated message from the model
                - EventLoopMetrics: Updated metrics for the event loop
                - Any: Updated request state
    
        Raises:
            EventLoopException: If an error occurs during execution
            ContextWindowOverflowException: If the input is too large for the model
        """
        structured_output_context = structured_output_context or StructuredOutputContext()
    
        # Initialize cycle state
        invocation_state["event_loop_cycle_id"] = uuid.uuid4()
    
        # Initialize state and get cycle trace
        if "request_state" not in invocation_state:
            invocation_state["request_state"] = {}
        attributes = {"event_loop_cycle_id": str(invocation_state.get("event_loop_cycle_id"))}
        cycle_start_time, cycle_trace = agent.event_loop_metrics.start_cycle(attributes=attributes)
        invocation_state["event_loop_cycle_trace"] = cycle_trace
    
        yield StartEvent()
        yield StartEventLoopEvent()
    
        # Create tracer span for this event loop cycle
        tracer = get_tracer()
        cycle_span = tracer.start_event_loop_cycle_span(
            invocation_state=invocation_state, messages=agent.messages, parent_span=agent.trace_span
        )
        invocation_state["event_loop_cycle_span"] = cycle_span
    
        # Skipping model invocation if in interrupt state as interrupts are currently only supported for tool calls.
        if agent._interrupt_state.activated:
            stop_reason: StopReason = "tool_use"
            message = agent._interrupt_state.context["tool_use_message"]
        # Skip model invocation if the latest message contains ToolUse
        elif _has_tool_use_in_latest_message(agent.messages):
            stop_reason = "tool_use"
            message = agent.messages[-1]
        else:
            model_events = _handle_model_execution(
                agent, cycle_span, cycle_trace, invocation_state, tracer, structured_output_context
            )
            async for model_event in model_events:
                if not isinstance(model_event, ModelStopReason):
                    yield model_event
    
            stop_reason, message, *_ = model_event["stop"]
            yield ModelMessageEvent(message=message)
    
        try:
            if stop_reason == "max_tokens":
                """
                Handle max_tokens limit reached by the model.
    
                When the model reaches its maximum token limit, this represents a potentially unrecoverable
                state where the model's response was truncated. By default, Strands fails hard with an
                MaxTokensReachedException to maintain consistency with other failure types.
                """
                raise MaxTokensReachedException(
                    message=(
                        "Agent has reached an unrecoverable state due to max_tokens limit. "
                        "For more information see: "
                        "https://strandsagents.com/latest/user-guide/concepts/agents/agent-loop/#maxtokensreachedexception"
                    )
                )
    
            if stop_reason == "tool_use":
                # Handle tool execution
                tool_events = _handle_tool_execution(
                    stop_reason,
                    message,
                    agent=agent,
                    cycle_trace=cycle_trace,
                    cycle_span=cycle_span,
                    cycle_start_time=cycle_start_time,
                    invocation_state=invocation_state,
                    tracer=tracer,
                    structured_output_context=structured_output_context,
                )
                async for tool_event in tool_events:
                    yield tool_event
    
                return
    
            # End the cycle and return results
            agent.event_loop_metrics.end_cycle(cycle_start_time, cycle_trace, attributes)
            if cycle_span:
                tracer.end_event_loop_cycle_span(
                    span=cycle_span,
                    message=message,
                )
        except EventLoopException as e:
            if cycle_span:
                tracer.end_span_with_error(cycle_span, str(e), e)
    
            # Don't yield or log the exception - we already did it when we
            # raised the exception and we don't need that duplication.
            raise
        except (ContextWindowOverflowException, MaxTokensReachedException) as e:
            # Special cased exceptions which we want to bubble up rather than get wrapped in an EventLoopException
            if cycle_span:
                tracer.end_span_with_error(cycle_span, str(e), e)
            raise e
        except Exception as e:
            if cycle_span:
                tracer.end_span_with_error(cycle_span, str(e), e)
    
            # Handle any other exceptions
            yield ForceStopEvent(reason=e)
            logger.exception("cycle failed")
>           raise EventLoopException(e, invocation_state["request_state"]) from e
E           strands.types.exceptions.EventLoopException: An error occurred (ValidationException) when calling the Converse operation: This model doesn't support the toolConfig.toolChoice.any field. Remove toolConfig.toolChoice.any and try again.

src/strands/event_loop/event_loop.py:220: EventLoopException

Additional Context

The following integration test fails intermittently, since whether the error is triggered depends on whether the model returns the structured output on the first attempt.

"""
Comprehensive integration tests for structured output passed into the agent functionality.
"""

from pydantic import BaseModel, Field

from strands import Agent
from strands.models.bedrock import BedrockModel
from strands.tools import tool


class MathResult(BaseModel):
    """Math operation result."""

    operation: str = Field(description="the performed operation")
    result: int = Field(description="the result of the operation")


# ========== Tool Definitions ==========


@tool
def calculator(operation: str, a: float, b: float) -> float:
    """Simple calculator tool for testing."""
    if operation == "add":
        return a + b
    elif operation == "subtract":
        return a - b
    elif operation == "multiply":
        return a * b
    elif operation == "divide":
        return a / b if b != 0 else 0
    elif operation == "power":
        return a**b
    else:
        return 0


# ========== Test Classes ==========


class TestBedrockLlamaModelsToolUsageWithStructuredOutput:
    """Test structured output with tool usage."""

    def test_multi_turn_calculator_tool_use_with_structured_output(self):
        """Test tool usage with structured output."""
        model = BedrockModel(
            model_id="us.meta.llama4-maverick-17b-instruct-v1:0",
            region_name="us-east-1",
            max_tokens=2048,
            streaming=False,
        )
        agent = Agent(model=model, tools=[calculator])

        result = agent("Calculate 2 + 2 using the calculator tool", structured_output_model=MathResult)

        assert result.structured_output is not None
        assert isinstance(result.structured_output, MathResult)
        assert result.structured_output.result == 4
        # Check that tool was called
        assert result.metrics.tool_metrics is not None
        assert len(result.metrics.tool_metrics) > 0
        result = agent("What is 5 multiplied by 3? Use the calculator tool.", structured_output_model=MathResult)
        assert result.structured_output is not None
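
One possible client-side mitigation (purely illustrative; `strip_unsupported_tool_choice` and the model-family prefix list are hypothetical, not part of Strands) would be to drop `toolChoice` from the Converse request for model families that do not support forced tool choice:

```python
import copy

# Hypothetical helper: remove toolConfig.toolChoice from a Converse
# request when the target model does not support forced tool choice.
# The prefix list is an assumption based on the error in this issue.
UNSUPPORTED_TOOL_CHOICE_PREFIXES = ("us.meta.llama", "meta.llama")


def strip_unsupported_tool_choice(request: dict) -> dict:
    """Return a copy of the request with toolChoice removed if unsupported."""
    model_id = request.get("modelId", "")
    if not model_id.startswith(UNSUPPORTED_TOOL_CHOICE_PREFIXES):
        return request
    cleaned = copy.deepcopy(request)
    tool_config = cleaned.get("toolConfig")
    if tool_config is not None:
        tool_config.pop("toolChoice", None)
    return cleaned


request = {
    "modelId": "us.meta.llama4-maverick-17b-instruct-v1:0",
    "toolConfig": {"tools": [], "toolChoice": {"any": {}}},
}
cleaned = strip_unsupported_tool_choice(request)
print("toolChoice" in cleaned["toolConfig"])  # → False
```

This only avoids the ValidationException; the SDK would still need a fallback way to elicit structured output from models that cannot be forced to call a tool.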

Possible Solution

No response

Related Issues

No response

Metadata

Labels: bug