Bug Description
When the chat context contains tool calls, attempting to get an LLM response with tool_choice=None raises the ValidationException shown below. This happens whenever a tool returns both an agent object and a string response during an agent handoff: the handoff sets draining=True, and the string result triggers an LLM call on the old agent. Because draining is True when that request is constructed, tool_choice is forced to None.
I suspect this will also happen if session.generate_reply(tool_choice=None) is called when the chat context history already contains tool calls.
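To illustrate the failing combination, here is a minimal sketch of the suspected request construction (build_converse_request and its exact logic are hypothetical, not the actual livekit-plugins-aws internals): when tool_choice is None, toolConfig is dropped from the Converse request even though the message history still contains toolUse/toolResult content blocks, which is exactly the combination Bedrock's ConverseStream rejects.

```python
# Hypothetical sketch of the suspected request construction. The helper
# name and branching are illustrative, not the plugin's real code.

def build_converse_request(messages, tools, tool_choice):
    """Mimics a builder that drops toolConfig when tool_choice is None."""
    request = {
        "modelId": "us.anthropic.claude-sonnet-4-6",
        "messages": messages,
    }
    if tools and tool_choice is not None:
        # toolConfig only attached when tools may be called
        request["toolConfig"] = {"tools": tools}
    return request

# History left behind by the handoff tool call and its string result:
history = [
    {"role": "assistant", "content": [
        {"toolUse": {"toolUseId": "t1", "name": "handoff", "input": {}}},
    ]},
    {"role": "user", "content": [
        {"toolResult": {"toolUseId": "t1",
                        "content": [{"text": "Transferring the user to SomeAgent"}]}},
    ]},
]
tools = [{"toolSpec": {"name": "handoff",
                       "inputSchema": {"json": {"type": "object"}}}}]

request = build_converse_request(history, tools, tool_choice=None)

# toolUse/toolResult blocks are present but toolConfig is missing --
# Bedrock raises ValidationException for this request shape.
assert "toolConfig" not in request
assert any("toolUse" in b for m in request["messages"] for b in m["content"])
```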
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/livekit/plugins/aws/llm.py", line 227, in _run
    response = await client.converse_stream(**self._opts)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiobotocore/context.py", line 36, in wrapper
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiobotocore/client.py", line 424, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the ConverseStream operation: The toolConfig field must be defined when using toolUse and toolResult content blocks.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/livekit/agents/llm/fallback_adapter.py", line 176, in _try_generate
    async for chunk in stream:
  File "/usr/local/lib/python3.11/site-packages/livekit/agents/llm/llm.py", line 381, in __anext__
    raise exc  # noqa: B904
    ^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/livekit/agents/llm/llm.py", line 194, in _traceable_main_task
    await self._main_task()
  File "/usr/local/lib/python3.11/site-packages/livekit/agents/llm/llm.py", line 219, in _main_task
    return await self._run()
           ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/livekit/plugins/aws/llm.py", line 246, in _run
    raise APIConnectionError(
livekit.agents._exceptions.APIConnectionError: aws bedrock llm: error generating content: An error occurred (ValidationException) when calling the ConverseStream operation: The toolConfig field must be defined when using toolUse and toolResult content blocks.
Expected Behavior
- Using tool_choice=None on a conversation with tool calls in the history succeeds.
- Returning an agent object and a string result from a tool allows the old agent to generate its reply successfully.
Reproduction Steps
from livekit import agents
from livekit.agents import AgentServer, AgentSession, Agent, room_io, function_tool, RunContext
from livekit.plugins import aws


class AssistantOne(Agent):
    def __init__(self) -> None:
        super().__init__(instructions="You are a helpful voice AI assistant.")

    @function_tool()
    async def handoff(
        self,
        context: RunContext,
    ):
        """Handoff to next agent"""
        return AssistantTwo(), "Transferring the user to SomeAgent"


class AssistantTwo(Agent):
    def __init__(self) -> None:
        super().__init__(instructions="You are a helpful voice AI assistant.")


server = AgentServer()


@server.rtc_session(agent_name="my-agent")
async def my_agent(ctx: agents.JobContext):
    session = AgentSession(
        llm=aws.LLM(model="us.anthropic.claude-sonnet-4-6"),
    )
    session.output.set_audio_enabled(False)
    await session.start(
        room=ctx.room,
        agent=AssistantOne(),
        room_options=room_io.RoomOptions(),
    )
    await session.generate_reply(
        instructions="hand off to the next agent immediately."
    )


if __name__ == "__main__":
    agents.cli.run_app(server)
Operating System
Windows 11
Models Used
Claude Sonnet 4.6 on Bedrock
Package Versions
livekit-agents: 1.5.6
livekit-plugins-aws: 1.5.6
Session/Room/Call IDs
No response
Proposed Solution
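One possible direction (a sketch under assumptions about the plugin internals, not a verified fix): Bedrock requires toolConfig whenever toolUse/toolResult blocks appear in the messages, so instead of dropping toolConfig when tool_choice is None, the request builder could keep toolConfig and express "don't force a tool call" by omitting toolChoice. The helper names below (has_tool_blocks, build_converse_request) are hypothetical.

```python
# Hypothetical fix sketch; helper names are illustrative, not the
# actual livekit-plugins-aws code.

def has_tool_blocks(messages):
    """True if any message carries a toolUse or toolResult content block."""
    return any(
        "toolUse" in block or "toolResult" in block
        for msg in messages
        for block in msg["content"]
    )

def build_converse_request(messages, tools, tool_choice):
    request = {
        "modelId": "us.anthropic.claude-sonnet-4-6",
        "messages": messages,
    }
    # Keep toolConfig whenever the history contains tool blocks, even if
    # tool_choice is None -- Bedrock rejects toolUse/toolResult without it.
    if tools and (tool_choice is not None or has_tool_blocks(messages)):
        request["toolConfig"] = {"tools": tools}
        # Only set toolChoice when a tool call is explicitly forced;
        # omitting it leaves the model free not to call a tool.
        if tool_choice not in (None, "auto"):
            request["toolConfig"]["toolChoice"] = tool_choice
    return request

history = [
    {"role": "assistant", "content": [
        {"toolUse": {"toolUseId": "t1", "name": "handoff", "input": {}}},
    ]},
    {"role": "user", "content": [
        {"toolResult": {"toolUseId": "t1", "content": [{"text": "done"}]}},
    ]},
]
tools = [{"toolSpec": {"name": "handoff",
                       "inputSchema": {"json": {"type": "object"}}}}]

request = build_converse_request(history, tools, tool_choice=None)
assert "toolConfig" in request            # retained despite tool_choice=None
assert "toolChoice" not in request["toolConfig"]
```

A caveat with this approach: since the Converse API has no "none" option for toolChoice, omitting it defaults to auto, so the model could still choose to call a tool; the draining logic would need to tolerate that.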
Additional Context
No response
Screenshots and Recordings
No response