♻️ Reproduction Steps
- Set up a CopilotKit v2 `<CopilotChat>` with an AG-UI `HttpAgent` backend that has server-side tools (e.g., MCP tools)
- Send a message that triggers a tool call (e.g., "What's the weather in Seattle?")
- Agent streams `TOOL_CALL_START` → `TOOL_CALL_ARGS` → `TOOL_CALL_END` → `TOOL_CALL_RESULT` → `TEXT_MESSAGE_*` → `RUN_FINISHED` (see the sketch after this list)
- First turn completes successfully — user sees the tool result and text response
- Send a second message (e.g., "What is 5 * 33?")
- The backend forwards the replayed conversation history to the LLM provider, which rejects the request with 400 Bad Request
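For concreteness, the first-turn stream looks roughly like the sketch below. The event type names are taken from the trace above; the payload field names (`toolCallId`, `toolCallName`, `delta`, `content`) are our approximation of the AG-UI event shapes, not an authoritative schema.

```python
# Paraphrased first-turn AG-UI event sequence (field names are assumptions).
first_turn_events = [
    {"type": "TOOL_CALL_START", "toolCallId": "call_abc", "toolCallName": "get_weather"},
    {"type": "TOOL_CALL_ARGS", "toolCallId": "call_abc", "delta": '{"city": "Seattle"}'},
    {"type": "TOOL_CALL_END", "toolCallId": "call_abc"},
    # The payload below is what never makes it back into the history
    # that CopilotKit replays on turn two.
    {"type": "TOOL_CALL_RESULT", "toolCallId": "call_abc", "content": "Seattle: 15°C, partly cloudy"},
    # ... TEXT_MESSAGE_* events streaming the assistant's text ...
    {"type": "RUN_FINISHED"},
]
```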
Root Cause
When CopilotKit replays the conversation history on the second turn, it sends the `messages` array to the AG-UI backend without the tool result message. The history looks like:

```
user: "What's the weather in Seattle?"
assistant: [tool_calls: [{id: "call_abc", function: {name: "get_weather", ...}}]]
assistant: "The weather in Seattle is 15°C and partly cloudy."
user: "What is 5 * 33?"
```
The correct history should be:

```
user: "What's the weather in Seattle?"
assistant: [tool_calls: [{id: "call_abc", function: {name: "get_weather", ...}}]]
tool: {tool_call_id: "call_abc", content: "Seattle: 15°C, partly cloudy"}
assistant: "The weather in Seattle is 15°C and partly cloudy."
user: "What is 5 * 33?"
```
The `tool` result message for `call_abc` is completely absent. This causes both OpenAI and Azure OpenAI to reject the request with a 400 error, because the `tool_calls` in the assistant message have no corresponding `tool` result.
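The rejection can be reproduced without CopilotKit by replaying the broken history directly against the Chat Completions API. This is a minimal sketch assuming the `openai` Python package; the model name and arguments string are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Mirrors the second-turn history the backend receives.
broken_history = [
    {"role": "user", "content": "What's the weather in Seattle?"},
    {
        "role": "assistant",
        "tool_calls": [{
            "id": "call_abc",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Seattle"}'},
        }],
    },
    # Missing: {"role": "tool", "tool_call_id": "call_abc", "content": ...}
    {"role": "assistant", "content": "The weather in Seattle is 15°C and partly cloudy."},
    {"role": "user", "content": "What is 5 * 33?"},
]

# Raises openai.BadRequestError (HTTP 400) because the tool_calls entry
# has no tool message responding to it.
client.chat.completions.create(model="gpt-4o", messages=broken_history)
```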
✅ Expected Behavior
CopilotKit should persist `TOOL_CALL_RESULT` events in its internal conversation state and include the corresponding `tool`-role messages when replaying history to the backend on subsequent turns.
❌ Actual Behavior
Tool result messages are dropped from the conversation state. On the second turn, the backend receives `assistant(tool_calls)` followed directly by `assistant(text)`, with no `tool(result)` in between. The LLM provider (OpenAI/Azure OpenAI) rejects this with a 400 error.

Error from Azure OpenAI Responses API:

```
Code: agent_run_error_event
Message: An internal error has occurred while streaming events.
```
The underlying cause is a 400 from the LLM: the message history violates the constraint that every `tool_calls` entry must have a matching `tool` result message.
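The constraint is easy to check mechanically. A small helper like the hypothetical `find_orphan_tool_calls` below (a presence check only; providers additionally require the `tool` message to directly follow the assistant turn) returns `["call_abc"]` for the replayed history above:

```python
def find_orphan_tool_calls(messages: list[dict]) -> list[str]:
    """Return tool_call ids that no `tool` message ever answers."""
    answered = {m.get("tool_call_id") for m in messages if m.get("role") == "tool"}
    orphans = []
    for m in messages:
        if m.get("role") == "assistant":
            for tc in m.get("tool_calls") or []:
                if tc["id"] not in answered:
                    orphans.append(tc["id"])
    return orphans
```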
Workaround
We subclass `AgentFrameworkAgent` on the backend to inject synthetic tool result dicts into the raw messages before they enter the library pipeline:
```python
class ToolGapFixAgent(AgentFrameworkAgent):
    async def run(self, input_data):
        messages = input_data.get("messages")
        if messages:
            # Repair the replayed history before the library sees it.
            input_data = {**input_data, "messages": fix_tool_gaps(messages)}
        async for event in super().run(input_data):
            yield event
```
Here `fix_tool_gaps` scans for `assistant` messages with `tool_calls` not followed by a `tool` result and injects synthetic ones; a sketch follows.
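A minimal sketch of `fix_tool_gaps`, assuming OpenAI-style role dicts; the synthetic content string is our own choice and exists only to satisfy the provider's pairing rule:

```python
def fix_tool_gaps(messages: list[dict]) -> list[dict]:
    """Inject a synthetic `tool` result directly after any assistant
    `tool_calls` that the history never answers."""
    answered = {m.get("tool_call_id") for m in messages if m.get("role") == "tool"}
    fixed: list[dict] = []
    for msg in messages:
        fixed.append(msg)
        if msg.get("role") != "assistant":
            continue
        for tc in msg.get("tool_calls") or []:
            if tc["id"] not in answered:
                # Placed immediately after the assistant message so the
                # provider sees a matching tool result for every id.
                fixed.append({
                    "role": "tool",
                    "tool_call_id": tc["id"],
                    "content": "Tool result omitted from replayed history.",
                })
    return fixed
```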
Environment
- `@copilotkit/react-core`: 1.55.3
- `@ag-ui/client`: 0.0.52
- `agent-framework-ag-ui`: 1.0.0b260311 (Python backend)
- LLM provider: Azure OpenAI (Responses API)
- Transport: AG-UI `HttpAgent` via the `agents__unsafe_dev_only` prop
Frontend Setup
```tsx
const agent = new HttpAgent({ url: "/ag-ui" });

<CopilotKit
  runtimeUrl="/ag-ui"
  agent="my-agent"
  agents__unsafe_dev_only={{ "my-agent": agent }}
  useSingleEndpoint={false}
>
  <CopilotChat />
</CopilotKit>
```
Related Issues
- #2504: `tool_use` ids were found without `tool_result` blocks (Anthropic error; different root cause but same symptom)