Bug in langchain adapter: toBaseMessages creates orphaned AIMessages with tool_calls in multi-turn conversations #11415

@tuntisz

Description

When using @ai-sdk/langchain with LangGraph agents that have tools, multi-turn conversations fail with:

400 An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_xxxxx

Reproduction

Repository: https://github.com/tuntisz/ai-sdk-langchain-bug-repro

git clone https://github.com/tuntisz/ai-sdk-langchain-bug-repro
cd ai-sdk-langchain-bug-repro
pnpm install
pnpm test

The tests fail, showing:

AIMessages with tool_calls: 2
ToolMessages: 1
ORPHANED tool call IDs: [ 'call_8xtoEZ2bDLCMkKhK1wQ1Y3XC' ]
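The check the repro tests perform can be sketched as follows. This is a simplified stand-in, not the repro's actual test code: the message shapes are minimal substitutes for LangChain's AIMessage/ToolMessage, keeping only the fields involved in the mismatch.

```typescript
// Minimal stand-ins for the LangChain message types involved.
type Msg =
  | { role: "ai"; tool_calls?: { id: string }[] }
  | { role: "tool"; tool_call_id: string };

// A tool call is orphaned when no tool message answers its id.
function orphanedToolCallIds(messages: Msg[]): string[] {
  const answered = new Set(
    messages.flatMap((m) => (m.role === "tool" ? [m.tool_call_id] : [])),
  );
  return messages.flatMap((m) =>
    m.role === "ai" && m.tool_calls
      ? m.tool_calls.filter((c) => !answered.has(c.id)).map((c) => c.id)
      : [],
  );
}

// Mirrors the failing state above: two AI tool calls, one tool response.
const orphans = orphanedToolCallIds([
  { role: "ai", tool_calls: [{ id: "call_HISTORICAL" }] },
  { role: "ai", tool_calls: [{ id: "call_CURRENT" }] },
  { role: "tool", tool_call_id: "call_CURRENT" },
]);
console.log(orphans); // ["call_HISTORICAL"]
```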

Visual reproduction

  1. Set OPENAI_API_KEY and run pnpm dev
  2. Open http://localhost:3000
  3. Type: do maths with 123 → works
  4. Type: do maths with 345 → works
  5. Type: do maths with 999 → ERROR

Root Cause

The bug is in toUIMessageStream's handling of LangGraph values events:

  1. When LangGraph emits a values event, it contains the full message history including AIMessages with tool_calls from previous turns

  2. processLangGraphEvent (case "values", ~line 540-560 in adapter.ts) iterates through ALL messages and emits tool-input-start/tool-input-available for any AIMessage with tool_calls that hasn't been emitted in the current stream

  3. The bug: It emits these events for historical tool calls but does NOT emit corresponding tool-output-available events for them

  4. The client builds a UIMessage with tool parts in state input-available (no output)

  5. When toBaseMessages converts this back, it creates an AIMessage with tool_calls but no ToolMessage follows
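Steps 4 and 5 can be illustrated with a simplified sketch of the conversion. This is not the actual adapter code; the part shape and the `toBaseMessagesSketch` name are hypothetical, but it shows why an `input-available` part with no output produces an unanswered `tool_call`.

```typescript
// Simplified UI tool part: only the fields relevant to the bug.
type ToolPart = {
  toolCallId: string;
  toolName: string;
  input: unknown;
  state: "input-available" | "output-available";
  output?: unknown;
};

// Hypothetical sketch of the conversion: every tool part becomes a
// tool_call on the AI message, but a tool message is only produced
// when the part actually has an output.
function toBaseMessagesSketch(parts: ToolPart[]) {
  const messages: Array<Record<string, unknown>> = [];
  messages.push({
    role: "ai",
    tool_calls: parts.map((p) => ({
      id: p.toolCallId,
      name: p.toolName,
      args: p.input,
    })),
  });
  for (const p of parts) {
    if (p.state === "output-available") {
      messages.push({ role: "tool", tool_call_id: p.toolCallId, content: p.output });
    }
    // "input-available" parts fall through: no tool message is emitted,
    // so the tool_call above is left without a response.
  }
  return messages;
}
```

Feeding it one historical part (no output) and one current part (with output) yields one AI message carrying two tool_calls but only one tool message, exactly the orphaned shape the API rejects.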

Evidence from stream

On the third request, the stream shows:

{"type":"start"}
{"type":"tool-input-start","toolCallId":"call_HISTORICAL",...}    <- Re-emitted from history
{"type":"tool-input-available","toolCallId":"call_HISTORICAL",...}
{"type":"start-step"}
{"type":"tool-input-start","toolCallId":"call_CURRENT",...}       <- Current turn
{"type":"tool-input-available","toolCallId":"call_CURRENT",...}
{"type":"tool-output-available","toolCallId":"call_CURRENT",...}  <- Only current gets output!
{"type":"error","errorText":"400 An assistant message with 'tool_calls'..."}

Suggested Fix

In processLangGraphEvent case "values", when iterating through historical messages, either:

  1. Skip emitting tool events for historical tool calls: they're already in the client's message history and shouldn't be re-emitted
  2. Or emit both input AND output events for historical tool calls so they stay consistent

Option 1 seems cleaner since the client already has the complete tool call history.
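Option 1 could look roughly like the following. This is a hypothetical sketch, not the adapter's real code: the function name, the `historicalIds` parameter, and the `emit` callback are assumptions; only the event type names come from the stream shown above.

```typescript
// Track tool calls this stream has already emitted, so a later
// "values" event doesn't re-emit them.
const emittedToolCallIds = new Set<string>();

// Hypothetical handler for a LangGraph "values" event: emit tool
// input events only for tool calls that are neither historical nor
// already emitted in this stream.
function handleValuesEvent(
  messages: Array<{ tool_calls?: { id: string; name: string }[] }>,
  historicalIds: Set<string>,
  emit: (event: { type: string; toolCallId: string }) => void,
) {
  for (const message of messages) {
    for (const call of message.tool_calls ?? []) {
      // Skip prior-turn tool calls: the client already holds them,
      // together with their outputs, in its message history.
      if (historicalIds.has(call.id) || emittedToolCallIds.has(call.id)) continue;
      emittedToolCallIds.add(call.id);
      emit({ type: "tool-input-start", toolCallId: call.id });
      emit({ type: "tool-input-available", toolCallId: call.id });
    }
  }
}
```

With this filter, the third request's stream would carry events only for call_CURRENT, and toBaseMessages would never see an input-only historical tool part.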

Environment

  • @ai-sdk/langchain: 2.0.3
  • ai: 6.0.3
  • langchain: 1.2.3
  • @langchain/openai: 1.2.0
  • Node.js: 20.x

Screenshot

Notice that each turn with a tool call shows the prior turn's tool call as started but never completed. The request fails on the third message that triggers a tool call.

Labels

ai/core, bug
