Think: read.toModelOutput throws TypeError on truncated tool output, stalling all further inference #1498

@rwdaigle

Description

Describe the bug

Once a chat using Think's workspace read tool grows past the default keepRecent=4 window, truncateOlderMessages (packages/agents/src/experimental/memory/utils/compaction.ts) rewrites each older tool part's output: { … } into a string of the form "<JSON.stringify(original).slice(0, maxToolOutput)>… [truncated N chars]", destroying the structured object shape. On the next turn, read.toModelOutput (packages/think/src/tools/workspace.ts) evaluates "error" in output against that string and throws synchronously:

TypeError: Cannot use 'in' operator to search for 'error' in {"path":"/foo.ts","content":"…"}... [truncated 6823 chars]

The throw originates inside the AI SDK's tool-output materialization, so the model request never goes out and every subsequent inference attempt for that chat stalls until history is cleared.
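The failure mode reproduces outside the SDK in a few lines — a minimal sketch (the object shape and hasErrorKey helper are illustrative, not the real tool output type or workspace.ts code):

```typescript
// Structured output as the tool originally produced it.
const structured: unknown = { path: "/foo.ts", content: "export const x = 1;" };

// What compaction leaves behind: a truncated JSON string, not an object.
const truncated: unknown =
  JSON.stringify(structured)!.slice(0, 30) + "… [truncated]";

// Mirrors the membership check inside read.toModelOutput.
function hasErrorKey(output: unknown): boolean {
  // The cast is erased at runtime; `in` throws TypeError on a string primitive.
  return "error" in (output as Record<string, unknown>);
}

console.log(hasErrorKey(structured)); // false — objects are fine
try {
  hasErrorKey(truncated);
} catch (e) {
  console.log(e instanceof TypeError); // true — the same TypeError as above
}
```

Because the throw is synchronous, there is no rejected promise to catch downstream; the request-building step itself dies.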

This behavior survives #1456 — that PR's description explicitly notes that truncation still trims tool outputs >500 chars in messages older than the last 4, so the trigger condition for this crash is unchanged after it lands.

To Reproduce

Steps to reproduce the behavior:

  1. Build a Think agent that exposes the workspace read tool.
  2. Send a chat message that causes the agent to read a file with ≥ ~500 chars of content early in the conversation.
  3. Send ≥ 5 more turns (enough to push the read tool call out of the keepRecent=4 window).
  4. Send one more user message.
  5. See TypeError: Cannot use 'in' operator to search for 'error' in <string> thrown from read.toModelOutput. The chat stalls; subsequent inference attempts repeat the same throw.

Expected behavior

read.toModelOutput should tolerate a non-object output value and emit it as a plain text part — the same shape the AI SDK's default createToolModelOutput produces for a tool that doesn't define a custom toModelOutput. Truncation of an older read result should not be able to brick the chat.

A root-cause alternative (or complement) would be to have truncateOlderMessages preserve a structured shape (e.g. { truncated: true, summary: "<…>" }) rather than replacing output with a raw string, so any tool's toModelOutput keeps seeing the type its union claims.
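A hedged sketch of that compaction-side shape — the function name, threshold, and field names here are illustrative, not the current compaction.ts API:

```typescript
// Replace an oversized tool output with a structured stand-in instead of a raw
// string, so downstream checks like `"error" in output` stay safe to evaluate.
type TruncatedOutput = {
  truncated: true;
  summary: string;
  omittedChars: number;
};

function truncateToolOutput(output: unknown, maxChars = 500): unknown {
  const serialized = JSON.stringify(output) ?? String(output);
  if (serialized.length <= maxChars) return output; // small enough: keep as-is
  const replacement: TruncatedOutput = {
    truncated: true,
    summary: serialized.slice(0, maxChars),
    omittedChars: serialized.length - maxChars,
  };
  return replacement;
}
```

With this shape, a tool's toModelOutput can still branch on object properties (and could even surface `truncated: true` to the model), rather than receiving a value outside its declared union.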

Screenshots

N/A — failure is a synchronous TypeError in the inference loop, no UI artifact.

Version:

  • @cloudflare/think@0.5.3
  • agents@0.12.3
  • ai@6.0.170

Additional context

Proposed fix (small, defensive — what we ship locally as a Bun patch): add a non-object guard at the top of read.toModelOutput:

toModelOutput: async ({ input, output }) => {
  if (typeof output !== "object" || output === null) {
    return {
      type: "text",
      value: typeof output === "string" ? output : String(output),
    };
  }
  if ("error" in output) { /* … existing logic … */ }
  // …
},

Happy to send a PR — the change is ~10 lines in packages/think/src/tools/workspace.ts. The guard is worth doing independently of any compaction-side change: today, any tool author whose toModelOutput checks property membership on its output hits the same opaque in-operator TypeError, with no hint that compaction rewrote the value.

Runtime: Cloudflare Workers.
