**Describe the bug**
Once a chat using Think's workspace `read` tool grows past the default `keepRecent=4`, `truncateOlderMessages` (packages/agents/src/experimental/memory/utils/compaction.ts) rewrites each older tool part's `output: { … }` to a string of the form `"<JSON.stringify(original).slice(0, maxToolOutput)>… [truncated N chars]"`. The structured object shape is destroyed. On the next turn, `read.toModelOutput` (packages/think/src/tools/workspace.ts) immediately runs `"error" in output` on that string and throws synchronously:

```
TypeError: Cannot use 'in' operator to search for 'error' in {"path":"/foo.ts","content":"…"}... [truncated 6823 chars]
```
The throw originates inside the AI SDK's tool-output materialization, so the model request never goes out and every subsequent inference attempt for that chat stalls until history is cleared.
The crash condition survives #1456: that PR's description explicitly notes that truncation still trims tool outputs >500 chars in messages older than the last 4, so the trigger for this crash is unchanged after it lands.
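The failure class can be reproduced without the SDK at all: the `in` operator requires an object on its right-hand side, so once compaction has stringified the output, the unguarded membership check throws. A minimal standalone sketch (the helper name and the cast are illustrative, not repo code):

```typescript
// Minimal reproduction of the crash class: `in` on a string throws a
// TypeError at runtime, which is exactly what the unguarded membership
// check in read.toModelOutput hits after compaction stringifies output.
const compactedOutput: unknown =
  '{"path":"/foo.ts","content":"…"}... [truncated 6823 chars]';

function classifyOutput(output: unknown): string {
  // Mirrors the unguarded check; at runtime `output` is really a string.
  if ("error" in (output as object)) {
    return "error-branch";
  }
  return "success-branch";
}

try {
  classifyOutput(compactedOutput);
} catch (e) {
  console.log((e as Error).name); // prints "TypeError"
}
```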
**To Reproduce**
Steps to reproduce the behavior:
- Build a Think agent that exposes the workspace `read` tool.
- Send a chat message that causes the agent to read a file with ≥ ~500 chars of content early in the conversation.
- Send ≥ 5 more turns (enough to push the `read` tool call out of the `keepRecent=4` window).
- Send one more user message.
- See `TypeError: Cannot use 'in' operator to search for 'error' in <string>` thrown from `read.toModelOutput`. The chat stalls; subsequent inference attempts repeat the same throw.
**Expected behavior**
`read.toModelOutput` should tolerate a non-object `output` value and emit it as a plain text part — the same shape the AI SDK's default `createToolModelOutput` produces for a tool that doesn't define a custom `toModelOutput`. Truncation of an older `read` result should not be able to brick the chat.
A root-cause alternative (or complement) would be to have `truncateOlderMessages` preserve a structured shape (e.g. `{ truncated: true, summary: "<…>" }`) rather than replacing `output` with a raw string, so any tool's `toModelOutput` keeps seeing the type its union claims.
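A hedged sketch of that compaction-side alternative (not the actual `truncateOlderMessages` implementation; the function name and return shape here are assumptions): truncate the serialized payload while keeping the value an object.

```typescript
// Illustrative only: keep the tool output structured after truncation so
// any tool's toModelOutput still receives an object, never a bare string.
// `truncateStructured` and its return shape are assumptions, not repo code.
interface TruncatedOutput {
  truncated: true;
  summary: string;
}

function truncateStructured(output: unknown, maxToolOutput: number): unknown {
  const json = JSON.stringify(output) ?? "";
  if (json.length <= maxToolOutput) {
    return output; // small outputs pass through untouched
  }
  const dropped = json.length - maxToolOutput;
  const result: TruncatedOutput = {
    truncated: true,
    summary: `${json.slice(0, maxToolOutput)}… [truncated ${dropped} chars]`,
  };
  return result;
}
```

Because the replacement is still an object, expressions like `"error" in output` remain legal and existing `toModelOutput` implementations keep working.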
**Screenshots**
N/A — failure is a synchronous `TypeError` in the inference loop, no UI artifact.
**Version:**
- `@cloudflare/think@0.5.3`
- `agents@0.12.3`
- `ai@6.0.170`
**Additional context**
Proposed fix (small, defensive — what we ship locally as a Bun patch): add a non-object guard at the top of `read.toModelOutput`:
```ts
toModelOutput: async ({ input, output }) => {
  if (typeof output !== "object" || output === null) {
    return {
      type: "text",
      value: typeof output === "string" ? output : String(output),
    };
  }
  if ("error" in output) { /* … existing logic … */ }
  // …
},
```
Happy to send a PR — the change is ~10 lines in packages/think/src/tools/workspace.ts. The guard is worth doing independently of any compaction-side change: any tool author whose `toModelOutput` checks property membership on its `output` today crashes on the `in` operator with no helpful error.
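The guard is also easy to unit-test in isolation. A sketch with it pulled out as a standalone helper (the `coerceNonObjectOutput` name and `TextPart` type are local to this sketch, not AI SDK exports):

```typescript
// Illustrative extraction of the proposed guard; returns a text part for
// non-object outputs and null when the existing structured logic should run.
type TextPart = { type: "text"; value: string };

function coerceNonObjectOutput(output: unknown): TextPart | null {
  if (typeof output !== "object" || output === null) {
    return {
      type: "text",
      value: typeof output === "string" ? output : String(output),
    };
  }
  return null; // structured output: caller falls through to existing logic
}
```

Truncated strings become plain text parts; untouched structured outputs return `null` and take the existing `"error" in output` path.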
Runtime: Cloudflare Workers.