fix(ai): skip stringifying text when streaming partial text#14123
Merged

aayush-kapoor merged 5 commits into main · Apr 7, 2026
Conversation
lgrammel
reviewed
Apr 7, 2026
```ts
  if (result !== undefined) {
    // only send new json if it has changed:
-   const currentJson = JSON.stringify(result.partial);
+   const currentJson = …
```
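A minimal sketch of the serialization logic described in the PR summary (the function name and shape here are assumptions for illustration, not the SDK's actual code): string partials are passed through unchanged, while structured outputs are still stringified for comparison.

```typescript
// Hypothetical sketch of the described fix (not the SDK's code):
// avoid JSON.stringify for plain string partials and only serialize
// structured outputs, which need a stable form for change comparison.
function serializePartial(partial: unknown): string {
  if (typeof partial === 'string') {
    // Strings are compared directly; no per-chunk stringify copy is made.
    return partial;
  }
  // Structured outputs still go through stringify as before.
  return JSON.stringify(partial);
}
```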
Collaborator
Worth adding a comment explaining why this logic is here.
Collaborator
Also, this can lead to downstream issues, because `currentJson` is not JSON in those cases (the variable name is now misleading).
lgrammel
previously approved these changes
Apr 7, 2026
lgrammel
approved these changes
Apr 7, 2026
vercel-ai-sdk bot
pushed a commit
that referenced
this pull request
Apr 7, 2026
## Background

#13839

`streamText` with the default text output called `JSON.stringify` on the full accumulated text on every single streaming chunk, creating increasingly large string copies per stream and causing memory issues.

## Summary

- Skip `JSON.stringify` when the partial output is already a string; compare the text directly, so there is no extra full-string serialization per chunk.
- Structured outputs still go through stringify as before, since they need serialization to compare.

## Manual Verification

Tried reproducing via:

<details>
<summary>repro</summary>

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { run } from '../../lib/run';

run(async () => {
  const result = streamText({
    model: openai.responses('gpt-4o-mini'),
    prompt:
      'Write an extremely detailed 5000-word essay about the history of computing. Include every detail you can.',
  });

  let chunks = 0;
  for await (const textPart of result.textStream) {
    chunks++;
    if (chunks % 100 === 0) {
      const mb = (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(1);
      console.log(`chunk ${chunks} — heap: ${mb}MB`);
    }
  }

  console.log(`\nTotal chunks: ${chunks}`);
  console.log(
    `Final heap: ${(process.memoryUsage().heapUsed / 1024 / 1024).toFixed(1)}MB`,
  );
});
```

</details>

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

fixes #13839
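The quadratic copying behavior described in the background can be shown in isolation. This is illustrative only, not the SDK's code: it counts how many bytes `JSON.stringify` produces when the full accumulated text is re-serialized on every chunk.

```typescript
// Illustrative only (not the SDK's code): stringifying the full accumulated
// text on every chunk serializes an ever-growing string, so total bytes
// produced grow quadratically with stream length.
function totalStringifyBytes(chunks: string[]): number {
  let accumulated = '';
  let copied = 0;
  for (const chunk of chunks) {
    accumulated += chunk;
    // Full serialization of everything received so far:
    copied += JSON.stringify(accumulated).length;
  }
  return copied;
}

// Three 10-char chunks: 12 + 22 + 32 = 66 bytes serialized
// (quotes included), versus only 30 bytes of text actually received.
console.log(
  totalStringifyBytes(['a'.repeat(10), 'b'.repeat(10), 'c'.repeat(10)]),
); // 66
```

Comparing string partials directly avoids all of this per-chunk work.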
Contributor
✅ Backport PR created: #14200
vercel-ai-sdk bot
added a commit
that referenced
this pull request
Apr 7, 2026
…#14200)

This is an automated backport of #14123 to the release-v6.0 branch. FYI @aayush-kapoor

Co-authored-by: Aayush Kapoor <83492835+aayush-kapoor@users.noreply.github.com>
gr2m
pushed a commit
that referenced
this pull request
Apr 7, 2026
Contributor
🚀 Published in: