[Flight] Fix detached ArrayBuffer error when streaming typed arrays
#34849
Conversation
Comparing: 1324e1b...9aa952a
Critical size changes: includes critical production bundles, as well as any change greater than 2%.
Significant size changes: includes any change greater than 0.2%. (No significant changes)
Let's discuss alternatives. We explicitly try to avoid cloning large arrays.
It's a little unfortunate in the case of passing a raw value that isn't part of, for example, a Response, Blob, or Stream.
But one-shot iterables, for example, are also consumed in a similar way for performance. If you pass a one-shot iterable or a ReadableStream, it also gets consumed by the algorithm.
Maybe we should at least clone debug values such as typed arrays, since they're not actually part of the output, but things passed as props to a client component, for example, should be able to be consumed.
Regardless, the fix is likely to clone specific types of values in specific contexts (e.g. when serializing a TypedArray but not the contents of a Blob), not at the level of the stream.
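A targeted clone of that shape could look like the following sketch (hypothetical illustration, not the PR's code; `cloneView` is an invented name). Note that `ArrayBuffer.prototype.slice` always copies, whereas `Buffer.prototype.slice` returns a view of the same memory, which is why the copy goes through the underlying ArrayBuffer:

```javascript
// Hypothetical sketch: copy only TypedArray/DataView values at serialization
// time, so a downstream byte stream transfer can't detach the caller's buffer.
function cloneView(view) {
  // ArrayBuffer.prototype.slice copies the bytes; this works for any view
  // (Uint8Array, DataView, Node Buffer) regardless of its byteOffset.
  const copy = view.buffer.slice(
    view.byteOffset,
    view.byteOffset + view.byteLength
  );
  return new Uint8Array(copy);
}

const original = Buffer.allocUnsafe(3000).fill(7);
const clone = cloneView(original);
console.log(clone.buffer === original.buffer); // false: independent memory
console.log(clone[0]); // 7: same contents
```

A Blob or ReadableStream would skip this path entirely and be consumed as before; only small standalone views pay the copy.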
Using `renderToReadableStream` in Node.js with binary data from `fs.readFileSync` or `Buffer.allocUnsafe` could cause downstream consumers (like compression middleware) to fail with "Cannot perform Construct on a detached ArrayBuffer".

The issue occurs because Node.js uses an 8192-byte Buffer pool for small allocations (< 4KB). When React's `VIEW_SIZE` was 2KB, files between ~2KB and 4KB would be passed through as views of pooled buffers rather than copied into `currentView`. ByteStreams (`type: 'bytes'`) detach ArrayBuffers during transfer, which corrupts the shared Buffer pool and causes subsequent Buffer operations to fail.

Increasing `VIEW_SIZE` from 2KB to 4KB ensures all chunks smaller than 4KB are copied into `currentView` (which uses a dedicated 4KB buffer outside the pool), while chunks 4KB or larger don't use the pool anyway. No pooled buffers are ever exposed to ByteStream detachment.

This adds 2KB memory per active stream, copies chunks in the 2-4KB range instead of passing them as views (small CPU cost), and buffers up to 2KB more data before flushing. However, it avoids duplicating large binary data (which copying everything would require, like the Edge entry point currently does in `typedArrayToBinaryChunk`).
Updated from 72e2b4b to 3d1193a.
Ideally we should also update the Edge entry point to not clone, and see what happens there.