
perf(turbopack-node): sync zero-copy Buffer transfer from upstream PR #91376 (#120)

Merged: xusd320 merged 3 commits into utoo from feat/sync-zero-copy-buffer-pr on Mar 15, 2026

perf(turbopack-node): sync zero-copy Buffer transfer from upstream PR #91376#120
xusd320 merged 3 commits intoutoofrom
feat/sync-zero-copy-buffer-pr

Conversation


@xusd320 xusd320 commented Mar 15, 2026

Syncs changes from upstream PR vercel#91376 into the utoo branch.

Changes

Cherry-picked the two upstream commits:

  • a6ae253 feat(turbopack-node): zero-copy Buffer transfer using Bytes::from_owner
  • 8791718 refactor: use Bytes import instead of fully-qualified bytes::Bytes

Plus an additional fix for the utoo-specific wasm worker (web_worker.rs):

  • Added use bytes::Bytes import
  • Wrapped Uint8Array::to_vec() in Bytes::from() in send_task_message to match the new TaskMessage.data: Bytes type

Summary

Uses Bytes::from_owner(napi::Buffer) to eliminate the Buffer → Vec<u8> memcpy on the JS→Rust path in the worker pool. Unifies the Operation trait to use Bytes for both send and recv.

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the performance of data transfer between JavaScript and Rust components within the Turbopack Node.js worker pool. By adopting a zero-copy strategy for Buffer transfers and standardizing data handling with the Bytes type, it reduces memory allocations and copies on the JS↔Rust message-passing path.

Highlights

  • Zero-copy Buffer Transfer: Implemented zero-copy Buffer transfer from JavaScript to Rust in the worker pool using Bytes::from_owner to eliminate Buffer → Vec<u8> memcpy on the JS→Rust path.
  • Unified Data Handling: Unified the Operation trait to consistently use the Bytes type for both sending and receiving data across the worker pool.
  • WebAssembly Worker Integration: Applied a specific fix for the WebAssembly worker (web_worker.rs) to integrate the new Bytes transfer mechanism, wrapping Uint8Array::to_vec() in Bytes::from().
Changelog
  • Cargo.lock
    • Added the bytes crate as a new dependency.
  • turbopack/crates/turbopack-node/Cargo.toml
    • Declared the bytes crate as a new dependency for the turbopack-node crate.
  • turbopack/crates/turbopack-node/src/evaluate.rs
    • Imported the Bytes type.
    • Modified the Operation trait's recv and send methods to use Bytes instead of Vec<u8>.
    • Updated calls to send to wrap serde_json::to_vec output with Bytes::from.
  • turbopack/crates/turbopack-node/src/process_pool/mod.rs
    • Imported the Bytes type.
    • Adjusted the ChildProcessOperation implementation of the Operation trait to convert Vec<u8> to Bytes on recv and Bytes to Vec<u8> on send.
  • turbopack/crates/turbopack-node/src/worker_pool/operation.rs
    • Imported the Bytes type.
    • Changed the data field in TaskMessage from Vec<u8> to Bytes.
    • Updated MessageChannel types in WorkerPoolOperation and TaskChannels to handle Bytes.
    • Modified recv_task_message_in_worker, send_to_worker, and recv_from_worker to operate with Bytes.
  • turbopack/crates/turbopack-node/src/worker_pool/web_worker.rs
    • Imported the Bytes type.
    • Wrapped the js_sys::Uint8Array::new(&data_js).to_vec() call with Bytes::from() in send_task_message.
  • turbopack/crates/turbopack-node/src/worker_pool/worker_thread.rs
    • Imported the Bytes type.
    • Modified the From<NapiTaskMessage> for TaskMessage implementation to use Bytes::from_owner for zero-copy transfer of napi::Buffer to Bytes.
    • Converted Bytes back to Vec<u8> when creating NapiTaskMessage on receive.
  • turbopack/crates/turbopack-node/tests/pool.rs
    • Updated the send_recv helper function to wrap the message data with bytes::Bytes::from before sending.
Activity
  • No human activity (comments, reviews, etc.) has been recorded on this pull request yet.



@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a performance optimization by enabling zero-copy buffer transfers from JavaScript to Rust using bytes::Bytes and Bytes::from_owner. The core change to unify the Operation trait is well-implemented. However, this unification introduces new memory copies on other code paths, particularly for ChildProcessOperation and for data transfers from Rust back to JavaScript workers. I've added specific comments highlighting these performance regressions for your consideration. Additionally, I've suggested a minor change in a test file to improve code consistency.

Comment on lines +812 to 817

    async fn send(&mut self, message: Bytes) -> Result<()> {
        self.with_process(|process| async move {
-           timeout(Duration::from_secs(30), process.send(message))
+           timeout(Duration::from_secs(30), process.send(message.to_vec()))
                .await
                .context("timeout while sending message")?
                .context("failed to send message")?;


Severity: medium

The unification of the Operation trait to use bytes::Bytes appears to introduce a performance regression for ChildProcessOperation. The send method now needs to copy the Bytes into a Vec<u8> using message.to_vec(), which adds an allocation and copy for every message sent. Was this trade-off considered? Given this is a performance-focused PR, it would be good to clarify if this regression is acceptable.

@@ -166,7 +167,7 @@ pub async fn recv_task_message_in_worker(worker_id: u32) -> napi::Result<NapiTas
            .await?;
        Ok(NapiTaskMessage {
            task_id,
-           data: message.into(),
+           data: Vec::from(message).into(),


Severity: medium

This change from message.into() (on what was a Vec<u8>) to Vec::from(message).into() (where message is now Bytes) introduces a copy for data transfer from Rust to the JS worker. While the JS-to-Rust path is now zero-copy, this change regresses performance on the Rust-to-JS path. Is this an intended trade-off?

Comment on lines +76 to +78

+   op.send(bytes::Bytes::from(serde_json::to_vec(&msg).unwrap()))
+       .await
+       .unwrap();


Severity: medium

For consistency with other files in this PR, it would be better to add use bytes::Bytes; at the top of the file and use Bytes::from(...) here instead of the fully qualified bytes::Bytes::from(...). This aligns with one of the upstream commits (refactor: use Bytes import instead of fully-qualified bytes::Bytes).

Suggested change

-   op.send(bytes::Bytes::from(serde_json::to_vec(&msg).unwrap()))
+   op.send(Bytes::from(serde_json::to_vec(&msg).unwrap()))
        .await
        .unwrap();

@xusd320 force-pushed the feat/sync-zero-copy-buffer-pr branch from a1ebf5f to 8ce8349 on March 15, 2026 03:08
Cherry-picked from PR vercel#91376:
- a6ae253 feat(turbopack-node): zero-copy Buffer transfer using Bytes::from_owner
- 8791718 refactor: use Bytes import instead of fully-qualified bytes::Bytes

Use Bytes::from_owner(napi::Buffer) to eliminate the Buffer → Vec<u8>
memcpy on the JS→Rust path in the worker pool. Unifies the Operation
trait to use Bytes for both send and recv.
@xusd320 force-pushed the feat/sync-zero-copy-buffer-pr branch from 8ce8349 to d247ae4 on March 15, 2026 03:11
@xusd320 merged commit 9ea6932 into utoo on Mar 15, 2026. 11 of 26 checks passed.
