
Conversation

@xusd320 xusd320 commented Jan 18, 2026

Sync optimization from vercel#86266

@gemini-code-assist
Summary of Changes

Hello @xusd320, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on a performance optimization within the turbopack-node crate, specifically targeting worker communication. By pre-allocating dedicated message channels for each task, the system avoids repeated lookups in shared data structures, leading to more efficient message passing. Additionally, it clarifies the precedence of the worker_pool feature over the process_pool when both are configured.

Highlights

  • Worker Pool Prioritization: The worker_pool feature now takes precedence over process_pool when both are enabled, ensuring a consistent and preferred execution environment for worker tasks. This is enforced via conditional compilation attributes.
  • Optimized Worker Communication: Introduced TaskChannels to pre-allocate message channels for individual tasks. This eliminates the need for HashMap lookups during each message send/receive operation, significantly reducing communication overhead between the main thread and worker threads.
  • Refactored Communication API: The internal API for sending and receiving messages to/from workers has been refactored. Global utility functions for message handling have been removed and replaced with direct usage of the TaskChannels instance associated with each WorkerOperation.
  • Code Cleanup: Removed #[allow(unused)] attributes from several NAPI-bound functions and structs, indicating that these components are now actively used or no longer trigger unused warnings.
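The pre-allocation idea described in the highlights can be illustrated with plain standard-library primitives: instead of locking a shared map on every send, a task acquires its channel handle once up front, keeping the hot path lock-free. This is a simplified, hypothetical model; the names (`TaskChannel`, `Registry`) are stand-ins, not the crate's actual API:

```rust
use std::collections::HashMap;
use std::sync::{mpsc, Arc, Mutex};

// Hypothetical global registry, analogous to the task-routed channel map
// described in the PR.
type Registry = Arc<Mutex<HashMap<u32, mpsc::Sender<String>>>>;

// Pre-allocated handle: the registry is locked once, in `new`,
// rather than on every message.
struct TaskChannel {
    sender: mpsc::Sender<String>,
}

impl TaskChannel {
    fn new(registry: &Registry, task_id: u32) -> (Self, mpsc::Receiver<String>) {
        let (tx, rx) = mpsc::channel();
        // One-time registration so other parties can route messages to this task.
        registry.lock().unwrap().insert(task_id, tx.clone());
        (Self { sender: tx }, rx)
    }

    // Hot path: no map lookup, no lock.
    fn send(&self, msg: String) {
        self.sender.send(msg).unwrap();
    }
}
```

The handle is created once per task and then used for every subsequent send, which is the overhead reduction the PR claims.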

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a performance optimization for worker communication by pre-allocating TaskChannels. This is a solid improvement that avoids repeated hash map lookups in the hot path of sending and receiving messages. The refactoring moves channel acquisition to a one-time operation per task, encapsulating the logic within a new TaskChannels struct which also handles cleanup via its Drop implementation. Additionally, the changes clarify the precedence of the worker_pool feature over process_pool when both are enabled. My review highlights a couple of areas where the use of blocking locks in an asynchronous context could still pose a performance concern under high contention, and I've suggested potential improvements.

Comment on lines +209 to +234
impl TaskChannels {
    /// Create and register channels for a new task.
    /// Channels are inserted into the global maps so JS workers can find them.
    pub(crate) fn new(task_id: u32, worker_id: u32) -> Self {
        let worker_channel = {
            let mut map = WORKER_POOL_OPERATION.worker_routed_channel.lock();
            map.entry(worker_id)
                .or_insert_with(|| Arc::new(MessageChannel::unbounded()))
                .clone()
        };

        let task_channel = {
            let mut map = WORKER_POOL_OPERATION.task_routed_channel.lock();
            map.entry(task_id)
                .or_insert_with(|| Arc::new(MessageChannel::unbounded()))
                .clone()
        };

        Self {
            worker_channel,
            task_channel,
            task_id,
        }
    }


medium

The new function for TaskChannels uses blocking lock() calls to access worker_routed_channel and task_routed_channel. Since this function is called from an async context (WorkerThreadPool::evaluate), these blocking calls can stall the executor's thread if the locks are contended. While this change moves the locking out of the hot send/recv path (which is a great performance improvement!), the blocking nature of the lock itself remains. For even better performance and responsiveness in a highly concurrent scenario, consider using a concurrent hash map (like dashmap) which would allow for non-blocking or more granularly locked access.
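`dashmap` itself is an external crate; the finer-grained locking it offers can be approximated with a standard-library-only sharded map, sketched below. All names here are hypothetical illustrations, not the crate's API:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

const SHARDS: usize = 16;

// A minimal sharded map: each shard has its own lock, so two keys that
// hash to different shards never contend. dashmap applies the same idea
// (with lock-free reads on top).
struct ShardedMap<V> {
    shards: Vec<Mutex<HashMap<u32, V>>>,
}

impl<V: Clone> ShardedMap<V> {
    fn new() -> Self {
        Self {
            shards: (0..SHARDS).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    fn shard(&self, key: u32) -> &Mutex<HashMap<u32, V>> {
        &self.shards[(key as usize) % SHARDS]
    }

    // Equivalent of `map.entry(key).or_insert_with(make).clone()`,
    // but only the owning shard is locked, not the whole map.
    fn get_or_insert_with(&self, key: u32, make: impl FnOnce() -> V) -> V {
        self.shard(key)
            .lock()
            .unwrap()
            .entry(key)
            .or_insert_with(make)
            .clone()
    }

    fn remove(&self, key: u32) -> Option<V> {
        self.shard(key).lock().unwrap().remove(&key)
    }
}
```

Under contention from many concurrent tasks, sharding bounds how many of them can block each other on any single lock.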

Comment on lines +251 to 261
impl Drop for TaskChannels {
    fn drop(&mut self) {
        // Only remove task channel, worker channel is shared across tasks
        WORKER_POOL_OPERATION
            .task_routed_channel
            .lock()
            .remove(&self.task_id);
    }
}


medium

The drop implementation for TaskChannels uses a blocking lock() on task_routed_channel. Since TaskChannels is part of WorkerOperation which is used in an async context, this drop can be called from within an async task. A blocking call here can stall the executor's thread if the lock is contended, which could impact performance and responsiveness. While this blocking behavior existed before this refactoring in WorkerOperation::drop, it's worth considering alternatives to avoid blocking in async code, such as using a concurrent hash map (like dashmap) for task_routed_channel.
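The RAII cleanup pattern under discussion, an entry registered on creation and removed when the guard is dropped, can be sketched as follows. The names (`TaskGuard`, `TaskMap`) are hypothetical stand-ins for the PR's `TaskChannels` and `task_routed_channel`:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical shared registry standing in for the task-routed channel map.
type TaskMap = Arc<Mutex<HashMap<u32, String>>>;

// RAII guard: registers an entry on creation and unregisters it on drop,
// mirroring how TaskChannels removes its task entry in the PR.
struct TaskGuard {
    map: TaskMap,
    task_id: u32,
}

impl TaskGuard {
    fn new(map: TaskMap, task_id: u32, payload: String) -> Self {
        map.lock().unwrap().insert(task_id, payload);
        Self { map, task_id }
    }
}

impl Drop for TaskGuard {
    fn drop(&mut self) {
        // A brief blocking lock: the reviewer's point is that even this
        // short critical section can stall an async executor thread
        // if the lock is contended.
        self.map.lock().unwrap().remove(&self.task_id);
    }
}
```

The guarantee the pattern buys is that the entry is removed on every exit path, including panics and early returns, without explicit cleanup calls.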

…d TaskChannels

- Make worker_pool take priority when both features are enabled
- Change cfg conditions from #[cfg(feature = "process_pool")] to
  #[cfg(all(feature = "process_pool", not(feature = "worker_pool")))]
- Pre-allocate TaskChannels to avoid HashMap lookups during send/recv
- Remove unused #[allow(unused)] attributes on napi exports
- Clean up unused helper functions (send_message_to_worker, recv_task_message, etc.)
@xusd320 xusd320 force-pushed the perf/turbopack-node-task-channels branch from 54dabf5 to 1a7fe0d on January 18, 2026 at 10:37
@xusd320 xusd320 merged commit a50e682 into utoo Jan 18, 2026
13 of 26 checks passed
