
Conversation

uwedolinsky
Contributor

Using the tbb::parallel_for API to enqueue NativeCPU kernel invocations when oneTBB is enabled

…_ops_eventswait' into uwe/fasternativecpuenqueue_async_ops_eventswait_onetbb_merge
…queue_async_ops_eventswait_onetbb_merge_parallelfor_exp
@uwedolinsky uwedolinsky marked this pull request as ready for review September 24, 2025 13:49
@uwedolinsky uwedolinsky requested a review from a team as a code owner September 24, 2025 13:49
Comment on lines +284 to 285
auto thread_id = getTBBThreadID();
task(thread_id);
Contributor

Suggested change
- auto thread_id = getTBBThreadID();
- task(thread_id);
+ task(getTBBThreadID());

The variable doesn't seem to be needed any more. Likewise in the other file.

using tbb_nd_executor = nativecpu_tbb_executor;

template <template <class> class RangeTpl, class... T>
static inline void invoke_tbb_parallel_for(const tbb_nd_executor &tbb_ex,
Contributor

I think it would be clearer to remove this function and call tbb::parallel_for directly in the other invoke_tbb_parallel_for overload.
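
For illustration, a minimal sketch of that shape, assuming the executor exposes the usual operator()(const Range &) const body interface that tbb::parallel_for expects; example_executor and the range bounds are hypothetical stand-ins for the adapter's actual types, not the real code:

#include <oneapi/tbb/blocked_range.h>
#include <oneapi/tbb/parallel_for.h>
#include <cstddef>

// Stand-in for nativecpu_tbb_executor: a body object that
// tbb::parallel_for invokes with sub-ranges of the iteration space.
struct example_executor {
  void operator()(const tbb::blocked_range<size_t> &r) const {
    for (size_t i = r.begin(); i != r.end(); ++i) {
      // ... run work-group i ...
    }
  }
};

// With the forwarding helper folded away, the remaining overload
// calls tbb::parallel_for directly.
static inline void invoke_tbb_parallel_for(const example_executor &ex,
                                           size_t begin, size_t end) {
  tbb::parallel_for(tbb::blocked_range<size_t>(begin, end), ex);
}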

IndexT groupsPerThread;
size_t dim = 0;
for (size_t t = 0; t < 3; t++)
  groupsPerThread[t] = numWG[t] / numParallelThreads;
Contributor

This is confusing: groupsPerThread is an array, but after initialization only a single element of it is used.

More to the point, in #19550 I simplified the non-oneTBB splitting across threads to be done over the linear range rather than over any specific dimension, and that would probably be better with oneTBB as well.
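
For illustration, a sketch of what linear-range splitting could look like with oneTBB; numWG and launchWorkGroup are hypothetical stand-ins for the adapter's data, and the grain size is left to oneTBB rather than computed from numParallelThreads:

#include <oneapi/tbb/blocked_range.h>
#include <oneapi/tbb/parallel_for.h>
#include <array>
#include <cstddef>

template <class LaunchFn>
void dispatch_linear(const std::array<size_t, 3> &numWG,
                     LaunchFn &&launchWorkGroup) {
  const size_t total = numWG[0] * numWG[1] * numWG[2];
  tbb::parallel_for(
      tbb::blocked_range<size_t>(0, total),
      [&](const tbb::blocked_range<size_t> &r) {
        for (size_t g = r.begin(); g != r.end(); ++g) {
          // De-linearize the flat index back into a 3D group id.
          const size_t g0 = g % numWG[0];
          const size_t g1 = (g / numWG[0]) % numWG[1];
          const size_t g2 = g / (numWG[0] * numWG[1]);
          launchWorkGroup(g0, g1, g2);
        }
      });
}

This drops the per-dimension groupsPerThread bookkeeping entirely and also handles group counts that don't divide evenly across threads.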
