Fix running parallel_reduce with TeamPolicy for large ranges #4532
Conversation
Retest this please.
This seems to work on all backends now. Note that I had to change the argument type of
LGTM
Fixes #4531. Specifically, the lines fix the issue (for CUDA and HIP). The other changes to the types of nwork are just for consistency.