fix: half reduction with multiple sub-iterators #85596
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85596.
Note: links to docs will display an error until the docs builds have completed. ✅ No Failures, 3 Pending as of commit 7932595. This comment was automatically generated by Dr. CI and updates every 15 minutes.
@ngimel does this make sense? Thanks.
@@ -3373,6 +3373,15 @@ def to_numpy(input):

         self.assertEqual(actual, expected, msg, exact_dtype=exact_dtype)

+    @onlyCUDA
+    @largeTensorTest("8GB")
+    @dtypes(torch.half, torch.chalf, torch.bfloat16)
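For context, a minimal sketch of the kind of regression test these decorators would guard; the function name, tensor shape, and values are illustrative assumptions, not the PR's verbatim code. A tensor with more than 2**31 - 1 elements cannot be covered by a single 32-bit-indexed kernel, so TensorIterator splits the reduction into sub-iterators, and the partial results should still cancel exactly:

```python
import torch

def check_large_half_reduction(device="cuda", dtype=torch.half):
    # numel exceeds INT_MAX, forcing TensorIterator to split the reduction
    # across multiple 32-bit-indexable sub-iterators (hypothetical shape,
    # ~4 GiB of half data, within the 8GB largeTensorTest budget)
    t = torch.ones(2 ** 31, device=device, dtype=dtype)
    t[2 ** 30:] = -1  # the second half cancels the first
    assert torch.sum(t).item() == 0
```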
Bfloat16 does not suffer from this, as it has the same range as float.
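For reference, the dtype ranges behind that observation (a quick check, not part of the PR):

```python
import torch

# float16 has a 5-bit exponent; its largest finite value is only 65504,
# so partial sums kept in half precision overflow easily.
print(torch.finfo(torch.half).max)      # 65504.0

# bfloat16 shares float32's 8-bit exponent, so its range matches float.
print(torch.finfo(torch.bfloat16).max)  # ~3.39e+38
print(torch.finfo(torch.float).max)     # ~3.40e+38
```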
/easycla As part of the transition to the PyTorch Foundation, this project now requires contributions be covered under the new CLA. See #85559 for additional details. This comment will trigger a new check of this PR. If you are already covered, you will simply see a new "EasyCLA" check that passes. If you are not covered, a bot will leave a new comment with a link to sign.
@pytorchbot rebase
@pytorchbot successfully started a rebase job. Check the current status here.
Successfully rebased 73c912a to a223f22.
…half/reductions: force-pushed a223f22 to fc76558.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Hey @kshitij12345.
@@ -3382,6 +3383,23 @@ def to_numpy(input):

         self.assertEqual(actual, expected, msg, exact_dtype=exact_dtype)

+    @onlyCUDA
+    @largeTensorTest("8GB")
+    @dtypes(torch.half, torch.chalf, torch.bfloat16)
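To make the failure mode concrete, a toy sketch (not PyTorch's actual kernel code): if partial results are combined in half precision, the running sum can get stuck well below the true result even when that result fits in half; accumulating in float and downcasting once at the end is exact here:

```python
import torch

n = 4096  # the true sum (4096) is exactly representable in half

# half accumulator: once the running sum reaches 2048, adding 1.0 rounds
# back to 2048 (the spacing between consecutive half values there is 2)
half_acc = torch.tensor(0.0, dtype=torch.half)
one = torch.tensor(1.0, dtype=torch.half)
for _ in range(n):
    half_acc = half_acc + one
print(half_acc)          # tensor(2048., dtype=torch.float16)

# float accumulator with a single final downcast: exact
float_acc = torch.tensor(0.0)
for _ in range(n):
    float_acc = float_acc + 1.0
print(float_acc.half())  # tensor(4096., dtype=torch.float16)
```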
Well, this test fails on large GPUs with the expected error "mean_cuda is not implemented for 'ComplexHalf'".
I see that the skip has been added. Thanks for checking and adding it.
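For completeness, a sketch of what guarding the chalf case could look like; this is an assumption about the shape of the skip, not the PR's code. At the time of this PR, mean had no ComplexHalf kernel on CUDA:

```python
import torch
import pytest

# Requires a CUDA device; "chalf" is torch.complex32. At the time of this
# PR, mean_cuda raised for ComplexHalf, so the test asserts the error
# instead of comparing values (hypothetical guard, small tensor for brevity).
t = torch.zeros(8, dtype=torch.chalf, device="cuda")
with pytest.raises(RuntimeError, match="not implemented for 'ComplexHalf'"):
    torch.mean(t)
```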
Fixes #74438
TODO: