
fix: half reduction with multiple sub-iterators #85596

Closed
wants to merge 14 commits

Conversation

@kshitij12345 (Collaborator) commented Sep 24, 2022

Fixes #74438

TODO:

  • Add test
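
For context, #74438 reports that CUDA reductions (e.g. mean) on half tensors large enough to be split across multiple sub-iterators keep the intermediate per-sub-iterator results in half precision, so the final value can be wrong. A minimal sketch of the failure mode (the tensor size is an illustrative assumption; it just needs to be large enough to trigger the split, and the run needs a GPU with several GB of free memory):

```python
import torch

# Assumed-large tensor so that TensorIterator splits the reduction
# into multiple sub-iterators (size chosen for illustration).
x = torch.ones(2**31 + 1, dtype=torch.half, device="cuda")

# Before this fix, intermediates between sub-iterators were kept in half,
# which can overflow or round badly; the float32 path is the reference.
print(x.mean())          # should be 1.0
print(x.float().mean())  # float32 reference: 1.0
```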

@pytorch-bot bot commented Sep 24, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85596

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures, 3 Pending

As of commit 7932595:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@kshitij12345 marked this pull request as ready for review September 26, 2022 13:04
@kshitij12345 removed the request for review from mruberry September 26, 2022 13:04
@zou3519 added the triaged label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Sep 26, 2022
@kshitij12345 (Collaborator, Author) commented:

@ngimel does this make sense? Thanks.

@@ -3373,6 +3373,15 @@ def to_numpy(input):

self.assertEqual(actual, expected, msg, exact_dtype=exact_dtype)

@onlyCUDA
@largeTensorTest("8GB")
@dtypes(torch.half, torch.chalf, torch.bfloat16)
@kshitij12345 (Collaborator, Author) commented on the diff:

Bfloat16 does not suffer from this, as it has the same range as float.
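
For reference, the dtype ranges behind this remark can be checked with torch.finfo: bfloat16 keeps float32's 8-bit exponent, so its representable range matches float32, while float16 tops out at 65504:

```python
import torch

# float16 has a 5-bit exponent: the largest finite value is 65504,
# so intermediate sums overflow easily.
print(torch.finfo(torch.float16).max)   # 65504.0
# bfloat16 keeps float32's 8-bit exponent; its range matches float32.
print(torch.finfo(torch.bfloat16).max)  # ~3.39e38
print(torch.finfo(torch.float32).max)   # ~3.40e38
```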

@facebook-github-bot (Contributor) commented:

/easycla

As part of the transition to the PyTorch Foundation, this project now requires contributions be covered under the new CLA. See #85559 for additional details.

This comment will trigger a new check of this PR. If you are already covered, you will simply see a new "EasyCLA" check that passes. If you are not covered, a bot will leave a new comment with a link to sign.

@linux-foundation-easycla bot commented Oct 4, 2022

CLA Signed

The committers are authorized under a signed CLA.

@kshitij12345 (Collaborator, Author) commented:

@pytorchbot rebase

@pytorchmergebot (Collaborator) commented:

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot (Collaborator) commented:

Successfully rebased fix/half/reductions onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout fix/half/reductions && git pull --rebase)

@pytorch-bot bot added the ciflow/trunk label (trigger trunk jobs on your pull request) Oct 10, 2022
@kshitij12345 (Collaborator, Author) commented:

@pytorchbot merge

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@github-actions bot commented:

Hey @kshitij12345.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@@ -3382,6 +3383,23 @@ def to_numpy(input):

self.assertEqual(actual, expected, msg, exact_dtype=exact_dtype)

@onlyCUDA
@largeTensorTest("8GB")
@dtypes(torch.half, torch.chalf, torch.bfloat16)
A Contributor commented on the diff:

Well, this test fails on large GPUs with the expected error: "mean_cuda" is not implemented for 'ComplexHalf'.
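
For reference, a minimal snippet that reproduces that kind of error (the shape is illustrative; this assumes, per the comment above, that mean was unimplemented for complex half on CUDA at the time):

```python
import torch

# A small tensor suffices: the failure is the missing chalf kernel, not the
# size. The test only runs on large GPUs because of @largeTensorTest("8GB").
x = torch.ones(8, dtype=torch.chalf, device="cuda")
x.mean()  # RuntimeError: "mean_cuda" not implemented for 'ComplexHalf'
```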

@kshitij12345 (Collaborator, Author) replied:

I see that a skip has been added. Thanks for checking and adding it.

Labels

  • ciflow/trunk (trigger trunk jobs on your pull request)
  • cla signed
  • Merged
  • open source
  • release notes: cuda (release notes category)
  • triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Projects: none yet
Development

Successfully merging this pull request may close these issues:

  • cuda low-precision reductions on large tensors produce wrong results (#74438)
7 participants