CUDA BFloat16 support of clamp, remainder, lshift, rshift #45247
Conversation
💊 CI failures summary and remediations
As of commit de95b58 (more details on the Dr. CI page):
XLA failure: Job pytorch_xla_linux_bionic_py3_6_clang9_test is failing. Please create an issue with title prefixed by 🚧.
1 fixed upstream failure: These were probably caused by upstream breakages that were already fixed. Please rebase on the …
💊 CI failures summary and remediations
As of commit 9ae9ad1 (more details on the Dr. CI page): Commit 9ae9ad1 was recently pushed. Waiting for builds... This comment was automatically generated by Dr. CI.
test/test_torch.py (outdated)
('clamp_min', '', _medium_2d, lambda t, d: [1], 1e-2, 1e-2, 1e-5, _types, [torch.bfloat16]),
('clamp_max', '', _medium_2d, lambda t, d: [1], 1e-2, 1e-2, 1e-5, _types, [torch.bfloat16]),
('clamp_min', '', _medium_2d, lambda t, d: [1], 1e-2, 1e-2, 1e-5,
 torch.testing.get_all_dtypes(include_complex=False, include_bool=False), [torch.bfloat16]),
You can pass include_bfloat16=True here.
added
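For reference, a minimal sketch of what the suggestion amounts to (hypothetical snippet; it assumes torch.testing.get_all_dtypes accepts an include_bfloat16 keyword, as the comment above indicates):

import torch

# Build the dtype list with bfloat16 included directly via the keyword,
# instead of appending torch.bfloat16 as a separate list.
dtypes = torch.testing.get_all_dtypes(include_complex=False, include_bool=False,
                                      include_bfloat16=True)
assert torch.bfloat16 in dtypes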
AT_DISPATCH_FLOATING_TYPES_AND(ScalarType::Half, iter.dtype(), "lshift_cuda", [&]() {
iter.dtype() == ScalarType::Half ||
iter.dtype() == ScalarType::BFloat16) {
AT_DISPATCH_FLOATING_TYPES_AND2(ScalarType::Half, ScalarType::BFloat16, iter.dtype(), "lshift_cuda", [&]() {
how is this being tested?
It is tested by
('__lshift__', '',
lambda t, d: torch.pow(2, torch.arange(1, 5).to(dtype=_convert_t(t, d), device=d)),
lambda t, d: [2],
1e-3, 1e-5, 1e-3, _signed_types, _cpu_types, False),
('__rshift__', '',
lambda t, d: torch.pow(2, torch.arange(3, 7).to(dtype=_convert_t(t, d), device=d)),
lambda t, d: [2],
1e-3, 1e-5, 1e-3, _signed_types, _cpu_types, False),
where _signed_types is modified in https://github.com/pytorch/pytorch/pull/45247/files#diff-9996665f82f52030836eb8657057cfadR19601-R19604 to add bfloat16.
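As a hedged illustration of what these entries exercise once bfloat16 is in _signed_types (it assumes a CUDA device is available and that __lshift__/__rshift__ on floating-point tensors scale by powers of two, which is what the power-of-two inputs above rely on):

import torch

x = torch.pow(2, torch.arange(1, 5)).to(dtype=torch.bfloat16, device='cuda')  # [2., 4., 8., 16.]
print(x << 2)  # expected roughly tensor([ 8., 16., 32., 64.]) in bfloat16
print(x >> 2)  # expected roughly tensor([0.5000, 1.0000, 2.0000, 4.0000]) in bfloat16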
('remainder', 'value', _small_3d, lambda t, d: [3], 1e-1, 1e-2, 1e-5, _signed_types),
('remainder', 'negative_value', _small_3d, lambda t, d: [-3], 1e-1, 1e-2, 1e-5, _signed_types),
('remainder', 'tensor', _small_3d,
    lambda t, d: [_small_3d(t, d, has_zeros=False)],
    1e-1, 1e-5, 1e-5, _signed_types),
    1e-1, 1e-2, 1e-5, _signed_types),
('remainder', 'negative_tensor', _small_3d,
    lambda t, d: [0 - _small_3d(t, d, has_zeros=False)],
    1e-1, 1e-5, 1e-5, _signed_types),
    1e-1, 1e-2, 1e-5, _signed_types),
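The loosened 1e-2 tolerance (presumably the bfloat16 threshold in these tuples) is consistent with bfloat16's 8-bit significand, which gives roughly two to three significant decimal digits. A hypothetical illustration on a CUDA device, comparing against a float64 reference:

import torch

a = (torch.rand(1000, device='cuda') * 10 + 0.1).to(torch.bfloat16)
ref = torch.remainder(a.double(), 3.0)   # reference computed in float64
out = torch.remainder(a, 3.0).double()   # result computed with the bfloat16 kernel
# The gap is dominated by bfloat16 rounding: on the order of 1e-2 for values
# near 3, so a 1e-5 tolerance would be unrealistically tight.
print((out - ref).abs().max())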
None of these is testing BFloat16 because
_signed_types = [
torch.half, torch.float, torch.double,
torch.int8, torch.short, torch.int, torch.long
]
The change is here: https://github.com/pytorch/pytorch/pull/45247/files#diff-9996665f82f52030836eb8657057cfadR19601-R19604
I added bfloat16 to _signed_types.
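For readers following along, a hypothetical reconstruction of the referenced change (the exact ordering lives in the linked diff):

import torch

_signed_types = [
    torch.half, torch.bfloat16, torch.float, torch.double,
    torch.int8, torch.short, torch.int, torch.long
]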
Please add bfloat16 tests.
Update the description to something more meaningful.
Update the title to mention BFloat16.
Codecov Report
@@           Coverage Diff           @@
##           master   #45247   +/-   ##
=======================================
  Coverage    68.25%   68.25%
=======================================
  Files          410      410
  Lines        53232    53232
=======================================
+ Hits         36335    36336       +1
+ Misses       16897    16896       -1
Continue to review full report at Codecov.
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Add CUDA BFloat16 support of clamp, remainder, lshift, rshift