Fix cvtfp32_bf16 #41280
Conversation
💊 CI failures summary (Dr. CI), as of commit f42106f: 💚 Looks good so far! There are no failures yet.
This is cool, and a nice bonus on the spelling correction, but can you elaborate on why this solves the issue? Also, as for testing, would you like to wait until #41249 goes in? That's the new test suite I'm developing that identified this issue. It should be available in a few days.

@mruberry I've provided the explanation in the PR description, haven't I? And as you can see, I'm not adding the test but simply waiting for you to land it on your end.
Looks good, but can you please add a test covering the case that was incorrect before?
This can be tested by adding an OpInfo for sign now that #42965 is in. Ping me if you'd like an example.
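For reference, a rough sketch of what such an OpInfo entry might look like. This is not the actual op_db entry; the constructor keyword names used here (ref, dtypes, dtypesIfCPU, dtypesIfCUDA) are assumptions about the OpInfo API and may not match every PyTorch version.

```python
# Hypothetical sketch only, not the actual op_db entry. The UnaryUfuncInfo
# keyword names below (ref, dtypes, dtypesIfCPU, dtypesIfCUDA) are assumed
# and may differ between PyTorch versions.
import numpy as np
import torch
from torch.testing._internal.common_methods_invocations import UnaryUfuncInfo

sign_opinfo = UnaryUfuncInfo(
    'sign',
    ref=np.sign,
    dtypes=(torch.float32, torch.float64),
    # bfloat16 on CPU exercises the cvtfp32_bf16 / blendv path fixed here
    dtypesIfCPU=(torch.float32, torch.float64, torch.bfloat16),
    dtypesIfCUDA=(torch.float16, torch.float32, torch.float64),
)
```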
Force-pushed from 7492199 to d640a68.
@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
I've added the test, although in the old format.
Nice catch
test/test_torch.py (outdated)
device_type = torch.device(device).type
... include_half=(device_type=='cuda')
... include_bfloat16=(device_type=='cpu')
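As a rough standalone illustration of that gating, a minimal sketch follows; the helper names below are made up for illustration and are not the actual test-suite helpers.

```python
# Minimal sketch of device-gated dtype selection for a sign test.
# Helper names here are illustrative, not the actual test_torch.py helpers.
import torch

def dtypes_for_sign_test(device):
    device_type = torch.device(device).type
    dtypes = [torch.float32, torch.float64]
    if device_type == 'cuda':
        dtypes.append(torch.half)      # half is only exercised on CUDA
    if device_type == 'cpu':
        dtypes.append(torch.bfloat16)  # bfloat16 on CPU hits cvtfp32_bf16
    return dtypes

def check_sign(device):
    for dtype in dtypes_for_sign_test(device):
        x = torch.tensor([-2.0, 0.0, 3.0], device=device, dtype=dtype)
        expected = torch.tensor([-1.0, 0.0, 1.0], device=device, dtype=dtype)
        assert torch.equal(torch.sign(x), expected), dtype

check_sign('cpu')
```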
Thanks @malfet!
For the `Vec256<bfloat16>::blendv()` operator to work correctly, float32 -nan (0xffffffff) must be converted to bfloat16 -nan (0xffff). TODO: fix the float32 +/-nan conversion, i.e. float32 nan (0x7fffffff) must be converted to bfloat16 nan (0x7fff).
Force-pushed from d640a68 to f42106f.
@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Codecov Report
@@ Coverage Diff @@
## master #41280 +/- ##
=======================================
Coverage 80.58% 80.59%
=======================================
Files 1873 1873
Lines 202711 202711
=======================================
+ Hits 163360 163368 +8
+ Misses 39351 39343 -8
For the `Vec256<bfloat16>::blendv()` operator to work correctly, float32 -nan (0xffffffff) must be converted to bfloat16 -nan (0xffff), but cvtfp32_bf16 converts -nan to nan (0x7fc0).
TODO: fix the float32 +/-nan conversion, i.e. float32 nan (0x7fffffff) must be converted to bfloat16 nan (0x7fff).
Closes #41238
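To make the bit-level reasoning above concrete, here is a small standalone sketch in pure Python (using struct for bit reinterpretation). It is not the actual cvtfp32_bf16 code, which is a vectorized C++ routine; it only illustrates why canonicalizing every NaN to 0x7fc0 breaks a blendv-style mask that keys off the most significant bit.

```python
# Pure-Python illustration (not the real cvtfp32_bf16): an all-ones float32
# mask (0xffffffff) is a negative NaN. Truncating to the upper 16 bits keeps
# the sign bit (0xffff), so a blendv-style mask still has its MSB set; mapping
# every NaN to the canonical quiet NaN 0x7fc0 clears that bit.
import math
import struct

def f32_from_bits(bits):
    return struct.unpack('<f', struct.pack('<I', bits))[0]

def bf16_bits_truncate(f32_bits):
    # keep sign, exponent and the top 7 mantissa bits
    return f32_bits >> 16

def bf16_bits_canonical_nan(f32_bits):
    # conversion that forgets the sign of NaNs (the behaviour being fixed)
    if math.isnan(f32_from_bits(f32_bits)):
        return 0x7FC0
    return f32_bits >> 16

mask = 0xFFFFFFFF                           # all-ones float32 mask, i.e. -nan
print(hex(bf16_bits_truncate(mask)))        # 0xffff: MSB set, mask still works
print(hex(bf16_bits_canonical_nan(mask)))   # 0x7fc0: MSB cleared, mask broken
```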