[ROCm] torch.cuda.is_bf16_supported() returns True #80410
Conversation
✅ No failures (0 pending) as of commit 390f75c (more details on the Dr. CI page).
💚 Looks good so far! There are no failures yet. 💚
This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
This is NOT good :( I don't know how I missed it. Thanks @jeffdaily.
This issue was reported by an internal user who was trying to run an FP32 model converted to BF16 and hit an error because of the above API. I didn't find any unit test actively exercising this API for the ROCm case; it is only used in the TEST_CUDA scenario. A minimal sketch of the scenario is below.
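For illustration, a minimal sketch of that scenario, assuming a hypothetical placeholder model (the real user model isn't shown in this thread):

```python
import torch

# Hypothetical repro of the reported scenario: convert an FP32 model to
# BF16 only when the device reports BF16 support. Before this fix, the
# check returned False on ROCm, so the BF16 path was never taken there.
model = torch.nn.Linear(16, 16).cuda()  # placeholder FP32 model

if torch.cuda.is_bf16_supported():
    model = model.to(torch.bfloat16)  # FP32 -> BF16 conversion
else:
    raise RuntimeError("BF16 reported as unsupported on this device")
```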
LGTM. The CI failures are not related to this change. Rebase requested to aid in merging.
@pytorchbot rebase
You don't have permissions to rebase this PR; only the PR author and pytorch organization members may rebase this PR.
@pytorchbot rebase
@pytorchbot successfully started a rebase job. Check the current status here.
Rebase failed due to Command
Raised by https://github.com/pytorch/pytorch/actions/runs/2778258239
Force-pushed from b405ff6 to 79cdf1f (compare).
Force-pushed from 79cdf1f to 390f75c (compare).
@malfet ,
@pytorchbot merge
@pytorchbot successfully started a merge job. Check the current status here.
Hey @pruthvistony.
Summary: `torch.cuda.is_bf16_supported()` returns False on ROCm, which is not correct, since BF16 is supported on all AMD GPU archs: gfx906, gfx908, and gfx90a. cc @jithunnair-amd
Pull Request resolved: #80410
Approved by: https://github.com/jeffdaily, https://github.com/malfet
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/b57188760be857a9d4c49b5dfa2efd1f78c06af8
Reviewed By: kit1980
Differential Revision: D38394982
fbshipit-source-id: 036dbaa9eb1b3e62ca3dcaf0b61127dc4d981f32
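For context, the fix amounts to short-circuiting the support check when running on ROCm, since all the listed AMD archs handle BF16. A hedged sketch of that gate (not the exact diff; the CUDA-side heuristic here is illustrative):

```python
import torch

def is_bf16_supported_sketch() -> bool:
    # On ROCm builds torch.version.hip is set; BF16 is supported on all
    # ROCm-supported AMD archs (gfx906, gfx908, gfx90a), so return True.
    if torch.version.hip is not None:
        return True
    # Illustrative CUDA-side heuristic: BF16 needs CUDA >= 11 and a device
    # with compute capability >= 8.0 (Ampere or newer).
    cuda_version = torch.version.cuda
    if cuda_version is None or int(cuda_version.split(".")[0]) < 11:
        return False
    props = torch.cuda.get_device_properties(torch.cuda.current_device())
    return props.major >= 8
```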