support kl_div function with bfloat16 #77375
Labels
enhancement
Not as big of a feature, but technically not a bug. Should be easy to fix
module: bfloat16
module: nn
Related to torch.nn
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Comments
cc @ptrblck, can someone send a PR implementing this please?
@Aidyn-A could you take a look at this, please?
Sure, I will take care of this.
facebook-github-bot pushed a commit that referenced this issue on May 26, 2022
Summary: This PR adds a feature requested in issue #77375. `kl_div_backward_cuda` now supports `bfloat16`.
cc ngimel ptrblck rosrad
Pull Request resolved: #77676
Approved by: https://github.com/jbschlosser
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/f37ce948ff25bf70a52ab4b327c82925ddb4aa86
Reviewed By: mehtanirav
Differential Revision: D36668740
fbshipit-source-id: 0f171ac2fdb66931f0a7ffe73a97517fe2abad02
Thanks a lot for the timely support.
Closing as addressed in #77676.
🚀 The feature, motivation and pitch
We are trying to switch our model training from fp16 to bfloat16 precision. This change works fine for the CE loss function, but it fails on the kl_div function because the backward kernel is not implemented for bfloat16.
Here is the error log:
"kl_div_backward_cuda" not implemented for 'BFloat16'
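A minimal reproduction sketch of the report above. The exact model code is not given in the issue, so the tensor shapes and `batchmean` reduction here are illustrative assumptions; on PyTorch builds prior to the fix in #77676, the `backward()` call raised the `"kl_div_backward_cuda" not implemented for 'BFloat16'` error when run on a CUDA device.

```python
import torch
import torch.nn.functional as F

if torch.cuda.is_available():
    # Hypothetical shapes (4 samples, 10 classes); the issue does not
    # specify the real model's dimensions.
    logits = torch.randn(4, 10, device="cuda", dtype=torch.bfloat16,
                         requires_grad=True)
    log_probs = F.log_softmax(logits, dim=-1)
    targets = torch.softmax(
        torch.randn(4, 10, device="cuda", dtype=torch.bfloat16), dim=-1)

    loss = F.kl_div(log_probs, targets, reduction="batchmean")
    # Before the fix, the line below failed with:
    #   RuntimeError: "kl_div_backward_cuda" not implemented for 'BFloat16'
    loss.backward()
```

The forward pass already worked in bfloat16; only the CUDA backward kernel was missing, which is why training (not inference) hit the error.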
Alternatives
No response
Additional context
No response
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345