Add Half support for CPU autocast on eager mode #112484
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/112484
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit f132587 with merge base 5a96a42.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@CaoE Looks like there are some UT failures.
torch/amp/autocast_mode.py
Outdated
 if self.fast_dtype not in supported_dtype and enabled:
     error_message = "In CPU autocast, but the target dtype is not supported. Disabling autocast.\n"
     error_message += (
-        "CPU Autocast only supports dtype of torch.bfloat16 currently."
+        "CPU Autocast only supports dtype of torch.bfloat16 and torch.float16 currently."
nit: maybe read `supported_dtype` directly to format this message.
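A minimal sketch of the reviewer's suggestion: build the warning text from `supported_dtype` itself so the message never drifts from the actual list. Variable names follow the surrounding diff; the exact code merged in the PR may differ.

```python
import warnings

if self.fast_dtype not in supported_dtype and enabled:
    # Derive the message from supported_dtype so the warning stays
    # in sync with the dtypes actually accepted by CPU autocast.
    error_message = (
        "In CPU autocast, but the target dtype is not supported. Disabling autocast.\n"
        f"CPU Autocast only supports dtypes of "
        f"{', '.join(str(d) for d in supported_dtype)} currently."
    )
    warnings.warn(error_message)
    enabled = False  # assumed fallback, matching "Disabling autocast" above
```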
@pytorchbot merge
Merge failed. Reason: This PR needs a `release notes:` label. If your changes are user facing and intended to be a part of release notes, please use a label starting with `release notes:`. If not, please add the `topic: not user facing` label. To add a label, you can comment to pytorchbot, for example `@pytorchbot label "topic: not user facing"`. For more information, see the PyTorch AutoLabel Bot wiki. Details for Dev Infra team: raised by workflow job.
@ezyang This PR needs a `release notes:` label.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Add Half support for CPU autocast on eager mode, since common operators have Half support on CPU.
#96093.
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
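For reference, a minimal usage sketch of what this PR enables: eager-mode CPU autocast with `dtype=torch.float16` through the existing `torch.autocast` API. The linear module here is just an illustration, not code from the PR.

```python
import torch

model = torch.nn.Linear(8, 8)  # illustrative module
x = torch.randn(2, 8)

# Before this PR, CPU autocast in eager mode only accepted torch.bfloat16;
# with it, torch.float16 is also a valid target dtype.
with torch.autocast(device_type="cpu", dtype=torch.float16):
    y = model(x)

print(y.dtype)  # torch.float16 for autocast-eligible ops such as Linear
```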