
Add Half support for CPU autocast on eager mode #112484

Closed · wants to merge 1 commit

Conversation

@CaoE (Collaborator) commented Oct 31, 2023

Add Half (float16) support for CPU autocast on eager mode, since common operators already have Half support on CPU. See #96093.
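
For context, this lets the eager-mode CPU autocast context manager accept torch.float16 as a target dtype (previously only torch.bfloat16 was accepted). A minimal usage sketch, with a toy module and input as placeholders:

    import torch

    # Toy module and input, purely illustrative.
    model = torch.nn.Linear(8, 8)
    x = torch.randn(2, 8)

    # With this change, dtype=torch.float16 is accepted for CPU autocast in eager mode.
    with torch.autocast(device_type="cpu", dtype=torch.float16):
        y = model(x)

    # Ops on autocast's lower-precision list (e.g. linear/matmul) are expected
    # to produce float16 outputs inside this region.
    print(y.dtype)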

cc @mcarilli @ptrblck @leslie-fang-intel @jgong5

@pytorch-bot bot commented Oct 31, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/112484

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit f132587 with merge base 5a96a42:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@CaoE added the module: half, ciflow/trunk, and ciflow/periodic labels Oct 31, 2023
@leslie-fang-intel (Collaborator) commented:

@CaoE Looks like there are some UT failures.

    if self.fast_dtype not in supported_dtype and enabled:
        error_message = "In CPU autocast, but the target dtype is not supported. Disabling autocast.\n"
        error_message += (
-           "CPU Autocast only supports dtype of torch.bfloat16 currently."
+           "CPU Autocast only supports dtype of torch.bfloat16 and torch.float16 currently."
Review comment (Collaborator):

nit: maybe directly read supported_dtype to format this message.
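
A minimal sketch of that suggestion, assuming supported_dtype holds the dtypes CPU autocast accepts and that the surrounding code warns and disables autocast as in the snippet above (names follow the snippet, not necessarily the merged code):

    if self.fast_dtype not in supported_dtype and enabled:
        # Derive the message from supported_dtype so it never drifts from the
        # actual allow-list when new dtypes are added.
        allowed = ", ".join(str(d) for d in supported_dtype)
        error_message = (
            "In CPU autocast, but the target dtype is not supported. Disabling autocast.\n"
            f"CPU Autocast only supports dtypes of {allowed} currently."
        )
        warnings.warn(error_message)
        enabled = False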

@CaoE changed the title from Add Half support for autocast on CPU to Add Half support for CPU autocast on eager mode Nov 21, 2023
@CaoE marked this pull request as ready for review November 21, 2023 01:23
@CaoE requested a review from ezyang November 21, 2023 01:24
@ezyang (Contributor) commented Nov 21, 2023

@pytorchbot merge

@pytorchmergebot (Collaborator) commented:
Merge failed

Reason: This PR needs a release notes: label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.


@CaoE (Collaborator, Author) commented Nov 21, 2023

@ezyang This PR needs a release notes: label. Do you know which label is suitable?

@ezyang added the release notes: performance_as_product and topic: new features labels Nov 21, 2023
@ezyang (Contributor) commented Nov 21, 2023

@pytorchbot merge

@pytorchmergebot (Collaborator) commented:
Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

Labels: ciflow/periodic, ciflow/trunk, Merged, module: amp (automated mixed precision), module: half, open source, release notes: performance_as_product, topic: new features
Projects
Status: Done
Development

Successfully merging this pull request may close these issues.

None yet

7 participants