
Store autocast_gpu_dtype in custom_fwd and custom_bwd for BFloat16 autocast #88029

Closed
crcrpar wants to merge 2 commits from the cuda_custom_fwd_bwd_bf16 branch

Conversation

@crcrpar (Collaborator) commented Oct 29, 2022

As reported in #87979, `custom_bwd` unconditionally uses `torch.float16` for `torch.autograd.Function.backward`, regardless of the `dtype` that autocast used in the forward pass.

Changes:

- store the autocast GPU `dtype` on `args[0]` (the `ctx` object) in `custom_fwd` and reuse it in `custom_bwd` (see the sketch below)
- update the tests to check the `dtype` of intermediate result tensors that are outputs of autocast-compatible `torch` functions
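A rough sketch of the mechanism, ignoring the `cast_inputs` handling the real decorators also perform; the `_dtype` and `_fwd_used_autocast` attribute names here are illustrative, not necessarily the ones used in the actual patch:

```python
import functools

import torch


def custom_fwd(fwd):
    @functools.wraps(fwd)
    def decorate_fwd(*args, **kwargs):
        ctx = args[0]
        # Record whether autocast was enabled and which GPU dtype it was
        # using (torch.bfloat16 inside a bf16 autocast region).
        ctx._fwd_used_autocast = torch.is_autocast_enabled()
        ctx._dtype = torch.get_autocast_gpu_dtype()
        return fwd(*args, **kwargs)
    return decorate_fwd


def custom_bwd(bwd):
    @functools.wraps(bwd)
    def decorate_bwd(*args, **kwargs):
        ctx = args[0]
        # Re-enter autocast for backward with the dtype recorded in forward,
        # instead of unconditionally falling back to torch.float16.
        with torch.cuda.amp.autocast(enabled=ctx._fwd_used_autocast,
                                     dtype=ctx._dtype):
            return bwd(*args, **kwargs)
    return decorate_bwd
```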

cc @ptrblck @ngimel
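For reference, a minimal repro in the style of the autocast custom-function example from the PyTorch AMP docs, assuming a CUDA device is available. Inside a `torch.bfloat16` autocast region, the `mm` calls in `backward` should now also run in `bfloat16` instead of being forced to `float16`:

```python
import torch
from torch.cuda.amp import custom_bwd, custom_fwd


class MyMM(torch.autograd.Function):
    @staticmethod
    @custom_fwd
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)
        return a.mm(b)

    @staticmethod
    @custom_bwd
    def backward(ctx, grad):
        a, b = ctx.saved_tensors
        # mm is autocast-compatible, so these intermediates take the dtype
        # of the autocast region that custom_bwd re-enters.
        grad_a = grad.mm(b.t())
        grad_b = a.t().mm(grad)
        print(grad_a.dtype, grad_b.dtype)  # torch.bfloat16 with this change
        return grad_a, grad_b


if torch.cuda.is_available():
    a = torch.randn(4, 4, device="cuda", requires_grad=True)
    b = torch.randn(4, 4, device="cuda", requires_grad=True)
    with torch.autocast("cuda", dtype=torch.bfloat16):
        out = MyMM.apply(a, b)
    out.sum().backward()
```

Previously the printed dtypes would be `torch.float16`, because `custom_bwd` re-enabled autocast with the default fast dtype rather than the one active during forward.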

Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
@pytorch-bot (bot) commented Oct 29, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/88029

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 4b580fc:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@ngimel (Collaborator) left a comment


Looks good, thanks for the test cleanup!

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Oct 29, 2022
Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
@crcrpar (Collaborator, Author) commented Oct 31, 2022

@pytorchbot merge

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@crcrpar crcrpar deleted the cuda_custom_fwd_bwd_bf16 branch October 31, 2022 23:20
kulinseth pushed a commit to kulinseth/pytorch that referenced this pull request on Nov 5, 2022:
Store autocast_gpu_dtype in custom_fwd and custom_bwd for BFloat16 autocast (pytorch#88029)

As per pytorch#87979, `custom_bwd` seems to forcefully use `torch.float16` for `torch.autograd.Function.backward` regardless of the `dtype` used in the forward.

Changes:
- store the `dtype` in `args[0]`
- update tests to confirm the dtype of intermediate result tensors that are outputs of autocast compatible `torch` functions

cc @ptrblck @ngimel
Pull Request resolved: pytorch#88029
Approved by: https://github.com/ngimel
kulinseth pushed a commit to kulinseth/pytorch that referenced this pull request on Dec 10, 2022:
Store autocast_gpu_dtype in custom_fwd and custom_bwd for BFloat16 autocast (pytorch#88029) (same commit message as above)
Labels: ciflow/trunk (Trigger trunk jobs on your pull request), Merged, open source
4 participants