Create a quantized in-place version CUDA ReLU function, relu_quantized_cuda_. #85670
Conversation
…d_cuda_. [ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85670
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures, 1 Pending as of commit a37676b. This comment was automatically generated by Dr. CI and updates every 15 minutes.
…lu_quantized_cuda_." [ghstack-poisoned]
…lu_quantized_cuda_." Summary: this and #85669 are to allow the relu function to run on a quantized tensor on cuda. That is torch.relu(qa) for a quantized tensor qa on cuda. Test Plan: python test/test_quantization.py [ghstack-poisoned]
Can you link the previous PR (the one with the review history) in the summary?
]
devices = ["cpu", "cuda"] if TEST_CUDA else ["cpu"]
for device in devices:
    # Only test the non-in-place version relu quantized cuda,
I thought I left a review comment here but can't find it. I'm wondering why we added this in the previous PR and remove it here?
For the previous PR, we only had the non-in-place qrelu, so I added this to test only the non-in-place qrelu and avoid the in-place qrelu.
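For reference, here is a minimal sketch of what such a device-parameterized check could look like. It is illustrative rather than the actual test in test/test_quantization.py: the helper name and tensor parameters are made up, torch.cuda.is_available() stands in for the TEST_CUDA flag used in the diff, and it assumes torch.quantize_per_tensor can produce a quantized tensor on the target device in this build.

```python
import torch

def _check_qrelu_non_inplace():  # hypothetical helper, not the PR's test
    devices = ["cpu", "cuda"] if torch.cuda.is_available() else ["cpu"]
    for device in devices:
        x = torch.randn(4, 4, device=device)
        # Per-tensor affine quantization; scale/zero_point are arbitrary here.
        qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)
        # Non-in-place quantized relu; the in-place variant is what this PR adds.
        qy = torch.relu(qx)
        # relu on the quantized tensor should agree with relu on the dequantized values.
        torch.testing.assert_close(qy.dequantize(), torch.relu(qx.dequantize()))
```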
@ pytorchbot rebase
@pytorchbot rebase
@pytorchbot successfully started a rebase job. Check the current status here.
Rebase failed due to
Summary: this and #85670 are to allow the relu function to run on a quantized tensor on cuda, i.e. torch.relu(qa) for a quantized tensor qa on cuda. Test Plan: python test/test_quantization.py. Previous PR that has been reverted: #85502. Pull Request resolved: #85669. Approved by: https://github.com/dzdang
@pytorchbot rebase
@pytorchbot successfully started a rebase job. Check the current status here.
Rebase failed due to
@pytorchbot merge
@pytorchbot successfully started a merge job. Check the current status here.
Hey @fufeisi.
…d_cuda_. (#85670) Summary: this and #85669 are to allow the relu function to run on a quantized tensor on cuda, i.e. torch.relu(qa) for a quantized tensor qa on cuda. Test Plan: python test/test_quantization.py. Previous PR that has been reverted: #85502. Pull Request resolved: #85670. Approved by: https://github.com/dzdang, https://github.com/z-a-f
Stack from ghstack (oldest at bottom):
Summary:
This PR and "Create a quantized non-in-place version CUDA ReLU function" (#85669) allow the relu function to run on a quantized tensor on CUDA, i.e. torch.relu(qa) for a quantized tensor qa on CUDA.
Test Plan:
python test/test_quantization.py
Previous PR that has been reverted: #85502.
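A minimal usage sketch of what this stack enables follows. It assumes a CUDA build in which torch.quantize_per_tensor can produce a quantized tensor directly on the GPU (if not, the tensor may need to be quantized on CPU first); the scale and zero_point values are arbitrary.

```python
import torch

# Quantized tensor on CUDA (per-tensor affine, arbitrary scale/zero_point).
x = torch.randn(2, 3, device="cuda")
qa = torch.quantize_per_tensor(x, scale=0.05, zero_point=0, dtype=torch.qint8)

out = torch.relu(qa)   # non-in-place quantized CUDA relu (#85669)
torch.relu_(qa)        # in-place quantized CUDA relu added by this PR (#85670)

print(out.dequantize())  # float tensor with negative values clamped to 0
```

Before this stack, torch.relu on a quantized CUDA tensor was not supported; these PRs add the CUDA kernels that make the calls above work.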