Use `fbgemm` for quantize/dequantize ops #19500
Conversation
Differential Revision: D15014561 Differential Version: 80114220
Differential Revision: D15014561 Differential Version: 80202432
Differential Revision: D15014561 Differential Version: 80202583
    tensor.options().dtype(at::kQInt8),
    intrusive_from_this());
auto qvd = qv.data<qint8>();
tensor.contiguous();
Nice catch. Would be good to have a test for this case.
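For context on the catch above: `contiguous()` returns a contiguous tensor rather than modifying the receiver in place, so calling it without using the result (as in the `tensor.contiguous();` line being reviewed) is a no-op. A minimal Python-side sketch of the same pitfall:

```python
import torch

# A transposed view is typically non-contiguous.
t = torch.arange(6).reshape(2, 3).t()
assert not t.is_contiguous()

t.contiguous()      # returns a new contiguous tensor; discarding it changes nothing
assert not t.is_contiguous()

t = t.contiguous()  # the result must be captured
assert t.is_contiguous()
```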
Change looks fine to me. Resigning in favor of other reviewers.
Is the FBGEMM codepath being exercised in CI? What about the non-FBGEMM codepath?
Differential Revision: D15014561 Differential Version: 80387485
I am not sure if CI sets the
This pull request has been merged in 3cc60e5.
Summary: Pull Request resolved: pytorch/pytorch#19500
Changes the `quantize_linear` and `dequantize` ops to an `fbgemm`-based implementation.
Reviewed By: jianyuh, jerryzh168
Differential Revision: D15014561
fbshipit-source-id: b651e69d336b5b08b4a75a4a4eddf46c040a4934
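For readers unfamiliar with the ops being moved to fbgemm: `quantize_linear` and `dequantize` implement per-tensor affine quantization. The sketch below is plain Python for illustration only, not the actual fbgemm kernels; the function names merely mirror the op names, and the int8 range and rounding are assumptions about the scheme.

```python
def quantize_linear(x, scale, zero_point):
    # Affine quantization: q = clamp(round(x / scale) + zero_point, int8 range)
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    # Inverse mapping; exact only up to the rounding error introduced above
    return (q - zero_point) * scale

vals = [-1.0, 0.0, 0.37, 2.5]
scale, zp = 0.1, 0
qs = [quantize_linear(v, scale, zp) for v in vals]        # [-10, 0, 4, 25]
recovered = [dequantize(q, scale, zp) for q in qs]
```

Note that values outside the representable range saturate at the clamp, which is why `dequantize(quantize_linear(x, ...))` only approximates `x` within `[-128 * scale, 127 * scale]` shifted by the zero point.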
Stack:
:black_circle: #19500 Use `fbgemm` for quantize/dequantize ops 💚
Changes the `quantize_linear` and `dequantize` ops to an `fbgemm`-based implementation. Differential Revision: D15014561