Remove set_quantizer_ from native_functions.yaml #49463
Conversation
`set_quantizer_` takes a `ConstQuantizerPtr` argument, which is supported by neither JIT nor c10. It also doesn't get dispatched (CPU and CUDA share the same implementation), and it is excluded from Python bindings generation. So there is no real reason for it to be in native_functions.yaml.

Removing it unblocks the migration to c10-fullness, since this is an op that would have been hard to migrate. See https://fb.quip.com/QRtJAin66lPN

Differential Revision: [D25587763](https://our.internmc.facebook.com/intern/diff/D25587763/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook-specific changes or comments; please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D25587763/)!

[ghstack-poisoned]
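The rationale above can be sketched in a self-contained way. The types below (`Tensor`, `Quantizer`, `ConstQuantizerPtr`) are simplified stand-ins for ATen's real classes, not the actual implementation; the point is that an op whose CPU and CUDA implementations are identical can live as a plain C++ function declared in a header, with no dispatcher entry in native_functions.yaml:

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Illustrative stubs only -- the real Quantizer and Tensor live in ATen and
// are far richer; the names mirror the PR but nothing else is authoritative.
struct Quantizer {
  double scale;
};
using ConstQuantizerPtr = std::shared_ptr<const Quantizer>;  // assumed refcounted const ptr

struct Tensor {
  // Simplified: the real Tensor stores its quantizer inside the TensorImpl.
  mutable ConstQuantizerPtr quantizer_;
};

// A plain free function declared directly in a header, instead of being
// generated from a native_functions.yaml entry. Because CPU and CUDA would
// share this exact implementation, no dispatcher round-trip is needed.
inline void set_quantizer_(const Tensor& self, ConstQuantizerPtr quantizer) {
  self.quantizer_ = std::move(quantizer);
}
```

The design point: codegen and dispatch buy you backend selection and JIT/c10 visibility, neither of which applies here, so a hand-written declaration is simpler.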
```diff
@@ -225,4 +225,6 @@ CAFFE2_API Tensor new_qtensor(
     const TensorOptions& options,
     QuantizerPtr quantizer);
 
+CAFFE2_API void set_quantizer_(const Tensor& self, ConstQuantizerPtr quantizer);
```
FWIW, you didn't have to funcify this; you could have just re-added the method to the manual TensorBody.h. But this is fine too.
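For illustration, the reviewer's alternative (keeping `set_quantizer_` as a `Tensor` method declared by hand in TensorBody.h, rather than "funcifying" it into a free function) can be sketched with stub types. Everything below is a hypothetical simplification, not ATen's real declarations:

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Illustrative stubs; the real classes live in ATen.
struct Quantizer {
  int bits;
};
using ConstQuantizerPtr = std::shared_ptr<const Quantizer>;

class Tensor {
 public:
  // Hand-maintained method declaration, bypassing codegen entirely --
  // the shape of the alternative the reviewer describes.
  void set_quantizer_(ConstQuantizerPtr q) { quantizer_ = std::move(q); }
  ConstQuantizerPtr quantizer() const { return quantizer_; }

 private:
  ConstQuantizerPtr quantizer_;
};
```

Either form keeps the op out of native_functions.yaml; the method form preserves the `tensor.set_quantizer_(...)` call syntax, while the free-function form taken by this PR avoids touching the manual Tensor class at all.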
This pull request has been merged in 26974e6.

Summary:
Pull Request resolved: pytorch#49463

`set_quantizer_` takes a `ConstQuantizerPtr` argument, which is supported by neither JIT nor c10. It also doesn't get dispatched (CPU and CUDA share the same implementation), and it is excluded from Python bindings generation. So there is no real reason for it to be in native_functions.yaml. Removing it unblocks the migration to c10-fullness, since this is an op that would have been hard to migrate. See https://fb.quip.com/QRtJAin66lPN

ghstack-source-id: 118710663
Test Plan: waitforsandcastle
Reviewed By: ezyang
Differential Revision: D25587763
fbshipit-source-id: 8fab921f4c256c128d48d82dac731f04ec9bad92
Stack from ghstack:
- #49259 Remove `use_c10_dispatcher: full` lines

`set_quantizer_` takes a `ConstQuantizerPtr` argument, which is supported by neither JIT nor c10. It also doesn't get dispatched (CPU and CUDA share the same implementation), and it is excluded from Python bindings generation. So there is no real reason for it to be in native_functions.yaml.

Removing it unblocks the migration to c10-fullness, since this is an op that would have been hard to migrate. See https://fb.quip.com/QRtJAin66lPN
Differential Revision: D25587763
NOTE FOR REVIEWERS: This PR has internal Facebook-specific changes or comments; please review them on Phabricator!