[quant][fx] Update name of packed weight attributes #51259
Conversation
Summary: Store the FQN of the module that is using the packed weights (the quantized op). In the case of fusion we update the scope mapping to store the module path of the fused node.

Test Plan: python test/test_quantization.py test_packed_weight_fused_op

[ghstack-poisoned]
💊 CI failures summary: As of commit 3d85bc5, 💚 Looks good so far! There are no failures yet. (This comment was automatically generated by Dr. CI.)
Codecov Report
@@            Coverage Diff             @@
##   gh/supriyar/217/base   #51259   +/-   ##
=====================================================
  Coverage    80.56%    80.57%
=====================================================
  Files         1931      1931
  Lines       210722    210729     +7
=====================================================
+ Hits        169775    169791    +16
+ Misses       40947     40938     -9
'call_function', qconv_op, qconv_args, kwargs)
quantizer.node_name_to_scope[op.name] = quantizer.node_name_to_scope[self.conv_node.name]
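The snippet above carries the scope entry of the original conv node over to the new quantized node. A minimal, self-contained sketch of that idea (the `Quantizer` class and helper below are illustrative, not the PR's actual code; in the PR, `node_name_to_scope` maps a node name to the FQN of the module it came from):

```python
# Hypothetical sketch of propagating scope info when a quantized op
# replaces an original node. Names mirror the snippet above.

class Quantizer:
    def __init__(self):
        # node name -> (module FQN, module type name)
        self.node_name_to_scope = {}

def replace_with_quantized(quantizer, old_node_name, new_node_name):
    # The new (quantized) node inherits the scope of the node it
    # replaces, so packed-weight attributes can be named after the
    # original module's fully qualified name.
    quantizer.node_name_to_scope[new_node_name] = \
        quantizer.node_name_to_scope[old_node_name]

# Illustrative usage:
q = Quantizer()
q.node_name_to_scope["conv1"] = ("features.conv1", "Conv2d")
replace_with_quantized(q, "conv1", "quantized_conv1")
```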
can you add a comment here? same for other occurrences
Actually can you also add a TODO here:
"TODO: we may need to change the key to Node, or regenerate the map in each transformation, since we might not be able to rely on the name"
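The reviewer's TODO can be illustrated with a small sketch (hypothetical `Node` class, not the PR's code): keying the map by the node object itself, rather than by its string name, keeps the mapping valid even when transformations rename nodes.

```python
# Hypothetical illustration of the TODO above: key the scope map by the
# Node object rather than its name, so renames across graph
# transformations do not invalidate lookups.

class Node:
    def __init__(self, name):
        self.name = name  # may change during a transformation

node_to_scope = {}  # Node -> module FQN

def record_scope(node, fqn):
    node_to_scope[node] = fqn

n = Node("conv1")
record_scope(n, "features.conv1")
n.name = "quantized_conv1"  # rename during a transformation
# Lookup still works because the key is the object, not the name.
```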
This pull request has been merged in 916af89.
Stack from ghstack:
Summary:
Store the FQN of the module that is using the packed weights (the quantized op).
In the case of fusion, we update the scope mapping to store the module path of the fused node.
Test Plan:
python test/test_quantization.py test_packed_weight_fused_op
Differential Revision: D26117964
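The fusion case described in the summary can be sketched as follows (all names here are illustrative, not the PR's implementation): when, say, a conv and a relu are fused into one node, the fused node's scope entry should point at the module path of the original conv module, so the packed weight attribute keeps a meaningful FQN.

```python
# Hedged sketch of the fusion case: the fused node inherits the module
# path of the conv module whose weights it packs.

node_name_to_scope = {
    "conv1": "features.conv1",
    "relu1": "features.relu1",
}

def fuse(scope_map, conv_name, relu_name, fused_name):
    # The fused node takes over the conv module's path; the stale
    # entries for the pre-fusion nodes are dropped.
    scope_map[fused_name] = scope_map[conv_name]
    scope_map.pop(conv_name, None)
    scope_map.pop(relu_name, None)

fuse(node_name_to_scope, "conv1", "relu1", "conv_relu1")
```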