[Quant] Use input_qspec_map for weight quantization of linear #107105
Conversation
Summary: In preparation for the metadata porting diff, weight quant annotation must happen via edge quantization, i.e. input_qspec_map. Reason: metadata is ported by associating a DQ node's metadata with its consumer and a Q node's metadata with its producer. Furthermore, such porting must be qualified by user intent, i.e. by checking whether the consumer of the DQ, or the producer of the Q, actually specified intent to quantize. By making the quantization annotation on the linear node's weight via input_qspec_map, we can associate the DQ of [weight -> Q -> DQ] with the linear module. Test Plan: CI Reviewers: Subscribers: Tasks: Tags: [ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/107105
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit be62222 with merge base 0f1a225.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
I thought we were doing this in the beginning..
Yeah, this ended up falling out of linear for some reason and I didn't realize it. But generally I agree with you. This is the right pattern when possible.
…ear" Summary: In prepararation for metadata porting diff, it is required that weight quant annotation happens via edge quantization, i.e. input_qspec_map. Reason: Metadata is ported via associating DQ node's metadata with its consumer while associating Q node's metadata with its producer. Furthermore, such porting must be qualified via user intent to see if the consumder of DQ, or producer of Q, actually specified intent of quantization By making quantization annotation on linear node's weight via input_qspec_map, we can enable associating DQ of [weight -> Q -> DQ], with the linear module. Test Plan: CI Reviewers: Subscribers: Tasks: Tags: [ghstack-poisoned]
@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
…ear" Summary: In prepararation for metadata porting diff, it is required that weight quant annotation happens via edge quantization, i.e. input_qspec_map. Reason: Metadata is ported via associating DQ node's metadata with its consumer while associating Q node's metadata with its producer. Furthermore, such porting must be qualified via user intent to see if the consumder of DQ, or producer of Q, actually specified intent of quantization By making quantization annotation on linear node's weight via input_qspec_map, we can enable associating DQ of [weight -> Q -> DQ], with the linear module. Test Plan: CI Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D48488414](https://our.internmc.facebook.com/intern/diff/D48488414) [ghstack-poisoned]
@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
…ear" Summary: In prepararation for metadata porting diff, it is required that weight quant annotation happens via edge quantization, i.e. input_qspec_map. Reason: Metadata is ported via associating DQ node's metadata with its consumer while associating Q node's metadata with its producer. Furthermore, such porting must be qualified via user intent to see if the consumder of DQ, or producer of Q, actually specified intent of quantization By making quantization annotation on linear node's weight via input_qspec_map, we can enable associating DQ of [weight -> Q -> DQ], with the linear module. Test Plan: CI Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D48488414](https://our.internmc.facebook.com/intern/diff/D48488414) [ghstack-poisoned]
…ear" Summary: In prepararation for metadata porting diff, it is required that weight quant annotation happens via edge quantization, i.e. input_qspec_map. Reason: Metadata is ported via associating DQ node's metadata with its consumer while associating Q node's metadata with its producer. Furthermore, such porting must be qualified via user intent to see if the consumder of DQ, or producer of Q, actually specified intent of quantization By making quantization annotation on linear node's weight via input_qspec_map, we can enable associating DQ of [weight -> Q -> DQ], with the linear module. Test Plan: CI Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D48488414](https://our.internmc.facebook.com/intern/diff/D48488414) [ghstack-poisoned]
@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
3 similar comments
@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
…ear" Summary: In prepararation for metadata porting diff, it is required that weight quant annotation happens via edge quantization, i.e. input_qspec_map. Reason: Metadata is ported via associating DQ node's metadata with its consumer while associating Q node's metadata with its producer. Furthermore, such porting must be qualified via user intent to see if the consumder of DQ, or producer of Q, actually specified intent of quantization By making quantization annotation on linear node's weight via input_qspec_map, we can enable associating DQ of [weight -> Q -> DQ], with the linear module. Test Plan: CI Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D48488414](https://our.internmc.facebook.com/intern/diff/D48488414) [ghstack-poisoned]
@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
1 similar comment
@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…107106) Summary: Having annotation functions return the nodes that were annotated is useful specifically for adding "quantization_tag" to those nodes. Test Plan: CI Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D48488415](https://our.internmc.facebook.com/intern/diff/D48488415) Pull Request resolved: #107106 Approved by: https://github.com/jerryzh168 ghstack dependencies: #107105
Summary: When two layers are quantized differently, the observer map update uses the key (observed_node, node), whereas it should really be (original_input, node). Test Plan: The next diff adds a test that otherwise fails. Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D48663145](https://our.internmc.facebook.com/intern/diff/D48663145) Pull Request resolved: #107899 Approved by: https://github.com/jerryzh168 ghstack dependencies: #107105, #107106
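To make the keying rule above concrete, here is a small stand-alone sketch (plain dicts and strings, not the actual prepare-step code; all names are hypothetical) showing why keying the observer map by (original_input, consumer) lets two differently quantized consumers of the same input each get their own observer:

```python
# Toy model of observer sharing. Keying by (original_input, consumer)
# means two consumers of the same input that request different qspecs
# do not collide on one map entry.
def insert_observers(edges):
    """edges: list of (original_input, consumer, qspec) tuples.
    Returns {(original_input, consumer): observer_name}."""
    obs_map = {}
    counter = 0
    for original_input, consumer, qspec in edges:
        key = (original_input, consumer)  # NOT keyed by the observed node
        if key not in obs_map:
            obs_map[key] = f"obs_{counter}_{qspec}"
            counter += 1
    return obs_map

# Two layers consume the same conv output but are quantized differently.
edges = [
    ("conv_out", "linear1", "int8"),
    ("conv_out", "linear2", "int4"),
]
obs = insert_observers(edges)
```

Each edge ends up with its own observer, which is the behavior the fix restores when the layers' quantization configs differ.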
Summary: During the convert step, observers are first replaced by a Q-DQ pair. In some scenarios, like the following, the output DQ has a fan-out:

               ---> OP2 -> Q -> DQ
              /
    OP -> Q -> DQ
              \
               ---> OP3 -> Q -> DQ

If either OP2 or OP3 is configured to be quantized, then its input is expected to be quantized. In that case the quantized equivalent of a pattern that the quantizer asked to be quantized should look like [DQ -> {pattern} -> Q]. However, in a scenario like the above, where the DQ node is shared between multiple "quantized" patterns, the boundary of a "quantized" pattern is not clear because the DQ now belongs to multiple quantized patterns. This poses challenges for:
- Porting metadata: which "quantized" partition does this DQ node belong to?
- Quantized representation: it equivalently needs to identify a self-contained quantized pattern that is replaced by an equivalent pattern capturing the compute in the quantized precision.
Test Plan: test_duplicate_dq_pass Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D48663147](https://our.internmc.facebook.com/intern/diff/D48663147) Pull Request resolved: #107900 Approved by: https://github.com/jerryzh168, https://github.com/andrewor14, https://github.com/leslie-fang-intel ghstack dependencies: #107105, #107106, #107899
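The duplication idea can be sketched on a toy graph representation (a dict from node name to its list of users; this is an illustration of the transform's shape, not the torch.fx implementation, and the names are hypothetical): any DQ node with more than one user is cloned once per user, so every quantized pattern gets a private [DQ -> pattern -> Q] boundary.

```python
# Clone each shared DQ node so every consumer gets its own copy.
def duplicate_shared_dq(graph):
    """graph: {node_name: [user_names]}. DQ nodes are those whose name
    starts with 'dq'. Returns a new graph with shared DQ nodes split."""
    new_graph = {}
    for node, users in graph.items():
        if node.startswith("dq") and len(users) > 1:
            # One private DQ copy per consumer; each copy has one user.
            for i, user in enumerate(users):
                new_graph[f"{node}_copy{i}"] = [user]
        else:
            new_graph[node] = list(users)
    return new_graph

# The fan-out example from above: one DQ feeding both OP2 and OP3.
g = {"op": ["q"], "q": ["dq"], "dq": ["op2", "op3"]}
dup = duplicate_shared_dq(g)
```

After the pass, each of op2 and op3 is fed by its own DQ copy, so metadata porting has an unambiguous owner for each DQ.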
…107107) Summary: This diff adds metadata to Q/DQ nodes by inferring the quantization intent from node annotations. Annotations on a node are the way for a user to specify how a node or subgraph is supposed to be quantized. We use that information to copy metadata onto Q/DQ nodes from the appropriate nodes. Test Plan: Reviewers: Subscribers: Tasks: Tags: Differential Revision: [D48488416](https://our.internmc.facebook.com/intern/diff/D48488416) Pull Request resolved: #107107 Approved by: https://github.com/jerryzh168 ghstack dependencies: #107105, #107106, #107899, #107900
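The porting rule used throughout this stack (DQ inherits from its consumer, Q from its producer, and only when that neighbor actually carries a quantization annotation) can be sketched as follows. This is a hypothetical simplified model using plain dicts, not the actual pass over a torch.fx graph:

```python
# Port metadata onto Q/DQ nodes from the neighbor that expressed
# quantization intent: the consumer for DQ, the producer for Q.
def port_metadata(nodes):
    """nodes: list of dicts with keys 'name', 'op', 'producer',
    'consumer', 'annotated', 'meta'. Mutates Q/DQ 'meta' in place."""
    by_name = {n["name"]: n for n in nodes}
    for n in nodes:
        if n["op"] == "dq":
            neighbor = by_name.get(n["consumer"])   # DQ -> look forward
        elif n["op"] == "q":
            neighbor = by_name.get(n["producer"])   # Q -> look backward
        else:
            continue
        # Only port when the neighbor actually specified quant intent.
        if neighbor is not None and neighbor.get("annotated"):
            n["meta"] = neighbor["meta"]
    return nodes

# [weight -> Q -> DQ -> linear]: the DQ picks up the annotated linear's
# metadata; the Q's producer (the raw weight) carries no annotation.
nodes = [
    {"name": "weight", "op": "get_attr", "producer": None,
     "consumer": "q0", "annotated": False, "meta": None},
    {"name": "q0", "op": "q", "producer": "weight",
     "consumer": "dq0", "annotated": False, "meta": None},
    {"name": "dq0", "op": "dq", "producer": "q0",
     "consumer": "linear", "annotated": True, "meta": None},
    {"name": "linear", "op": "call_function", "producer": "dq0",
     "consumer": "out", "annotated": True, "meta": "linear_tag"},
]
port_metadata(nodes)
```

This also shows why #107105 matters: unless the weight's quantization intent is expressed on the linear node (via input_qspec_map), the Q in [weight -> Q -> DQ] has no annotated neighbor to inherit from.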
Stack from ghstack (oldest at bottom):
Summary:
In preparation for the metadata porting diff, weight
quant annotation must happen via edge quantization, i.e. input_qspec_map.
Reason: metadata is ported by associating a DQ node's metadata with its
consumer and a Q node's metadata with its producer.
Furthermore, such porting must be qualified by user intent, i.e. by
checking whether the consumer of the DQ, or the producer of the Q,
actually specified intent to quantize.
By making the quantization annotation on the linear node's weight via
input_qspec_map, we can associate the DQ of [weight -> Q -> DQ]
with the linear module.
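For illustration, here is a minimal sketch of the edge-based annotation style. The field name input_qspec_map mirrors the one used by the PT2 export quantizer, but the dataclass below is a simplified stand-alone model, not the actual torch.ao.quantization API, and the qspec strings are hypothetical placeholders:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class QuantizationAnnotation:
    # Maps an *input* of this op (here identified by a plain string) to
    # its quantization spec: the edge-based annotation this PR moves
    # linear weights onto, instead of annotating the weight node itself.
    input_qspec_map: dict = field(default_factory=dict)
    output_qspec: Optional[str] = None

# Annotate the linear node's weight edge. Metadata porting can now ask
# the consumer (linear) whether the DQ of [weight -> Q -> DQ] was
# requested by user intent.
linear_ann = QuantizationAnnotation(
    input_qspec_map={"weight": "per_channel_int8"},
    output_qspec="int8",
)
```

The key point is that the intent for the weight now lives on the linear node, which is exactly the node the DQ-to-consumer porting rule inspects.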
Test Plan:
CI
Reviewers:
Subscribers:
Tasks:
Tags:
Differential Revision: D48488414