[quant][pyper] Set sparse to False for embedding_bag ops in graph mode #45997
Conversation
Summary: The sparse field used in the float module controls sparse gradients, which are not applicable to inference. The sparse field in the quantized ops denotes pruned weights. Test Plan: python test/test_quantization.py TestQuantizeDynamicJitOps.test_embedding_bag Reviewers: Subscribers: Tasks: Tags: [ghstack-poisoned]
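To illustrate the distinction the summary draws: in the float module, `sparse=True` only changes training behavior by making the gradient with respect to the weight a sparse tensor, so it carries no meaning at inference time. A minimal sketch (the embedding sizes and inputs here are made up for illustration):

```python
import torch
import torch.nn as nn

# In the float nn.EmbeddingBag, sparse=True only affects training:
# the gradient w.r.t. the weight becomes a sparse tensor.
emb = nn.EmbeddingBag(10, 3, mode="sum", sparse=True)
indices = torch.tensor([1, 2, 4, 5])
offsets = torch.tensor([0, 2])  # two bags: [1, 2] and [4, 5]

out = emb(indices, offsets)
out.sum().backward()

# Sparse gradients matter only for the optimizer, not for inference,
# which is why graph mode can safely pass sparse=False to the
# quantized op (where the flag instead means pruned weights).
print(emb.weight.grad.is_sparse)  # True
```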
Codecov Report
@@            Coverage Diff            @@
##   gh/supriyar/195/base   #45997   +/- ##
=====================================================
  Coverage     68.28%    68.28%
=====================================================
  Files           410       410
  Lines         53306     53306
=====================================================
  Hits          36398     36398
  Misses        16908     16908

Continue to review full report at Codecov.
for (auto i = 1; i < inputs_size - 1; ++i) {
  qembedding_bag_inputs.push_back(embedding_bag_inputs[i]);
}
// Set the sparse field to 0 for inference.
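The loop above copies every input of the float embedding_bag call except the first and the last, and the pass then appends sparse set to false. A rough Python analogue of that rewrite, using a plain list of placeholder input names rather than real graph Values:

```python
def make_qembedding_bag_inputs(embedding_bag_inputs):
    # Mirror the C++ loop: keep inputs 1 .. inputs_size - 2,
    # dropping the first input and the original trailing sparse flag.
    q_inputs = list(embedding_bag_inputs[1:-1])
    # Set the sparse field to False for inference: in the quantized op
    # `sparse` denotes pruned weights, while the float module's flag
    # only meant sparse gradients, which are irrelevant here.
    q_inputs.append(False)
    return q_inputs

# Placeholder names, not the op's real argument list:
inputs = ["self", "weight", "indices", "offsets", "sparse"]
print(make_qembedding_bag_inputs(inputs))
# ['weight', 'indices', 'offsets', False]
```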
Optional: should we mention that this is for sparse gradients (as the PR summary reads), just to make it clear why?
This pull request has been merged in 8c80ee8.
Stack from ghstack:
Summary:
The sparse field used in the float module controls sparse gradients, which are not applicable
to inference. The sparse field in the quantized ops denotes pruned weights.
Test Plan:
python test/test_quantization.py TestQuantizeDynamicJitOps.test_embedding_bag
Reviewers:
Subscribers:
Tasks:
Tags:
Differential Revision: D24176543