Fix for swap_custom_module_to_observed doing duplicate swaps on the same node.target #91905
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91905
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 Failure. As of commit 825737f: FLAKY - the following jobs failed but were likely due to flakiness present on master.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D42023273
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary:
This is a fix for the following issue:
"When two nodes in a model have the same dTypes / node.target, the torch quantization prepare_fx flow does not check for duplicates and tries to do a custom module swap twice. When it attempts the swap the same target for a second time, the swap_custom_module_to_observed detects the observed module instead of the float module class on the target, and fails on an assertion. "
The added unit test demonstrates a simple example that fails in the absence of this fix.
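The patch itself is not reproduced in this summary; conceptually, the change makes the custom module swap happen at most once per target. The standalone function below is only an illustration of that idea under simplifying assumptions (names, structure, and the top-level-submodule handling are hypothetical, not the actual prepare_fx code):

```python
# Illustration only: swap each float custom module to its observed class at
# most once, even if several nodes share the same node.target.
from typing import Dict, List, Set, Type

import torch.nn as nn


def swap_custom_modules_once(
    model: nn.Module,
    node_targets: List[str],  # node.target strings, may contain duplicates
    float_to_observed: Dict[Type[nn.Module], Type[nn.Module]],
) -> None:
    swapped: Set[str] = set()
    named = dict(model.named_modules())
    for target in node_targets:
        if target in swapped:
            # A second node with the same target: the module was already
            # replaced, so checking for the float class would fail; skip it.
            continue
        float_module = named[target]
        observed_cls = float_to_observed.get(type(float_module))
        if observed_cls is None:
            continue
        # Sketch assumes top-level submodules; real code resolves dotted paths.
        setattr(model, target, observed_cls.from_float(float_module))
        swapped.add(target)
```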
Test Plan: buck test mode/dev //caffe2/test:quantization_fx -- --exact 'caffe2/test:quantization_fx - test_custom_module_class_input_has_duplicate_nodes (quantization.fx.test_quantize_fx.TestQuantizeFx)'
Reviewed By: vkuzo
Differential Revision: D42023273