
Fix for swap_custom_module_to_observer doing duplicate swaps on the same node.target #91905

Closed

Conversation

harshitkhaitan
Contributor

Summary:
This is a fix for the following issue:
"When two nodes in a model have the same dtypes / node.target, the torch quantization prepare_fx flow does not check for duplicates and tries to do the custom module swap twice. When it attempts to swap the same target a second time, swap_custom_module_to_observed finds the observed module class on the target instead of the float module class and fails on an assertion."

The added unit test demonstrates a simple example that fails in the absence of this fix.
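
For context, here is a minimal sketch of the failure mode (this is not the PR's added unit test; CustomLinear and ObservedCustomLinear are illustrative names). Reusing one submodule produces two call_module nodes with the same node.target, so the prepare pass would attempt the custom-module swap on that target twice:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.fx.custom_config import PrepareCustomConfig
from torch.ao.quantization.quantize_fx import prepare_fx

class CustomLinear(nn.Module):  # illustrative custom float module
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)

class ObservedCustomLinear(nn.Module):  # illustrative observed counterpart
    def __init__(self, linear):
        super().__init__()
        self.linear = linear

    def forward(self, x):
        return self.linear(x)

    @classmethod
    def from_float(cls, float_module):
        # prepare_fx calls from_float when it swaps the custom module.
        return cls(float_module.linear)

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.custom = CustomLinear()

    def forward(self, x):
        # Two call_module nodes, both with node.target == "custom".
        return self.custom(x) + self.custom(x)

prepare_custom_config = PrepareCustomConfig().set_float_to_observed_mapping(
    CustomLinear, ObservedCustomLinear)
# Without the duplicate check, the second visit to target "custom" finds
# ObservedCustomLinear already installed and trips the assertion.
prepared = prepare_fx(M().eval(), get_default_qconfig_mapping(),
                      (torch.randn(1, 4),),
                      prepare_custom_config=prepare_custom_config)
```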

Test Plan: buck test mode/dev //caffe2/test:quantization_fx -- --exact 'caffe2/test:quantization_fx - test_custom_module_class_input_has_duplicate_nodes (quantization.fx.test_quantize_fx.TestQuantizeFx)'
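
The fix itself amounts to not re-swapping a target that has already been handled. Below is a hedged sketch of that dedup pattern, not the actual diff in torch/ao/quantization/fx/prepare.py; the function name is illustrative and a flat module hierarchy is assumed for brevity:

```python
def swap_custom_modules_once(model, graph, float_to_observed):
    # Track targets that were already swapped so a submodule reused by
    # several call_module nodes is only converted once.
    swapped_targets = set()
    for node in graph.nodes:
        if node.op != "call_module" or node.target in swapped_targets:
            continue
        submodule = model.get_submodule(node.target)
        observed_cls = float_to_observed.get(type(submodule))
        if observed_cls is not None:
            # Build the observed module from the float one and install it.
            setattr(model, node.target, observed_cls.from_float(submodule))
            swapped_targets.add(node.target)
```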

Reviewed By: vkuzo

Differential Revision: D42023273

@pytorch-bot (bot) added the release notes: quantization label (release notes category) on Jan 9, 2023
@linux-foundation-easycla

linux-foundation-easycla bot commented Jan 9, 2023

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: harshitkhaitan / name: Harshit Khaitan (300feb5)

@pytorch-bot

pytorch-bot bot commented Jan 9, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91905

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 Failure

As of commit 825737f:

FLAKY - The following jobs failed but were likely due to flakiness present on master.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D42023273

harshitkhaitan added a commit to harshitkhaitan/pytorch that referenced this pull request Jan 9, 2023
Fix for swap_custom_module_to_observer doing duplicate swaps on the same node.target (pytorch#91905)

Summary:
Pull Request resolved: pytorch#91905

Reviewed By: vkuzo, jerryzh168

Differential Revision: D42023273

fbshipit-source-id: bb072fafa1473fdd5d1f1e8c04abed8f58487006
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D42023273

Fix for swap_custom_module_to_observer doing duplicate swaps on the same node.target (pytorch#91905)

Summary:
Pull Request resolved: pytorch#91905

Reviewed By: vkuzo, jerryzh168

Differential Revision: D42023273

fbshipit-source-id: 0caa4545860fabbd8d739953cf5ecf8141dfbcce
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D42023273

@facebook-github-bot
Contributor

@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorch-bot (bot) added the ciflow/trunk label (Trigger trunk jobs on your pull request) on Jan 12, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

Labels

  • ciflow/trunk (Trigger trunk jobs on your pull request)
  • fb-exported
  • Merged
  • release notes: quantization (release notes category)

4 participants