@vspenubarthi vspenubarthi commented Aug 18, 2022

Stack from ghstack (oldest at bottom):

Summary: This adds the capability to generate a QConfigMapping from the
suggestions of the ModelReport API. The only requirement is that
calibration is run before the mapping is generated; there is no
dependency on report generation, other than that the observers must not
be removed before this is called. The equalization mapping maps module
fqns to EqualizationQConfigs instead of regular QConfigs.

Example Usage (after calibration):

```
quantization_mapping = mod_report.generate_qconfig_mapping()
equalization_mapping = mod_report.generate_equalization_mapping()

prepared_model = quantize_fx.prepare_fx(model, quantization_mapping, example_input, _equalization_config=equalization_mapping)

quantized_model = quantize_fx.convert_fx(prepared_model)
```
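
For comparison, a mapping of the same shape can also be assembled by hand with the QConfigMapping API; a minimal sketch, assuming the fbgemm backend (the module names here are made up for illustration, not taken from this PR):

```python
import torch
from torch.ao.quantization import QConfigMapping, get_default_qconfig

# Per-module entries keyed by fqn -- the same shape of object that
# generate_qconfig_mapping() produces (module names are invented).
qconfig = get_default_qconfig("fbgemm")
mapping = (
    QConfigMapping()
    .set_module_name("block1.linear", qconfig)
    .set_module_name("block2.linear", qconfig)
)

# The per-fqn entries are stored in insertion order:
print(list(mapping.module_name_qconfigs.keys()))
```

The generated mapping can then be handed to `prepare_fx` exactly like the one returned by `generate_qconfig_mapping()`.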

This was tested by ensuring that the suggestions generated in the
QConfigMapping are:
1. Correct according to the chosen backend and the data passed through
2. Valid as a config, i.e. able to be prepared and converted
The test for this is part of the TestFxModelReportClass test suite.

Test Plan: python test/test_quantization.py TestFxModelReportClass.test_equalization_mapping_generation


facebook-github-bot commented Aug 18, 2022

✅ No Failures (0 Pending)

As of commit 0122b5b (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚



@jerryzh168 jerryzh168 left a comment

looks good

```
# return the generated mapping
return mapping

def _quantization_config_generator(self, detector_qconfig_info: DetectorQConfigInfo, module: torch.nn.Module) -> QConfig:
```

I feel ideally we'd like to remove these one-line functions

@jerryzh168 jerryzh168 Aug 18, 2022


I understand that helper functions improve code reuse, but creating too many helper functions also makes the code less readable

@vspenubarthi

@pytorchbot merge -g

@pytorchmergebot

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the green (-g) flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@pytorchmergebot

Merge failed
Reason: Command git -C /home/runner/work/pytorch/pytorch cherry-pick -x f2b1cc88a0b27740bd33cb37ea29cc4919ad6977 returned non-zero exit code 1

```
Auto-merging test/quantization/fx/test_model_report_fx.py
Auto-merging torch/ao/quantization/fx/_model_report/detector.py
CONFLICT (content): Merge conflict in torch/ao/quantization/fx/_model_report/detector.py
Auto-merging torch/ao/quantization/fx/_model_report/model_report.py
CONFLICT (content): Merge conflict in torch/ao/quantization/fx/_model_report/model_report.py
error: could not apply f2b1cc88a0... [ao] Added Equalization QConfig generation to ModelReport class
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git cherry-pick --continue".
hint: You can instead skip this commit with "git cherry-pick --skip".
hint: To abort and get back to the state before "git cherry-pick",
hint: run "git cherry-pick --abort".
```

Raised by https://github.com/pytorch/pytorch/actions/runs/2885197830
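
The hints in that log can be walked through end to end; a self-contained sketch in a throwaway repo (branch and file names here are made up, and `git init -b` assumes git >= 2.28):

```shell
# Throwaway demo of the cherry-pick conflict flow from the merge log above.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main demo && cd demo
git config user.email demo@example.com
git config user.name demo

echo base > model_report.py
git add model_report.py && git commit -qm "base"

git checkout -qb feature
echo feature-change > model_report.py
git commit -qam "feature change"

git checkout -q main
echo main-change > model_report.py
git commit -qam "main change"

# The cherry-pick conflicts, like the failed merge job:
if ! git cherry-pick -x feature 2>/dev/null; then
    echo resolved > model_report.py            # resolve the conflict by hand
    git add model_report.py                    # mark it resolved ...
    GIT_EDITOR=true git cherry-pick --continue # ... and continue the pick
fi
git log --oneline | head -n 1
```

Running `git cherry-pick --abort` inside the `if` instead would return the branch to its pre-pick state, which is effectively what the merge bot did before the retry below.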

vspenubarthi added a commit that referenced this pull request Aug 18, 2022
ghstack-source-id: 5d4c9dc
Pull Request resolved: #83698
@vspenubarthi

@pytorchbot merge -g

@pytorchmergebot

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the green (-g) flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

facebook-github-bot pushed a commit that referenced this pull request Aug 19, 2022
…) (#83698)


Pull Request resolved: #83698
Approved by: https://github.com/jerryzh168

Test Plan:
contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/88e0165d085166ce13ef443991eea003ee86869e

Test plan from GitHub:
python test/test_quantization.py TestFxModelReportClass.test_equalization_mapping_generation

Reviewed By: atalman

Differential Revision: D38853641

Pulled By: vspenubarthi

fbshipit-source-id: a7f365d62e1e0fbb74b051ef2d3e1bc4b8cf04e2
@facebook-github-bot facebook-github-bot deleted the gh/vspenubarthi/16/head branch August 22, 2022 14:19