
[PT2E][Docs] Calibration is a must for both static and dynamic quantization#4307

Merged
Xia-Weiwen merged 5 commits into pytorch:main from Xia-Weiwen:fix_dynamic_quant
Apr 29, 2026

Conversation

@Xia-Weiwen
Collaborator

@Xia-Weiwen Xia-Weiwen commented Apr 21, 2026

This pull request adds a note to the PT2E docs stating that calibration is mandatory for both static and dynamic quantization.

@pytorch-bot

pytorch-bot Bot commented Apr 21, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/4307

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure

As of commit b93240d with merge base 67a78e5:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla Bot added the CLA Signed label Apr 21, 2026
@Xia-Weiwen added the module: not user facing and module: pt2e_quant labels Apr 21, 2026
Comment thread on torchao/quantization/pt2e/convert.py (outdated):

    # if the observer is uninitialized (empty min_val) and its input is a constant
    # weight tensor, and run the observer eagerly in that case.
    if (
        hasattr(activation_post_process, "min_val")
Contributor

@jerryzh168 jerryzh168 Apr 21, 2026


this condition is too ad hoc I think, won't work if activation_post_process is more general

Collaborator Author


Oh. It checks whether the observer has not been run and whether its argument is a constant (from getattr). It looks complicated, but I could not figure out a better way. Do you have any suggestions? 😂 Thanks.
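For readers following the thread, the condition under discussion can be illustrated with a small standalone sketch. Note that `ToyMinMaxObserver` and `observer_is_uninitialized` are made up for illustration and are not torchao's actual classes; the real code lives in torchao/quantization/pt2e/convert.py. The idea is that a MinMax-style observer that has never been run still carries an empty `min_val`:

```python
class ToyMinMaxObserver:
    """Toy stand-in for a MinMax-style observer; NOT torchao's implementation."""

    def __init__(self):
        # Before calibration the observer holds no statistics: min_val is
        # empty, which is what the check in convert.py keys on.
        self.min_val = []
        self.max_val = []

    def __call__(self, values):
        # "Running" the observer records min/max statistics.
        self.min_val = [min(values)]
        self.max_val = [max(values)]
        return values


def observer_is_uninitialized(obs):
    # Mirrors the spirit of the condition being reviewed: the observer
    # exposes min_val, but it is still empty because calibration never ran.
    return hasattr(obs, "min_val") and len(obs.min_val) == 0


obs = ToyMinMaxObserver()
assert observer_is_uninitialized(obs)       # calibration was skipped
obs([-2.0, 0.5, 3.0])                       # one calibration pass
assert not observer_is_uninitialized(obs)   # statistics now recorded
```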

Collaborator Author


Hi @jerryzh168 Do you have any good ideas to simplify this or to fix the issue in another way? Thanks.

Contributor


What is the use case for dynamic quantization? Why is just running an example input through the dynamic quant flow not good enough? I think we need example inputs for torch.export anyway.

Contributor


I feel we could also just add some asserts saying observer should always run, or just make sure we have proper docs saying calibration step can't be skipped

Collaborator Author


Hi @jerryzh168 Thanks for the suggestions. We already have a warning here:

warnings.warn(

So I just added a note in the docs saying calibration is a must for dynamic quantization. Please take a look.
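The docs note matters because quantization parameters are derived from the statistics an observer collects during calibration. A hedged toy sketch (the `ToyObserver` class below is hypothetical, not the PT2E API) shows what goes wrong when the calibration step is skipped:

```python
class ToyObserver:
    """Hypothetical observer; illustrates why calibration cannot be skipped."""

    def __init__(self):
        self.min_val = None  # no statistics until calibration runs
        self.max_val = None

    def observe(self, values):
        # The calibration step: fold new data into the running min/max.
        lo, hi = min(values), max(values)
        self.min_val = lo if self.min_val is None else min(self.min_val, lo)
        self.max_val = hi if self.max_val is None else max(self.max_val, hi)

    def qparams(self, qmin=-128, qmax=127):
        # Scale and zero point depend entirely on observed statistics,
        # so an observer that never saw data cannot produce them.
        if self.min_val is None:
            raise RuntimeError("calibration was skipped: observer has no statistics")
        scale = (self.max_val - self.min_val) / (qmax - qmin)
        zero_point = round(qmin - self.min_val / scale)
        return scale, zero_point


obs = ToyObserver()
try:
    obs.qparams()                 # no calibration -> no valid qparams
except RuntimeError:
    pass
obs.observe([-1.0, 0.5, 2.0])     # the mandatory calibration step
scale, zero_point = obs.qparams() # now well defined
```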

@Xia-Weiwen added the topic: documentation label Apr 24, 2026
@Xia-Weiwen Xia-Weiwen marked this pull request as ready for review April 24, 2026 02:36
@Xia-Weiwen Xia-Weiwen requested a review from jerryzh168 April 24, 2026 02:36
@Xia-Weiwen Xia-Weiwen changed the title [PT2E] Run weight observer eagerly for dynamic quant [PT2E][Docs] Calibration is a must for both static and dynamic quantization Apr 24, 2026
@Xia-Weiwen
Collaborator Author

Hi @jerryzh168 Could you please review again? Thanks.

Comment thread on docs/source/pt2e_quantization/index.rst (outdated):

    m = prepare_pt2e(m, quantizer)

    # run calibration
    # calibration is a must for both static and dynamic quantization
Contributor


nit: must run calibration for both static and dynamic quantization

Contributor

@jerryzh168 jerryzh168 left a comment


thanks, can change the comment to be a bit more natural

@Xia-Weiwen
Collaborator Author

> thanks, can change the comment to be a bit more natural

Updated. Thanks.

@Xia-Weiwen Xia-Weiwen merged commit 5cc2ef9 into pytorch:main Apr 29, 2026
21 of 22 checks passed

Labels

CLA Signed: This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
module: not user facing: Use this tag if you don't want this PR to show up in release notes.
module: pt2e_quant: pt2 export quantization (prepare_pt2e, convert_pt2e, quantizer).
topic: documentation: Use this tag if this PR adds or improves documentation.


2 participants