
Conversation

@leimao (Contributor) commented Nov 28, 2020

This fix allows the calibration function to take in more than one positional argument.
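
For illustration, here is a minimal sketch of a calibration function that needs more than one positional argument (float_model, data_loader, and the calibrate function below are hypothetical names, not taken from the PR):

import torch
import torch.quantization

# A calibration function that takes two extra positional arguments.
# Before this fix, torch.quantization.quantize could only forward a
# single run_args value to it.
def calibrate(model, data_loader, num_batches):
    model.eval()
    with torch.no_grad():
        for i, (images, _) in enumerate(data_loader):
            model(images)
            if i + 1 >= num_batches:
                break

# float_model is assumed to be a quantization-ready float module.
float_model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
# With this fix, run_args is unpacked into calibrate(model, *run_args).
quantized_model = torch.quantization.quantize(
    float_model, run_fn=calibrate, run_args=(data_loader, 10))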

dr-ci bot commented Nov 28, 2020

💊 CI failures summary and remediations

As of commit 3c209a3 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



codecov bot commented Nov 29, 2020

Codecov Report

Merging #48537 (3c209a3) into master (272f4db) will decrease coverage by 0.00%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master   #48537      +/-   ##
==========================================
- Coverage   80.93%   80.93%   -0.01%     
==========================================
  Files        1855     1855              
  Lines      200194   200194              
==========================================
- Hits       162024   162020       -4     
- Misses      38170    38174       +4     

@leimao (Contributor, Author) commented Nov 29, 2020

@jerryzh168

@ejguan added the oncall: quantization (Quantization support in PyTorch) label on Nov 30, 2020
@zhangguanheng66 added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label on Dec 1, 2020
@jerryzh168 (Contributor) left a comment


OK, thanks. Please try not to use this API, since we might want to deprecate it in the future.
It can be replaced by:

model = prepare(model)
calibrate(model, inputs)
model = convert(model)
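
Spelled out, that workflow is roughly the following sketch (float_model and calib_loader are assumed to already exist, and QuantStub/DeQuantStub placement is omitted for brevity):

import torch
import torch.quantization

float_model.eval()
float_model.qconfig = torch.quantization.get_default_qconfig("fbgemm")

model = torch.quantization.prepare(float_model)   # insert observers
with torch.no_grad():
    for images, _ in calib_loader:                # user-driven calibration
        model(images)
model = torch.quantization.convert(model)         # swap to quantized modules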

@facebook-github-bot (Contributor) left a comment


@jerryzh168 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@leimao (Contributor, Author) commented Dec 1, 2020

OK, thanks. Please try not to use this API, since we might want to deprecate it in the future.
It can be replaced by:

model = prepare(model)
calibrate(model, inputs)
model = convert(model)

I found that you have a similar high-level API for dynamic quantization as well. Can I assume the torch.quantization.quantize_dynamic API is not likely to change?
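
For reference, a self-contained sketch of the quantize_dynamic API under discussion (the toy model here is illustrative):

import torch
import torch.quantization

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
)
# Dynamically quantize all Linear modules: weights are stored as int8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)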

@jerryzh168 (Contributor) commented

OK, thanks. Please try not to use this API, since we might want to deprecate it in the future.
It can be replaced by:

model = prepare(model)
calibrate(model, inputs)
model = convert(model)

I found that you have a similar high-level API for dynamic quantization as well. Can I assume the torch.quantization.quantize_dynamic API is not likely to change?

We won't deprecate these in the short term, but I'm not sure about the longer term. In general, quantize_dynamic is not needed either; you can use

model = prepare(model, qconfig_dict)
model = convert(model)

as well. A single-line API does not buy us much and increases the API surface.

For example, in the case of static quantization, the API would also need a keyword-argument dictionary to be complete (currently we only have the positional run_args).

This file, https://github.com/pytorch/pytorch/blob/master/torch/quantization/quantize_fx.py, better shows the long-term APIs we are thinking of: basically just prepare, prepare_qat, and convert, and the user decides whether to run calibration/training or not. (We also emit warnings in the observer when calibration/training has not been run.)
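
For context, the FX graph mode flow in that file looks roughly like this at the time of this PR (float_model and calib_loader are assumed to exist, and the model must be symbolically traceable):

import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

qconfig_dict = {"": get_default_qconfig("fbgemm")}       # global qconfig
prepared = prepare_fx(float_model.eval(), qconfig_dict)  # insert observers
with torch.no_grad():
    for images, _ in calib_loader:                       # calibration is up to the user
        prepared(images)
quantized = convert_fx(prepared)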

@leimao (Contributor, Author) commented Dec 2, 2020

@jerryzh168 Thank you. I will try to stay away from the "high-level" wrapper API as much as I can.

@facebook-github-bot (Contributor) commented

@jerryzh168 merged this pull request in 0db7346.

shaibagon pushed a commit to shaibagon/pytorch that referenced this pull request Dec 3, 2020
Summary:
This fix allows the calibration function to take in more than one positional argument.

Pull Request resolved: pytorch#48537

Reviewed By: zhangguanheng66

Differential Revision: D25255764

Pulled By: jerryzh168

fbshipit-source-id: 3ce20dbed95fd26664a186bd4a992ab406bba827
@albanD (Collaborator) commented Mar 4, 2021

Note that this is a BC-breaking change, and it breaks the quantization tutorials.
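
To illustrate the break (a sketch, assuming the fix changed run_fn(model, run_args) to run_fn(model, *run_args); float_model and data_loader are placeholders):

import torch
import torch.quantization

def calibrate(model, data_loader):
    model.eval()
    with torch.no_grad():
        for images, _ in data_loader:
            model(images)

# Old call sites, as in the tutorials, passed run_args as a bare object:
#     torch.quantization.quantize(float_model, calibrate, data_loader)
# Since run_args is now unpacked into run_fn(model, *run_args), the
# argument must be wrapped in a tuple instead:
quantized = torch.quantization.quantize(float_model, calibrate, (data_loader,))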

@albanD added the module: bc-breaking (Related to a BC-breaking change) label on Mar 4, 2021

Labels

cla signed
Merged
module: bc-breaking (Related to a BC-breaking change)
oncall: quantization (Quantization support in PyTorch)
open source
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

7 participants