
quantization: unexpected casting of tensor min and max to int in histogram observer #83672

Open
vkuzo opened this issue Aug 18, 2022 · 3 comments
Assignees
Labels
oncall: quantization Quantization support in PyTorch triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@vkuzo (Contributor) commented Aug 18, 2022

🐛 Describe the bug

Originally reported in https://discuss.pytorch.org/t/casting-to-int-of-data-min-and-max-in-histogramobserver/159316

There is code in HistogramObserver that casts the tensor min and max to integers before calculating the histogram: https://github.com/pytorch/pytorch/blame/a9ba3fe1dbf2cea45c9a7e723010c27c211f7fe3/torch/ao/quantization/observer.py#L1143. It's unclear why this cast is there. Since we want the histogram bins to be as accurate as possible, we should verify whether there is a reason for it and remove it if there isn't.
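The effect of the cast can be sketched in plain Python (this is an illustration with hypothetical values, not the actual observer code): truncating the float min/max to int collapses any sub-integer activation range and degenerates the histogram bin edges.

```python
def bin_edges(x_min, x_max, bins):
    """Evenly spaced histogram bin edges over [x_min, x_max]."""
    width = (x_max - x_min) / bins
    return [x_min + i * width for i in range(bins + 1)]

# Hypothetical activation range, entirely inside [0, 1).
data_min, data_max = 0.12, 0.97

# Accurate edges, using the float min/max directly.
accurate = bin_edges(data_min, data_max, bins=4)

# With the int() cast, the range collapses to [0, 0], so every bin
# edge is 0 and the histogram carries no information about the data.
cast = bin_edges(int(data_min), int(data_max), bins=4)

print(accurate)
print(cast)  # all edges are 0.0
```

For ranges that span several integers the cast merely coarsens the edges rather than collapsing them, but it still loses the fractional part of the true min/max.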

Versions

master

cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @jgong5 @Xia-Weiwen @leslie-fang-intel @vkuzo

@vkuzo vkuzo added the oncall: quantization Quantization support in PyTorch label Aug 18, 2022
@vkuzo vkuzo self-assigned this Aug 19, 2022
@vkuzo (Contributor, Author) commented Aug 19, 2022

Looks like this was added by #45630 by mistake, so this is a bug.

@vkuzo (Contributor, Author) commented Aug 19, 2022

#83755 will address this.

@andrewor14 andrewor14 added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Aug 10, 2023
@andrewor14 (Contributor) commented
@vkuzo Is this fixed? Should we close it?
