quantization: unexpected casting of tensor min and max to int in histogram observer #83672
Labels
oncall: quantization
Quantization support in PyTorch
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
Originally reported in https://discuss.pytorch.org/t/casting-to-int-of-data-min-and-max-in-histogramobserver/159316
There is code in HistogramObserver which casts the tensor min and max to integers before computing the histogram: https://github.com/pytorch/pytorch/blame/a9ba3fe1dbf2cea45c9a7e723010c27c211f7fe3/torch/ao/quantization/observer.py#L1143. It is unclear why this cast is there: since we want the histogram bins to be as accurate as possible, we should verify whether there is a reason for it and remove it if there isn't.

Versions
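For illustration, here is a minimal sketch (plain Python, not the actual HistogramObserver code) of why truncating the observed min/max to int distorts the bin edges. The values and the bin_edges helper are hypothetical:

```python
# Hypothetical illustration: casting the observed min/max to int
# truncates toward zero and shifts (or collapses) the histogram bin edges.

def bin_edges(lo, hi, bins):
    """Evenly spaced histogram bin edges over [lo, hi]."""
    width = (hi - lo) / bins
    return [lo + i * width for i in range(bins + 1)]

data_min, data_max = -0.37, 0.92   # example observed tensor range

# Accurate edges use the float range directly.
accurate = bin_edges(data_min, data_max, 4)

# Casting first truncates both endpoints to 0 here, so the
# range collapses and every sample falls into a degenerate bin.
cast = bin_edges(int(data_min), int(data_max), 4)

print(accurate)  # edges spanning [-0.37, 0.92]
print(cast)      # all edges 0.0
```

For any tensor whose values lie strictly inside (-1, 1), the cast collapses the range to [0, 0], which is the kind of accuracy loss the report is concerned about.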
master
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @jgong5 @Xia-Weiwen @leslie-fang-intel @vkuzo