
[quant] Make PerChannel Observer work with float qparams #42690

Closed
Wants to merge 8 commits.

Conversation

@supriyar (Contributor) commented on Aug 6, 2020

Stack from ghstack:

Summary:
Add an implementation for the new qscheme per_channel_affine_float_qparams in the observer.

Test Plan:
python test/test_quantization.py TestObserver.test_per_channel_observers

Differential Revision: [D23070633](https://our.internmc.facebook.com/intern/diff/D23070633)
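
For context, a minimal sketch of what the per_channel_affine_float_qparams computation could look like as a standalone function. Only the scale line appears in the diff discussed below; the eps guard and the unrounded float zero point (zero_point = -min/scale, the standard affine relation) are assumptions, not necessarily the merged code:

```python
import torch

def per_channel_float_qparams(min_val, max_val, qmin=0, qmax=255, eps=1e-7):
    # min_val/max_val: 1-D tensors of observed per-channel minima/maxima.
    scale = (max_val - min_val) / float(qmax - qmin)
    # Guard degenerate channels where min == max (assumed; mirrors the
    # eps handling used by other qschemes).
    scale = torch.where(scale > eps, scale, torch.ones_like(scale))
    # Unlike the integer-affine qschemes, the zero point is kept as a
    # float rather than rounded and clamped into [qmin, qmax].
    zero_point = -min_val / scale
    return scale, zero_point
```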

supriyar added a commit that referenced this pull request on Aug 6, 2020
ghstack-source-id: adf42b3867df1acdc35b696374132d00f6ccdb99
Pull Request resolved: #42690

@dr-ci bot commented on Aug 6, 2020

💊 CI failures summary and remediations

As of commit 3086331 (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚

Comment on lines 220 to 221
min_val = torch.min(min_val, torch.zeros_like(min_val))
max_val = torch.max(max_val, torch.zeros_like(max_val))

Contributor:

This code was a bit confusing (before this PR). Maybe we can rename these to something like min_val_neg and max_val_pos in the rest of the function?
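
A sketch of how the suggested rename might read (min_val_neg/max_val_pos are the hypothetical names from this comment, not necessarily the merged code):

```python
# Clamp the observed range so zero stays representable in every channel:
# min_val_neg <= 0 and max_val_pos >= 0. The integer-affine qschemes
# depend on this when rounding the zero point into [qmin, qmax].
min_val_neg = torch.min(min_val, torch.zeros_like(min_val))
max_val_pos = torch.max(max_val, torch.zeros_like(max_val))
```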

@@ -232,6 +234,11 @@ def _calculate_qparams(self, min_val, max_val):
zero_point = zero_point.new_full(zero_point.size(), (qmin + qmax) // 2)
else:
zero_point = zero_point.new_full(zero_point.size(), 128)
elif self.qscheme == torch.per_channel_affine_float_qparams:
scale = (orig_max - orig_min) / float(qmax - qmin)

Contributor:

Ideally this should be max_val - min_val, since that's what is actually happening; the other qschemes are not using the observed min and max directly.


Contributor (author):

Right. Maybe I can rename the other usages so I can use max_val - min_val directly here.
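
A quick numeric sketch (invented values) of why this branch wants the raw observed range: for an all-positive channel, the zero-clamped minimum would widen the range and change the scale, whereas the raw range yields the float zero point this qscheme is built for:

```python
import torch

# Channel 0 spans zero; channel 1 is all-positive (invented values).
min_val = torch.tensor([-1.0, 0.5])
max_val = torch.tensor([1.0, 3.05])

# Raw observed range, as discussed above (qmin=0, qmax=255 assumed).
scale = (max_val - min_val) / float(255 - 0)
zero_point = -min_val / scale
# scale      -> tensor([0.0078, 0.0100])
# zero_point -> tensor([127.5000, -50.0000])  (float, possibly negative)
```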

supriyar added a commit that referenced this pull request on Aug 7, 2020
ghstack-source-id: c4bc6a86b86e7785a8e63bc852b34470af8e7c02
Pull Request resolved: #42690

@facebook-github-bot (Contributor) commented: This pull request has been merged in 816d37b.

MauiDesign pushed a commit to MauiDesign/PyTorchPyTorch that referenced this pull request on Aug 16, 2020
ghstack-source-id: 3788f494fbb596bd3ae5eba76bac4b7da0e6c887
Pull Request resolved: pytorch/pytorch#42690

@facebook-github-bot deleted the gh/supriyar/156/head branch on August 17, 2020 at 14:16.