
Allow per-channel QTensor to accept any floating type for scales #26676

Closed
wants to merge 2 commits

Conversation

@dzhulgakov (Collaborator) commented on Sep 23, 2019

Stack from ghstack:

Just makes it more user-friendly to be able to pass floating-point or integer values of any type to scales or zero_points for per-channel quantization. It matches the behavior of the per-tensor quantizer, where those arguments are scalars (not tensors) and thus automatic casting is applied.

Differential Revision: D17537051
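A minimal sketch of what the description implies, using the public torch.quantize_per_tensor and torch.quantize_per_channel APIs; the specific dtypes shown (float32 scales, int32 zero_points) are illustrative assumptions, not taken from the patch:

```python
import torch

x = torch.randn(2, 3)

# Per-tensor quantization takes scalar scale/zero_point arguments, so
# Python numbers are cast automatically regardless of their precision.
qt = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

# With this change, per-channel quantization should likewise accept scales
# and zero_points tensors in any floating-point / integer dtype (e.g. the
# float32 / int32 used below) and cast them internally, rather than
# requiring one fixed dtype (assumption based on the PR description).
scales = torch.tensor([0.1, 0.2, 0.3], dtype=torch.float32)
zero_points = torch.tensor([0, 1, 2], dtype=torch.int32)
qc = torch.quantize_per_channel(x, scales, zero_points, axis=1,
                                dtype=torch.quint8)

print(qc.q_per_channel_scales(), qc.q_per_channel_zero_points())
```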

@pytorchbot added the module: operators and oncall: quantization (Quantization support in PyTorch) labels on Sep 23, 2019
dzhulgakov pushed a commit that referenced this pull request on Sep 23, 2019
ghstack-source-id: ce907389799e9e68755d602accf426e2b4602a8f
Pull Request resolved: #26676
@jerryzh168 (Contributor) left a comment:


LGTM

…les"

(ghstack commit message update; the body repeats the PR description above)

Differential Revision: [D17537051](https://our.internmc.facebook.com/intern/diff/D17537051)

[ghstack-poisoned]
dzhulgakov pushed a commit that referenced this pull request on Sep 24, 2019
ghstack-source-id: 180dd4918ed33d0b776e4ea34e7b1e2a3896600d
Pull Request resolved: #26676
zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 24, 2019
Summary:
Pull Request resolved: pytorch/pytorch#26676

Test Plan: Imported from OSS

Differential Revision: D17537051

Pulled By: dzhulgakov

fbshipit-source-id: e955ccdb5b4691828a559dc8f1ed7de54b6d12c4
@facebook-github-bot (Contributor) commented:
@dzhulgakov merged this pull request in ade60f8.

@facebook-github-bot deleted the gh/dzhulgakov/9/head branch on October 28, 2019 at 22:08
Labels: Merged, oncall: quantization (Quantization support in PyTorch)