
[quant] Backend string for the quantized types #49965

Closed · wants to merge 2 commits

Conversation

z-a-f
Contributor

@z-a-f z-a-f commented Dec 30, 2020

Stack from ghstack:

Without this change, checking the type of a quantized tensor using `type` would throw an error.

After this PR, running `type(qx)`, where `qx` is a quantized tensor, shows something like `torch.quantized.QUInt8`.

Test Plan: Not needed -- this is just a string description for the quantized tensors

Differential Revision: D25731594
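The backend-qualified type string described above can be sketched in plain Python. This is a minimal illustration of the naming scheme only; the `type_string` helper and its `backend`/`dtype_name` parameters are hypothetical and are not PyTorch internals.

```python
# Illustrative sketch (not PyTorch code) of composing a
# "torch.<backend>.<DType>" style string for a tensor type.

def type_string(backend: str, dtype_name: str) -> str:
    """Build a backend-qualified type string, e.g. 'torch.quantized.QUInt8'."""
    parts = ["torch"]
    if backend:                    # e.g. "quantized" or "cuda"; empty for CPU
        parts.append(backend)
    parts.append(dtype_name)       # e.g. "QUInt8" or "IntTensor"
    return ".".join(parts)

print(type_string("quantized", "QUInt8"))   # torch.quantized.QUInt8
print(type_string("", "IntTensor"))         # torch.IntTensor
print(type_string("cuda", "IntTensor"))     # torch.cuda.IntTensor
```

The empty-backend case mirrors how CPU tensors omit the backend segment.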

@facebook-github-bot
Contributor

facebook-github-bot commented Dec 30, 2020

💊 CI failures summary and remediations

As of commit c42822e (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



1 job timed out:

  • pytorch_linux_bionic_py3_8_gcc9_coverage_test1

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch (expand for instructions)

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


This comment was automatically generated by Dr. CI.


z-a-f added a commit that referenced this pull request Dec 30, 2020
ghstack-source-id: 3599269f5ceca49983676037c85a0b73a1030ecd
Pull Request resolved: #49965
@jerryzh168
Contributor

Currently we put `qint8`, `quint8` and `qint32` in the `torch` namespace; do we need to change that?

Contributor

@ezyang ezyang left a comment


Do you need QuantizedCUDA too?

@z-a-f
Contributor Author

z-a-f commented Jan 5, 2021

> currently we put qint8, quint8 and qint32 in torch namespace, do we need to change that?

When we call `x.type()` it shows `torch.IntTensor`, while `x.cuda().type()` shows `torch.cuda.IntTensor`. I think the general consensus is to have a backend prefix between `torch` and the tensor type. In fact, I think we should follow that pattern and show something like `torch.quantized.UInt8Tensor` instead of `torch.quantized.QUInt8`. @jerryzh168 What do you think?

@z-a-f
Contributor Author

z-a-f commented Jan 5, 2021

> Do you need QuantizedCUDA too?

What would the string look like? Something like `torch.cuda.quantized` or `torch.quantized.cuda`?

@ezyang
Contributor

ezyang commented Jan 6, 2021

IDK, but the answer to that question is wherever you actually put the Python modules for it.

@jerryzh168
Contributor

> currently we put qint8, quint8 and qint32 in torch namespace, do we need to change that?
>
> When we call `x.type()` it shows `torch.IntTensor`, while `x.cuda().type()` would show `torch.cuda.IntTensor`. I think the whole consensus is to have a backend prefix between `torch` and the tensor type. In fact, I think we should follow the pattern and show something like `torch.quantized.UInt8Tensor` instead of `torch.quantized.QUInt8`. @jerryzh168 What do you think?

Sounds good. Feel free to add the change to align the quantized types with the other types.
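The naming alignment agreed on above could look roughly like this. This is an illustrative sketch only; the mapping table and the `aligned_name` helper are hypothetical and not part of PyTorch.

```python
# Illustrative sketch (not PyTorch code) of renaming quantized type
# strings to match the existing "<DType>Tensor" convention,
# e.g. QUInt8 -> UInt8Tensor.

QUANTIZED_TO_TENSOR_NAME = {   # hypothetical mapping
    "QUInt8": "UInt8Tensor",
    "QInt8": "Int8Tensor",
    "QInt32": "Int32Tensor",
}

def aligned_name(qtype: str) -> str:
    """Map a quantized dtype name to a conventional tensor type string."""
    return "torch.quantized." + QUANTIZED_TO_TENSOR_NAME[qtype]

print(aligned_name("QUInt8"))  # torch.quantized.UInt8Tensor
```

This keeps the backend segment (`quantized`) in the same position as `cuda` in `torch.cuda.IntTensor`.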

@facebook-github-bot
Contributor

@z-a-f merged this pull request in 7ce8f7e.

@facebook-github-bot facebook-github-bot deleted the gh/z-a-f/95/head branch January 11, 2021 15:17
hwangdeyu pushed a commit to hwangdeyu/pytorch that referenced this pull request Jan 14, 2021
Summary:
Pull Request resolved: pytorch#49965


Differential Revision: D25731594

Reviewed By: ezyang

Pulled By: z-a-f

fbshipit-source-id: 942fdf89a1c50895249989c7203f2e7cc00df4c6

4 participants