[quant] Backend string for the quantized types #49965
Conversation
Without this, checking the type of a quantized tensor using `type` would throw an error. Test Plan: Not needed -- this is just a string description for the quantized tensors. [ghstack-poisoned]
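A minimal sketch of the behavior this PR describes, assuming a PyTorch build that includes the change (the exact backend string returned may vary by version):

```python
import torch

# Build a quantized tensor (per-tensor affine quantization to quint8).
x = torch.tensor([0.0, 0.5, 1.0])
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

print(qx.is_quantized)  # True
print(qx.dtype)         # torch.quint8

# Before this PR, calling .type() on a quantized tensor raised an error;
# with it, .type() returns a string description of the quantized type
# (per the PR, something like "torch.quantized.QUInt8").
print(qx.type())
```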
💊 CI failures summary and remediations

As of commit c42822e (more details on the Dr. CI page): ✅ None of the CI failures appear to be your fault 💚

- 1 job timed out
- 🚧 1 fixed upstream failure: these were probably caused by upstream breakages that were already fixed. Please rebase on the
Without this, checking the type of a quantized tensor using `type` would throw an error. After this PR, running `type(qx)`, where `qx` is a quantized tensor, shows something like `torch.quantized.QUInt8`. Test Plan: Not needed -- this is just a string description for the quantized tensors. Differential Revision: [D25731594](https://our.internmc.facebook.com/intern/diff/D25731594) [ghstack-poisoned]
ghstack-source-id: 3599269f5ceca49983676037c85a0b73a1030ecd Pull Request resolved: #49965
Currently we put qint8, quint8 and qint32 in
Do you need QuantizedCUDA too?
When we call
What would the string look like? Something like
IDK, but the answer to that question is wherever you actually put the Python modules for it.
Sounds good. Feel free to add the change to align quantized types with other types.
Summary: Pull Request resolved: pytorch#49965. Without this, checking the type of a quantized tensor using `type` would throw an error. After this PR, running `type(qx)`, where `qx` is a quantized tensor, shows something like `torch.quantized.QUInt8`. Test Plan: Not needed -- this is just a string description for the quantized tensors. Differential Revision: D25731594. Reviewed By: ezyang. Pulled By: z-a-f. fbshipit-source-id: 942fdf89a1c50895249989c7203f2e7cc00df4c6
Stack from ghstack:

Without this, checking the type of a quantized tensor using `type` would throw an error. After this PR, running `type(qx)`, where `qx` is a quantized tensor, shows something like `torch.quantized.QUInt8`.

Test Plan: Not needed -- this is just a string description for the quantized tensors.

Differential Revision: D25731594