# [ET-VK] Statically quantized convolutions #14647
Stack from ghstack (oldest at bottom):
## Changes
This diff adds implementations for quantized convolution under the following quantization conditions (a sketch of the implied quantization math follows the list):

* activations statically quantized to 8-bit with a per-tensor scale and zero point
* weights quantized to 8-bit with per-channel scales
* outputs statically quantized to 8-bit with a per-tensor scale and zero point
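For concreteness, here is a minimal sketch of the affine quantization math these conditions imply, written in PyTorch terms. The function names are illustrative, not the actual ET-VK API, and symmetric (zero-point-free) per-channel weights are an assumption:

```python
import torch

def quantize_per_tensor(x, scale, zero_point):
    # Affine per-tensor map: q = clamp(round(x / scale) + zp, -128, 127)
    q = torch.round(x / scale) + zero_point
    return q.clamp(-128, 127).to(torch.int8)

def quantize_per_channel(w, scales):
    # One scale per output channel; a zero point of 0 is assumed here.
    # scales: [out_channels], broadcast over the [OC, IC, KH, KW] weight.
    q = torch.round(w / scales.view(-1, 1, 1, 1))
    return q.clamp(-128, 127).to(torch.int8)
```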
Three different implementations are added; the one used is selected based on the input conditions.
The first is a direct convolution shader that uses the quantized int8 input directly.
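As a rough reference for what the direct path computes (not the shader itself), the convolution can accumulate over the zero-point-shifted int8 operands and then requantize the accumulator into the output's quantized domain. The helper below is a hypothetical illustration; a real shader would use an int32 accumulator rather than float emulation:

```python
import torch
import torch.nn.functional as F

def quantized_conv2d_reference(q_in, in_scale, in_zp,
                               q_w, w_scales,
                               out_scale, out_zp,
                               stride=1, padding=0):
    # Shift out the input zero point, then accumulate the int8 products
    # (emulated in float here for simplicity).
    acc = F.conv2d(q_in.float() - in_zp, q_w.float(),
                   stride=stride, padding=padding)
    # Fold the per-channel weight scales and the per-tensor input/output
    # scales into one requantization multiplier per output channel.
    requant = (in_scale * w_scales / out_scale).view(1, -1, 1, 1)
    q_out = torch.round(acc * requant) + out_zp
    return q_out.clamp(-128, 127).to(torch.int8)
```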
The second is an im2col variant, which computes the convolution via a GEMM-like algorithm by first applying an im2col transformation to the input tensor.
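The im2col idea itself is standard: unfold each receptive field into a column so the convolution becomes a single matrix multiply. A float-typed sketch follows; the quantized version would carry the same scale/zero-point bookkeeping as above:

```python
import torch
import torch.nn.functional as F

def conv2d_via_im2col(x, w, stride=1, padding=0):
    # Unfold input patches into columns, then perform one GEMM against
    # the flattened weight matrix.
    N, C, H, W = x.shape
    OC, _, KH, KW = w.shape
    cols = F.unfold(x, (KH, KW), stride=stride, padding=padding)
    # cols: [N, C*KH*KW, L], with L = number of output spatial positions.
    out = w.view(OC, -1) @ cols  # batched matmul -> [N, OC, L]
    OH = (H + 2 * padding - KH) // stride + 1
    OW = (W + 2 * padding - KW) // stride + 1
    return out.view(N, OC, OH, OW)
```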
Finally, a specialized implementation is added for depthwise convolutions.
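Depthwise convolutions (groups equal to the channel count) convolve each channel with its own single filter, so there is no reduction across channels and the GEMM formulation has little to exploit; a specialized shader can skip the im2col overhead entirely. A minimal illustration of the depthwise case in PyTorch terms:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)
w = torch.randn(8, 1, 3, 3)  # one [1, 3, 3] filter per input channel
y = F.conv2d(x, w, padding=1, groups=8)  # groups == channels => depthwise
assert y.shape == (1, 8, 16, 16)
```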
Differential Revision: D83437827