Add a16w8 per-op test for mean_dim (#19594)
Summary: Add int16 activation / int8 weight (a16w8) quantization tests for `aten.exp` on Ethos-U55 and Ethos-U85.

## Context

The `exp` op is part of the softmax decomposition (`softmax(x) = exp(x) / sum(exp(x))`), which is used in the attention mechanism of EMG2Pose Conformer models. This op was identified as the root cause of the U85 SNR regression investigated in SEV T267939669: without dedicated a16w8 per-op coverage, the numerics issue was only visible at the full-model level. Adding per-op tests lets us catch int16 precision regressions at operator granularity before they propagate to end-to-end model accuracy.

## Changes

- Add `a16w8_exp_test_parameters` dict with 3 test configurations covering rank-1, rank-2, and rank-3 tensors
- Add `test_exp_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_exp_a16w8_u85_INT` using `EthosU85PipelineINT` with the same kwargs
- Register `ops/test_exp.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532358
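As context for what a16w8 buys here, the following is a minimal plain-Python sketch (not the actual test harness) of symmetric per-tensor int16 quantization applied to `exp()` outputs. The `quantize_int16` helper and the input range are illustrative assumptions; only the int16 width and the symmetric scheme are taken from the summary.

```python
import math

def quantize_int16(values):
    """Symmetric per-tensor int16 quantization: scale derived from max magnitude."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 32767 if max_abs > 0 else 1.0
    q = [max(-32768, min(32767, round(v / scale))) for v in values]
    return q, scale

xs = [i / 10 for i in range(-30, 31)]   # sample inputs in [-3, 3]
ys = [math.exp(x) for x in xs]          # float reference for aten.exp
q, scale = quantize_int16(ys)
deq = [qi * scale for qi in q]          # dequantized round-trip

# With round-to-nearest, the round-trip error is at most half a quantization step
max_err = max(abs(a - b) for a, b in zip(ys, deq))
assert max_err <= 0.5 * scale + 1e-9
```

At int16, the quantization step for this range is tiny (roughly `exp(3)/32767`), which is exactly the precision the per-op test is meant to guard: a regression that silently falls back to int8-level steps would be caught at operator granularity.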
Summary: Add int16 activation / int8 weight (a16w8) quantization tests for `aten.reciprocal` on Ethos-U55 and Ethos-U85.

## Context

The `reciprocal` op is the second half of the softmax decomposition (`softmax(x) = exp(x) * reciprocal(sum(exp(x)))`), paired with `exp`. Together they form the attention mechanism in EMG2Pose Conformer models. Like `exp`, this op was implicated in the U85 SNR regression (SEV T267939669): the division-by-reciprocal path can amplify quantization error when the denominator is itself quantized at int16. Adding dedicated a16w8 coverage isolates reciprocal numerics from the rest of the softmax pipeline.

## Changes

- Add `a16w8_reciprocal_test_parameters` dict with 3 test configurations covering rank-1, rank-2, and rank-3 tensors (all shifted by +0.1 to avoid division near zero)
- Add `test_reciprocal_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_reciprocal_a16w8_u85_INT` using `EthosU85PipelineINT` with the same kwargs
- Register `ops/test_reciprocal.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532357
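The +0.1 shift in the test inputs can be motivated with a small sketch. Inputs near zero make `reciprocal()` explode, which inflates the symmetric int16 scale and coarsens the quantization step for every other element of the tensor. The helper and ranges below are illustrative assumptions, not the test code.

```python
def int16_scale(values):
    """Symmetric int16 scale implied by the tensor's max magnitude."""
    return max(abs(v) for v in values) / 32767

shifted = [x / 100 + 0.1 for x in range(1, 101)]    # inputs in (0.10, 1.10]
unshifted = [x / 100 for x in range(1, 101)]        # inputs in (0.00, 1.00]

scale_shifted = int16_scale([1.0 / v for v in shifted])
scale_unshifted = int16_scale([1.0 / v for v in unshifted])

# reciprocal of the unshifted range peaks at 1/0.01 = 100, vs about 1/0.11 ~ 9.1,
# so its quantization step is roughly an order of magnitude coarser
assert scale_unshifted > 10 * scale_shifted
```

This is why shifting the inputs keeps the test focused on reciprocal numerics rather than on range-clipping artifacts near the pole at zero.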
🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19594

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.

❌ 1 New Failure: as of commit cccb394 with merge base 58b4f26, the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@christine-long-meta has exported this pull request. If you are a Meta employee, you can view the originating Diff in D104532361.
Force-pushed db30c66 to 744487a
Force-pushed 744487a to ce874b4
Force-pushed 744487a to b79321f
Summary: Add int16 activation / int8 weight (a16w8) quantization tests for `aten.mean.dim` on Ethos-U55 and Ethos-U85.

## Changes

- Add `a16w8_mean_test_parameters` dict with 11 test configurations covering keepdim/no-keepdim, positive/negative dims, dim=None, and ranks 1-4
- Add `test_mean_dim_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_mean_dim_a16w8_u85_INT` using `EthosU85PipelineINT` with the same kwargs
- Register `ops/test_mean_dim.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532361
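To make the parameter space concrete, here is a hypothetical sketch of what entries in the `a16w8_mean_test_parameters` dict might look like, plus a small helper showing how negative dims and `dim=None` normalize to axes. The entry names, shapes, and helper are illustrative; only the dict name and the coverage axes (keepdim, negative dims, dim=None, ranks 1-4) come from the summary.

```python
# Illustrative (shape, dim, keepdim) entries; the real dict lives in
# ops/test_mean_dim.py and has 11 configurations.
a16w8_mean_test_parameters = {
    "rank1_dim0": ((8,), 0, False),
    "rank2_neg_dim_keepdim": ((4, 6), -1, True),
    "rank3_keepdim": ((2, 3, 5), 1, True),
    "rank4_dim_none": ((2, 2, 3, 3), None, False),
}

def normalize_dim(dim, rank):
    """Map a possibly-negative dim (or None, meaning all dims) to axis indices."""
    if dim is None:
        return tuple(range(rank))
    return (dim % rank,)

for shape, dim, keepdim in a16w8_mean_test_parameters.values():
    axes = normalize_dim(dim, len(shape))
    assert all(0 <= a < len(shape) for a in axes)
```

Exercising negative dims and `dim=None` alongside the positive-dim cases matters because dim normalization happens before lowering, so a bug there would surface only in configurations like `rank2_neg_dim_keepdim` above.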
Force-pushed e7d90e2 to 37c98c1
Force-pushed b79321f to 3350584
Force-pushed 7bb5c56 to d8dea29
Force-pushed d8dea29 to 0733afe
Force-pushed 0733afe to cccb394