Add a16w8 per-op test for exp (#19591)
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19591
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
✅ You can merge normally (1 unrelated failure). As of commit 13a20cf with merge base 58b4f26: FLAKY — the following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@christine-long-meta has exported this pull request. If you are a Meta employee, you can view the originating Diff in D104532358.
Force-pushed from 343128e to a7fd432.
This PR needs a
Force-pushed from eaa5f44 to 77603b2.
Summary: Add int16 activation / int8 weight (a16w8) quantization tests for `aten.exp` on Ethos-U55 and Ethos-U85.

## Changes

- Add an `a16w8_exp_test_parameters` dict with 3 test configurations covering rank-1, rank-2, and rank-3 tensors
- Add `test_exp_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_exp_a16w8_u85_INT` using `EthosU85PipelineINT` with the same kwargs
- Register `ops/test_exp.py` in the `fbcode/` and `xplat/` `targets.bzl` files

bypass-pytorch-oss-checks

Differential Revision: D104532358
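For intuition on why a quantized tolerance like `qtol=128` is reasonable for int16 activations, here is a minimal, self-contained sketch of the symmetric int16 ("a16") quantization scheme applied to `exp` outputs. This is plain Python, not the ExecuTorch test pipeline; the helper names (`choose_scale`, `quantize`, `dequantize`) are illustrative, not the library's API.

```python
import math

QMIN, QMAX = -(2**15), 2**15 - 1  # int16 range

def choose_scale(values):
    # Symmetric scheme: one scale maps the largest magnitude onto the int16 range.
    return max(abs(v) for v in values) / QMAX

def quantize(x, scale):
    # Round to the nearest quantized step, then clamp to int16.
    q = round(x / scale)
    return max(QMIN, min(QMAX, q))

def dequantize(q, scale):
    return q * scale

# Reference float outputs of exp on a small rank-1 input.
inputs = [-2.0, -0.5, 0.0, 0.5, 2.0]
ref = [math.exp(v) for v in inputs]

scale = choose_scale(ref)
roundtrip = [dequantize(quantize(v, scale), scale) for v in ref]

# Rounding alone contributes at most scale / 2 of error per element, so a
# tolerance of qtol quantized units (qtol * scale in float) leaves ample
# headroom for the backend's fixed-point approximation of exp.
qtol = 128
for r, f in zip(ref, roundtrip):
    assert abs(r - f) <= qtol * scale
```

With 2**16 levels, the quantization step is small relative to the output range, which is why the per-op test can hold a tight float-domain `epsilon` alongside the quantized `qtol`.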
Force-pushed from eaa5f44 to f92c52e.
Force-pushed from 77603b2 to d746c8f.
Force-pushed from f92c52e to 58743d5.
Force-pushed from 58743d5 to 2353c30.
Force-pushed from 2353c30 to d746c8f.
Force-pushed from d746c8f to e3517c4.
Force-pushed from 476030e to f05f3f5.
Force-pushed from f05f3f5 to 8e44dc7.
Force-pushed from 8e44dc7 to 13a20cf.