
Conversation

Ninja91
Contributor

@Ninja91 Ninja91 commented Sep 3, 2025

Stack from ghstack (oldest at bottom):

This diff implements a 16A8W (16-bit activations, 8-bit weights) quantization configuration utility for the ExecuTorch ARM backend, following feedback on D79746479.

Key Changes

1. New Quantization Configuration Function

  • Add get_16a8w_quantization_config() in fbcode/executorch/backends/arm/quantizer/arm_quantizer.py (a rough sketch of the resulting specs follows this list)
  • Provides 16-bit activations observed with HistogramObserver (higher precision than 8A8W)
  • Keeps weights at 8 bits with MinMaxObserver/PerChannelMinMaxObserver (memory-efficient)
  • Supported by TOSA through the EXT-INT16 extension/profile (https://www.mlplatform.org/tosa/tosa_spec.html#_conv2d)
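
A minimal sketch of roughly what such a configuration could contain, following the pattern of the existing helpers in `arm_quantizer.py`. This is not the code in this diff: the function name, the `eps` values, and the exact int16 bounds below are illustrative assumptions, and the real helper presumably wraps the specs in the ARM backend's quantization config container.

```python
import torch
from torch.ao.quantization.observer import (
    HistogramObserver,
    MinMaxObserver,
    PerChannelMinMaxObserver,
)
from torch.ao.quantization.quantizer import QuantizationSpec


def sketch_16a8w_specs(is_per_channel: bool = True):
    """Illustrative only: returns (activation_spec, weight_spec)."""
    # 16-bit symmetric activations, calibrated with HistogramObserver.
    act_spec = QuantizationSpec(
        dtype=torch.int16,
        quant_min=-32768,  # int16 bounds assumed; the actual diff may clamp differently
        quant_max=32767,
        qscheme=torch.per_tensor_symmetric,
        is_dynamic=False,
        observer_or_fake_quant_ctr=HistogramObserver.with_args(eps=2**-12),
    )
    # 8-bit weights, per-channel where supported, min/max observed.
    weight_observer = PerChannelMinMaxObserver if is_per_channel else MinMaxObserver
    weight_spec = QuantizationSpec(
        dtype=torch.int8,
        quant_min=-127,
        quant_max=127,
        qscheme=torch.per_channel_symmetric if is_per_channel else torch.per_tensor_symmetric,
        ch_axis=0 if is_per_channel else None,
        is_dynamic=False,
        observer_or_fake_quant_ctr=weight_observer.with_args(eps=2**-12),
    )
    return act_spec, weight_spec
```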

Benefits

  • Better Precision: 16-bit activations provide higher precision than 8-bit activations, which helps carry precision across time steps in recurrent neural networks (a usage sketch is appended at the end of this description).
    @exported-using-ghexport

@bypass-github-export-checks
@bypass-github-pytorch-ci-checks
@bypass-github-executorch-ci-checks

Differential Revision: D81550512
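
For reviewers who want to try the new config, a hypothetical end-to-end PT2E flow is sketched below. The `TOSAQuantizer` construction, the `set_global()` call, and the default arguments to `get_16a8w_quantization_config()` follow the existing 8A8W examples and are assumptions here; `tosa_spec` is left as a placeholder for an INT16-capable TOSA specification.

```python
import torch
from executorch.backends.arm.quantizer.arm_quantizer import (
    TOSAQuantizer,
    get_16a8w_quantization_config,
)
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)

    def forward(self, x):
        return torch.relu(self.conv(x))


model = TinyModel().eval()
example_inputs = (torch.randn(1, 3, 32, 32),)

# tosa_spec must name an INT16-capable TOSA target; construction omitted here.
tosa_spec = ...
quantizer = TOSAQuantizer(tosa_spec)
quantizer.set_global(get_16a8w_quantization_config())

exported = torch.export.export(model, example_inputs).module()
prepared = prepare_pt2e(exported, quantizer)   # insert observers
prepared(*example_inputs)                      # run calibration data through
quantized = convert_pt2e(prepared)             # fold observers into q/dq ops
```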

@Ninja91 Ninja91 requested a review from digantdesai as a code owner September 3, 2025 05:25

pytorch-bot bot commented Sep 3, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13898

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit 878f63f with merge base ae07cb6:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Sep 3, 2025
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D81550512

Ninja91 added a commit that referenced this pull request Sep 3, 2025
Pull Request resolved: #13898


@jackzhxng
Contributor

Hi @Ninja91, please add the release notes: arm label to these PRs so we can call out your work in our next release notes!

@Ninja91 Ninja91 added the release notes: arm label Sep 5, 2025