
[ez][ET-VK][q8ta_conv2d_pw] Halve accumulator to lift Adreno occupancy#19396

Merged
meta-codesync[bot] merged 2 commits into gh/SS-JIA/527/base from gh/SS-JIA/527/head on May 9, 2026

Conversation

Contributor

SS-JIA commented May 8, 2026

Stack from ghstack (oldest at bottom):

The pointwise quantized conv shader allocated `ivec4 out_accum[4][2]`, i.e. 32 int32 accumulators per thread, which on Adreno 740 pinned 28 full-precision registers per thread and capped ALU fiber occupancy at 37%. AOC reported 26.7% exposed long-latency stalls, evidence that occupancy was too low to hide texture and SSBO latency.

This change halves the accumulator to 16 ints by reducing `TILE_N4` from 2 to 1: each thread now covers 4 widths × 4 output channels, a single 4×4 output block. The compensating dispatch change is in `pick_q8ta_conv2d_pw_global_wg_size`: `global_wg.x` doubles, since each thread now covers half as many output-channel blocks as before.

Each thread still loads 1 input `ivec4` (4 widths) per K-iteration, preserving the natural int8x4 packing alignment, so arithmetic intensity drops only 25% (2.67 → 2.0 MAC/B), in contrast to the variant where `TILE_M` is halved instead, which drops AI by 50%.
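The shader itself is Vulkan GLSL; the C++ sketch below only mirrors its tiling constants and the shape of the dispatch compensation. The helper's signature and the y/z axis mapping are assumptions for illustration; only the x-axis behavior (output-channel blocks) is taken from the description above.

```cpp
#include <cstdint>

// Per-thread register tile, mirroring the shader's accumulator layout:
//   before: ivec4 out_accum[TILE_M][2] -> 4 * 2 * 4 = 32 int32 accumulators
//   after:  ivec4 out_accum[TILE_M][1] -> 4 * 1 * 4 = 16 int32 accumulators
constexpr uint32_t TILE_M = 4;   // output widths per thread
constexpr uint32_t TILE_N4 = 1;  // ivec4 blocks of output channels (was 2)
static_assert(TILE_M * TILE_N4 * 4 == 16, "accumulator halved from 32 ints");

constexpr uint32_t div_round_up(uint32_t n, uint32_t d) {
  return (n + d - 1) / d;
}

struct WgSize { uint32_t x, y, z; };

// Hypothetical stand-in for pick_q8ta_conv2d_pw_global_wg_size (the real
// helper takes ET-VK graph/node arguments). global_wg.x indexes
// output-channel blocks, so halving TILE_N4 from 2 to 1 doubles it while the
// total dispatched work stays the same.
WgSize pick_q8ta_conv2d_pw_global_wg_size(uint32_t out_c, uint32_t out_w,
                                          uint32_t out_h) {
  return WgSize{
      div_round_up(out_c, TILE_N4 * 4),  // doubles relative to TILE_N4 = 2
      div_round_up(out_w, TILE_M),       // assumed: width tiles on y
      out_h,                             // assumed: rows on z
  };
}
```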

Differential Revision: D103770023
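For reference, the MAC/B figures above can be reproduced with a back-of-envelope count per K-iteration (4 input channels), assuming int8 data throughout: the input load is one packed `ivec4` (4 widths × 4 channels = 16 B) and the weight load is (output channels in the tile) × 4 input channels × 1 B.

```cpp
#include <cstdio>

// MACs and bytes per K-iteration for a (widths x out_ch) output tile,
// assuming int8 activations/weights and a full 16 B ivec4 input load.
static double mac_per_byte(int widths, int out_ch) {
  const int macs = widths * out_ch * 4;  // 4 input channels per K-iter
  const int in_bytes = 4 * 4;            // one packed ivec4: 4 widths x 4 ch
  const int wt_bytes = out_ch * 4;       // int8 weights for the tile
  return double(macs) / double(in_bytes + wt_bytes);
}

int main() {
  printf("old (4w x 8oc): %.2f MAC/B\n", mac_per_byte(4, 8));  // 2.67
  printf("new (4w x 4oc): %.2f MAC/B\n", mac_per_byte(4, 4));  // 2.00
  // Halving TILE_M instead would still load the full 16 B ivec4 (to keep the
  // int8x4 packing) but use only 2 of its 4 widths:
  // 2 * 8 * 4 = 64 MACs over 16 + 32 = 48 B -> 1.33 MAC/B, a 50% drop.
  printf("TILE_M halved : %.2f MAC/B\n", (2.0 * 8 * 4) / (16 + 32));
  return 0;
}
```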


pytorch-bot Bot commented May 8, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19396

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ No Failures

As of commit 635b28a with merge base c564936:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla Bot added the CLA Signed label May 8, 2026

github-actions Bot commented May 8, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Update on "[ez][ET-VK][q8ta_conv2d_pw] Halve accumulator to lift Adreno occupancy"

meta-codesync[bot] merged commit f691886 into gh/SS-JIA/527/base May 9, 2026
175 checks passed
meta-codesync[bot] deleted the gh/SS-JIA/527/head branch May 9, 2026 04:57
meta-codesync[bot] temporarily deployed to cherry-pick-bot May 9, 2026 04:57
SS-JIA pushed a commit that referenced this pull request May 9, 2026
Pull Request resolved: #19396

ghstack-source-id: 379519735
@exported-using-ghexport

Differential Revision: [D103770023](https://our.internmc.facebook.com/intern/diff/D103770023/)

Labels

CLA Signed, fb-exported, meta-exported


2 participants