[ez][ET-VK][partitioner] Allow layout-agnostic ops to accept quantized layouts #19436
Pull Request resolved: #19395

Two changes that together let the partitioner keep PACKED_INT8 layouts flowing through identity-like ops, eliminating spurious clone dispatches:

1. `utils.py`: `ANY_STORAGE_INCL_PACKED_INT8` (renamed from `ALL_STORAGES_REPSET`) previously claimed every layout (including `PACKED_INT8_*`) on the texture side, but PACKED_INT8 is buffer-only by convention — the texture indexing helpers and `required_image_extents` don't know about quantized layouts. Narrow the texture side to `all_memory_layouts` (float-only). Every existing call site is either an intersection identity or a wildcard for non-tensor / not-yet-prepacked args, so this narrowing is non-breaking, and the repset can now act as a true universal set when intersected against quant-aware repsets. The new name slots cleanly next to `ANY_STORAGE` / `ANY_BUFFER` / `ANY_TEXTURE` and tells the reader exactly what is added: "like `ANY_STORAGE`, but also admits PACKED_INT8 (on the buffer side)".

2. `op_registry.py`: switch `view_copy` / `clone` / `_clone_dim_order` / `alias_copy` from `inputs_storage=ANY_STORAGE` to `inputs_storage=ANY_STORAGE_INCL_PACKED_INT8`. `ANY_STORAGE` is float-only, so when one of these no-op identity ops sits between two q8ta ops, the BFS in `TagMemoryMetaPass.constrain_op_*_repset` short-circuits (zero overlap with `PACKED_INT8_BUFFER`) and forces transitions on both sides. With `ANY_STORAGE_INCL_PACKED_INT8` they now admit both float and quantized layouts, and the redundant-op transform folds them away.

The 31 other ops using `ANY_STORAGE` are real compute ops (binaryop, comparison, softmax, argreduce, permute_copy, etc.) whose float-only kernels do not accept quantized int8x4 layouts (the q8ta_* variants are separate ops), so those are left alone.

On RefineNet 24feat (1x3x256x144), the 8 `_clone_dim_order` ops the partitioner had been inserting around the 4 fused `q8ta_pixel_shuffle` nodes are now folded by the delegate. Runtime `q8ta_clone` dispatches drop from 11 to 3 (the 3 residuals are unrelated, from the original model graph).

ghstack-source-id: 379519734
@exported-using-ghexport
Differential Revision: [D103770022](https://our.internmc.facebook.com/intern/diff/D103770022/)
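The short-circuit behavior described above can be modeled as plain set intersection. The sketch below is a toy illustration, not the real `utils.py` / `TagMemoryMetaPass` code: repsets are modeled as frozensets of layout-name strings, and the concrete layout names inside each set are illustrative placeholders.

```python
# Toy model of layout representation sets ("repsets"). The constant names
# mirror the PR; the member layout strings are illustrative, not the real enum.
FLOAT_LAYOUTS = frozenset({"W_PACKED", "H_PACKED", "C_PACKED"})
PACKED_INT8_LAYOUTS = frozenset(
    {"PACKED_INT8_4W4C", "PACKED_INT8_4C1W", "PACKED_INT8_CONV2D"}
)

ANY_STORAGE = FLOAT_LAYOUTS  # float-only: the old clone/view_copy constraint
ANY_STORAGE_INCL_PACKED_INT8 = FLOAT_LAYOUTS | PACKED_INT8_LAYOUTS
PACKED_INT8_BUFFER = PACKED_INT8_LAYOUTS  # what a q8ta neighbor offers

def constrain(op_repset, neighbor_repset):
    """Mimic the BFS intersection step: an empty overlap means the layouts
    are incompatible, forcing a transition (clone dispatch) at that edge."""
    overlap = op_repset & neighbor_repset
    return overlap if overlap else None  # None => transition inserted

# Old behavior: zero overlap with PACKED_INT8, clones forced on both sides.
assert constrain(ANY_STORAGE, PACKED_INT8_BUFFER) is None
# New behavior: the quantized layout flows through the identity op unchanged.
assert constrain(ANY_STORAGE_INCL_PACKED_INT8, PACKED_INT8_BUFFER) == PACKED_INT8_LAYOUTS
```

Once the identity op's repset overlaps the neighbor's, the op carries the quantized layout through and the redundant-op transform can fold it away.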
Pull Request resolved: #19396

The pointwise quantized conv shader allocated `ivec4 out_accum[4][2]` = 32 int32 accumulators per thread, which on Adreno 740 pinned 28 full-precision registers per thread and capped ALU fiber occupancy at 37%. AOC reported 26.7% exposed long-latency stalls, evidence that occupancy was too low to hide texture and SSBO latency.

Halve the accumulator to 16 ints by reducing TILE_N4 from 2 to 1 (each thread now covers 4 widths × 4 output channels = a single 4×4 output block). The compensating dispatch change is in `pick_q8ta_conv2d_pw_global_wg_size`: `global_wg.x` doubles, since each thread covers half as many output channel blocks as before. Each thread still loads 1 input ivec4 (4 widths) per K-iter, preserving the natural int8x4 packing alignment, so arithmetic intensity drops only 25% (2.67 → 2.0 MAC/B), in contrast to the variant where TILE_M is halved, which drops AI by 50%.

ghstack-source-id: 379519735
@exported-using-ghexport
Differential Revision: [D103770023](https://our.internmc.facebook.com/intern/diff/D103770023/)
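The arithmetic-intensity numbers above can be reproduced with a back-of-envelope model. This is a sketch of the counting argument only (not the shader): it assumes one ivec4 input load (16 bytes) per K-iter, 4 input channels per K-iter from int8x4 packing, and int8 weights, which together yield the 2.67 and 2.0 MAC/B figures quoted in the description.

```python
# Back-of-envelope MAC-per-byte model for one thread, one K-iteration.
# Assumptions (not from the shader source): int8x4 => 4 input channels per
# K-iter, 4 output channels per ivec4 accumulator block, 1-byte int8 weights.
def mac_per_byte(tile_m, tile_n4, input_bytes):
    k_block = 4                        # input channels consumed per K-iter
    out_ch = 4 * tile_n4               # output channels covered per thread
    macs = tile_m * out_ch * k_block   # MACs performed per K-iter
    weight_bytes = out_ch * k_block    # int8 weight bytes loaded per K-iter
    return macs / (input_bytes + weight_bytes)

# Original tiling: TILE_M=4 widths, TILE_N4=2 -> 32 accumulators, 2.67 MAC/B.
assert round(mac_per_byte(4, 2, input_bytes=16), 2) == 2.67
# This diff: TILE_N4=1 -> 16 accumulators, 2.0 MAC/B (a 25% drop).
assert mac_per_byte(4, 1, input_bytes=16) == 2.0
# Rejected variant: halving TILE_M instead still pays for the full ivec4
# input load, dropping AI to 1.33 MAC/B (a 50% drop).
assert round(mac_per_byte(2, 2, input_bytes=16), 2) == 1.33
```

Under these assumptions, halving TILE_N4 removes weight traffic proportionally with the dropped MACs, while halving TILE_M would strand half of each ivec4 input load, which is why the TILE_N4 reduction was chosen.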
The previous commit on this stack added the fused `q8ta_pixel_shuffle` custom op and, to make pattern matching easier, added `aten.pixel_shuffle.default` to the partitioner's `ops_not_to_decompose` list. That change had a side effect: any non-quantized model that uses `aten.pixel_shuffle.default` now reaches the Vulkan backend with the op intact, but the backend had no implementation registered for it, so those models fail to lower. This commit adds a layout- and dtype-agnostic implementation of `aten.pixel_shuffle.default` so existing models keep working.

The implementation rearranges `(N, C*r*r, H, W)` -> `(N, C, H*r, W*r)`, where output element `(n, c, h_out, w_out)` reads from input element `(n, c*r*r + (h_out%r)*r + (w_out%r), h_out/r, w_out/r)`.

Two compute shaders are added because the work-assignment paradigm differs between storage types:

- `pixel_shuffle_buffer.glsl` assigns one thread per output element and uses `linear_idx_to_tensor_idx` against the output `BufferMetadata`, which makes it agnostic to the underlying `dim_order`.
- `pixel_shuffle_texture.glsl` assigns one thread per output texel and uses `TextureMetadata` plus `indexing.glslh` helpers so the same shader handles channels-, width-, and height-packed layouts.

The texture shader uses the `safe_idx` / `safe_set` if/else helpers everywhere a UBO-backed `ivec4` is indexed by a spec-constant-derived value, to avoid the Adreno 740 SPIR-V compiler crash on `ubo_struct.sizes[spec_const]` when the spec const resolves to 1 or 2. The buffer shader does not dynamically index any UBO `ivec4`.

Op registration: `register_pixel_shuffle()` in `op_registry.py` uses `ANY_STORAGE`, `FP_T`, and `supports_resize=True`, so the partitioner accepts both storage types and both fp32/fp16, across all packed layouts.

Differential Revision: [D104462059](https://our.internmc.facebook.com/intern/diff/D104462059/)
ghstack-source-id: 379519849
Pull Request resolved: #19404
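The index mapping above can be sanity-checked with a pure-Python reference. This is an illustrative sketch (not the GLSL, and `pixel_shuffle_ref` is a hypothetical helper name): it applies the per-element formula to a flat NCHW array and can be checked against the standard view -> permute -> view decomposition by hand.

```python
# Reference for the per-element mapping both shaders implement:
# output (n, c, h_out, w_out) reads input
# (n, c*r*r + (h_out % r)*r + (w_out % r), h_out // r, w_out // r).
def pixel_shuffle_ref(x, n, c_rr, h, w, r):
    """x: flat list in NCHW order with shape (n, c_rr, h, w).
    Returns a flat list with shape (n, c_rr // (r*r), h*r, w*r)."""
    c = c_rr // (r * r)
    out = []
    for ni in range(n):
        for ci in range(c):
            for ho in range(h * r):
                for wo in range(w * r):
                    ic = ci * r * r + (ho % r) * r + (wo % r)
                    # flat NCHW index of the input element being read
                    out.append(x[((ni * c_rr + ic) * h + ho // r) * w + wo // r])
    return out

# 1x4x1x2 input, r=2: each 2-wide row of 4 channels becomes one 2x4 plane.
# Channel k contributes to spatial offset (k // r, k % r) within each block.
assert pixel_shuffle_ref(list(range(8)), 1, 4, 1, 2, 2) == [0, 2, 1, 3, 4, 6, 5, 7]
```

Because the loop is driven purely by output coordinates, the same mapping works regardless of how the backend ultimately linearizes storage, which is what lets the buffer shader stay `dim_order`-agnostic.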
Pull Request resolved: #19397

A RefineNet segmentation model spends ~860 us (~17% of inference) on the textbook decomposed PyTorch PixelShuffle chain (q8ta_dequantize -> view -> permute -> view -> q8ta_quantize), repeated four times in the segmentation head. This is wasteful: it materializes three buffers and round-trips through fp32 just to perform what is fundamentally a byte permutation on an int8 tensor.

This diff introduces `et_vk.q8ta_pixel_shuffle.default`, a single fused kernel that operates directly on int8x4 packed buffers. Each thread writes one output int32 word (= 4 consecutive output channels at one (n, oh, ow) spatial position). Dispatch is 1D over total output int words, sized as `N * div_up_4(C_out) * H_out * W_out` with a 64-thread local workgroup. The four channel lanes inside an output int come from four different input int words (input channels are spaced by r*r), so each thread issues four input loads. The (oh % r, ow % r) -> input lane mapping is constant for a given thread because all four output lanes share (oh, ow). The first byte index is computed via the layout-aware helper `tensor4d_idx_to_buf_idx`; subsequent lanes derive their byte index by adding `stride[packed_dim] * block_numel`, a layout-only constant, so only one helper call is needed per thread. When input/output share scale and zero-point (the typical residual-path case), the requantize math is skipped and the kernel becomes a pure byte shuffle (selected via the `passthrough` push constant).

The op accepts the channels-packed PACKED_INT8 family (PACKED_INT8_4W4C, PACKED_INT8_4C1W, PACKED_INT8_CONV2D) on both input and output. The partitioner routes the op into whichever channels-packed layout the surrounding q8ta_conv2d_pw / q8ta_add ops produce/consume (PACKED_INT8_4W4C on RefineNet). Restricting to the channels-packed family means the inner block axis is always C and the lane within an int word is constant per thread, which removes the need for layout-block-config spec consts in the shader.

Rather than matching the decomposed view -> permute -> view chain after to_edge lowering, this diff preserves `aten.pixel_shuffle.default` through to_edge by adding it to the partitioner's `ops_to_not_decompose` list. The matcher then operates on the much simpler dq -> [clone] -> aten.pixel_shuffle.default -> [clone] -> q form. This keeps the matcher robust against edge-dialect / clone-insertion variations.

Pieces in this diff:

- Partitioner / fuser:
  - `partitioner/vulkan_partitioner.py` — adds `aten.pixel_shuffle.default` to `ops_to_not_decompose` so the framework preserves the op through to_edge lowering.
  - `patterns/quantized_pixel_shuffle.py` — detects dq -> [clone] -> aten.pixel_shuffle.default -> [clone] -> q and rewrites it to `et_vk.q8ta_pixel_shuffle.default`. Transparently skips clone / `_clone_dim_order` nodes between any pair of nodes.
- Runtime kernel:
  - `runtime/graph/ops/glsl/q8ta_pixel_shuffle.glsl` + `.yaml`
  - `runtime/graph/ops/impl/Q8taPixelShuffle.cpp` + `.h`
- Op definitions:
  - `custom_ops_lib.py`: registers `et_vk.q8ta_pixel_shuffle` (Python op definition).
  - `op_registry.py`: `inputs_storage = utils.PACKED_INT8_CHANNELS_PACKED_BUFFER`.
- Tests:
  - `test/custom_ops/impl/TestQ8taPixelShuffle.cpp`: test op that runs q -> [fused | unfused chain] -> dq, with selectable input/output int8 layouts via str args. The op accepts the channels-packed family; the `layout_from_string` helper currently exercises 4W4C.
  - `test/custom_ops/test_q8ta_pixel_shuffle.cpp`: 16 ACCU + 8 PERF cases (4 shapes x 2 qparam settings x 2 impl_selectors x 1 layout combination, 4W4C -> 4W4C).
  - `test/test_vulkan_passes.py`: positive and negative pattern-matcher unit tests against the un-decomposed form.
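The per-thread gather described above can be sketched in a few lines. This is an illustrative model, not the GLSL (`gather_input_channels` is a hypothetical name, and it returns logical channel indices rather than the byte offsets the shader computes via `tensor4d_idx_to_buf_idx`):

```python
# One thread emits one output int32 word: 4 consecutive output channels at a
# single (oh, ow). Each output lane reads a different input channel, and all
# lanes share the same (oh % r, ow % r) offset because they share (oh, ow).
def gather_input_channels(c_out_word, oh, ow, r):
    base = (oh % r) * r + (ow % r)  # constant per thread
    # output channels 4*w .. 4*w+3 map to input channels spaced by r*r
    return [(4 * c_out_word + lane) * r * r + base for lane in range(4)]

# r=2, output word 0 at (oh=1, ow=0): offset (1%2)*2 + 0 = 2, so the thread
# gathers input channels 2, 6, 10, 14 -- spaced by r*r = 4, and each landing
# in a different packed int8x4 word (2//4, 6//4, 10//4, 14//4 all differ),
# hence the four input loads per thread.
assert gather_input_channels(0, 1, 0, 2) == [2, 6, 10, 14]
```

Since consecutive lanes differ by a fixed channel stride of r*r, the shader only needs one layout-aware index computation per thread and can derive the remaining three byte indices by adding a layout-only constant.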
ghstack-source-id: 379519848
@exported-using-ghexport
Differential Revision: [D104099055](https://our.internmc.facebook.com/intern/diff/D104099055/)
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #19395 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/526/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/526/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/526/orig
Differential Revision: D103770022
@diff-train-skip-merge