[ET-VK] Fix mixed-dtype binary ops and comparison op padding bugs#17862

Merged
SS-JIA merged 2 commits into gh/SS-JIA/457/orig from gh/SS-JIA/458/orig on Mar 5, 2026
Conversation

@pytorchbot
Collaborator

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #17849 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/458/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/458/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/457/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/458/orig
Differential Revision: D95217948
@diff-train-skip-merge

Two bugs caused incorrect outputs in models with mixed-dtype binary operations
(e.g. EdgeTAM remaining frames):

1. Mixed-dtype binary ops (e.g. int arange vs float tensor) were fed to shaders
   that declare both inputs with the same DTYPE, causing data misinterpretation.
   This is now fixed by adding an `InsertDtypePromotionPass` export pass that
   inserts `_to_copy` nodes to promote inputs to a common dtype at compile time.
   The `_to_copy` op is extended to support int<->float conversions via new
   `view_convert_texture` shaders, and the previous float/half-only restriction
   in ToCopy.cpp is replaced with branching logic that uses BlitNode for
   same-dtype/float<->half and view_convert shaders for other conversions.

2. Texture3d comparison operators (gt, lt, le, ge, eq) used `all()` to reduce
   component-wise `bvec4` results to a single bool. With packed textures where
   padding components are zero, `all()` always returned false because padding
   zeros fail comparison against non-zero values. Fixed by removing `all()` so
   the result stays as a component-wise `bvec4`, which is correctly converted to
   `uvec4` for the Bool output texture.
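The padding failure in bug 2 can be illustrated outside of GLSL. The sketch below is plain Python (not shader code) simulating a 4-component texel: `gt_texel_with_all` mirrors the buggy `all()` reduction, `gt_texel_componentwise` mirrors the fix. The function names and the specific values are illustrative, not from the actual shaders.

```python
# Illustrative sketch of the comparison-op padding bug, in plain Python.
# A texel holds 4 components; when the packed dim is not a multiple of 4,
# the trailing components are zero padding.

def gt_texel_with_all(a, b):
    # Buggy behavior: reduce the per-component bvec4 to a single bool.
    return all(x > y for x, y in zip(a, b))

def gt_texel_componentwise(a, b):
    # Fixed behavior: keep the component-wise result, converted to a
    # uvec4-style list of 0/1 lanes for the Bool output texture.
    return [1 if x > y else 0 for x, y in zip(a, b)]

# Two valid components compared against 3, plus two zero-padding lanes.
a = [5.0, 7.0, 0.0, 0.0]
b = [3.0, 3.0, 0.0, 0.0]

# all() returns False even though every valid component satisfies the
# comparison, because the padding lanes fail 0 > 0.
print(gt_texel_with_all(a, b))       # False
print(gt_texel_componentwise(a, b))  # [1, 1, 0, 0]
```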

Additional changes:
- New `view_convert_texture.glsl` shader and YAML for texture dtype conversion
- `add_view_copy_convert_texture_node` added to View.cpp/h
- `_to_copy` op registry updated to accept int dtypes (FP_INT_T)

Differential Revision: [D95217948](https://our.internmc.facebook.com/intern/diff/D95217948/)

ghstack-source-id: 347411474
Pull Request resolved: #17849
@pytorchbot pytorchbot requested a review from SS-JIA as a code owner March 4, 2026 23:44
@pytorch-bot

pytorch-bot Bot commented Mar 4, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17862

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Mar 4, 2026
…g ops

The insert_prepack_nodes pass was skipping prepack node insertion for all
constant tensor args of ops with supports_prepacking=True. However, these ops
only handle prepacking for weight/bias tensors internally; the primary input
tensor is still expected to be a GPU tensor. If the primary input happens to be
a constant tensor (serialized as TensorRef), the op throws an exception at
runtime.

Fix this by detecting the primary input index directly in insert_prepack_nodes.
Most prepacking ops have the primary input at arg 0, but embedding uses arg 1
since its signature is embedding(weight, indices, ...). The pass now checks
whether a constant tensor is used as the primary input of a prepacking op, and
if so, still inserts a prepack node for it.
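The primary-input check described above can be sketched in a few lines of Python. `PRIMARY_INPUT_IDX`, `primary_input_index`, and `needs_prepack` are illustrative stand-ins, not the actual names used by the `insert_prepack_nodes` pass.

```python
# Hedged sketch of the primary-input detection for prepacking ops.
# Most prepacking ops take the runtime input at arg 0; embedding's
# signature is embedding(weight, indices, ...), so its primary input
# is arg 1.
PRIMARY_INPUT_IDX = {"embedding": 1}

def primary_input_index(op_name: str) -> int:
    return PRIMARY_INPUT_IDX.get(op_name, 0)

def needs_prepack(op_name: str, arg_idx: int, is_constant: bool) -> bool:
    """Even when an op supports internal prepacking of weights/bias,
    a constant tensor used as the primary input still needs an
    explicit prepack node, since the op expects a GPU tensor there."""
    if not is_constant:
        return False
    return arg_idx == primary_input_index(op_name)

print(needs_prepack("conv2d", 0, True))     # True: constant primary input
print(needs_prepack("embedding", 1, True))  # True: primary input is arg 1
print(needs_prepack("conv2d", 1, True))     # False: handled internally
```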

Differential Revision: [D95217949](https://our.internmc.facebook.com/intern/diff/D95217949/)

ghstack-source-id: 347411473
Pull Request resolved: #17850
@SS-JIA SS-JIA merged commit 8691147 into gh/SS-JIA/457/orig Mar 5, 2026
32 of 33 checks passed
@SS-JIA SS-JIA deleted the gh/SS-JIA/458/orig branch March 5, 2026 00:29
