
[ET-VK] Fix force_fp16 texture bias being silently rejected for CONTIGUOUS_ANY ops #18779

Merged
SS-JIA merged 2 commits into main from gh/SS-JIA/516/orig on Apr 8, 2026

Conversation

@pytorchbot (Collaborator)

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #18770 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/516/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/516/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/516/orig
Differential Revision: D100004702
@diff-train-skip-merge

[ET-VK] Fix force_fp16 texture bias being silently rejected for CONTIGUOUS_ANY ops

Pull Request resolved: #18770

The `force_fp16` path in `TagMemoryMetaPass` applies an `ANY_TEXTURE`
repset to bias ops toward texture storage. However,
`try_constrain_with_arg_repset` has a packed-dim compatibility check that
requires ALL of the source repset's packed-dim indices (PDIs) to exist in
the output repset. `ANY_TEXTURE` has three texture layouts (WP, HP, CP),
but `CONTIGUOUS_ANY` outputs only support WP, so the check fails and the
texture bias is silently dropped.

Without the bias, buffer storage cascades from ops that must use buffer
(e.g. embedding with vocab exceeding texture limits) into downstream ops
that could use texture, causing unnecessary buffer↔texture transitions.

Fix: check PDI compatibility against the intersection of the arg and
source repsets (what would actually be applied) rather than the raw
source repset. The intersection `ANY_TEXTURE ∩ CONTIGUOUS_ANY` is
`WIDTH_PACKED_TEXTURE`, which IS compatible with the output.
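To illustrate, here is a minimal, self-contained sketch of the old and new checks, modeling a repset as a set of PDIs. The function names and integer encodings below are assumptions for illustration, not the actual `TagMemoryMetaPass` API:

```python
# Hedged sketch: a repset is modeled as a set of packed-dim indices (PDIs).
# The names below are illustrative stand-ins, not the real ExecuTorch API.
WIDTH_PACKED, HEIGHT_PACKED, CHANNELS_PACKED = 0, 1, 2

ANY_TEXTURE = {WIDTH_PACKED, HEIGHT_PACKED, CHANNELS_PACKED}  # WP, HP, CP
CONTIGUOUS_ANY = {WIDTH_PACKED}  # CONTIGUOUS_ANY outputs only support WP

def old_is_compatible(source: set, output: set) -> bool:
    # Old check: ALL of the source repset's PDIs must exist in the output.
    return source.issubset(output)

def new_is_compatible(arg: set, source: set, output: set) -> bool:
    # Fixed check: test what would actually be applied, i.e. arg ∩ source.
    applied = arg & source
    return bool(applied) and applied.issubset(output)

# The ANY_TEXTURE bias is silently rejected by the old check...
assert not old_is_compatible(ANY_TEXTURE, CONTIGUOUS_ANY)
# ...but the intersection is {WIDTH_PACKED}, which IS compatible.
assert new_is_compatible(CONTIGUOUS_ANY, ANY_TEXTURE, CONTIGUOUS_ANY)
```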

Authored by Claude.
ghstack-source-id: 364280901
@exported-using-ghexport

Differential Revision: [D100004702](https://our.internmc.facebook.com/intern/diff/D100004702/)

pytorch-bot Bot commented Apr 8, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18779

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 37 Pending

As of commit 82044eb with merge base 4afd7f9:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorchbot pytorchbot requested a review from SS-JIA as a code owner April 8, 2026 20:58
@meta-cla meta-cla Bot added the CLA Signed label Apr 8, 2026

github-actions Bot commented Apr 8, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Pull Request resolved: #18771

When the same tensor is consumed by multiple ops that need a different
storage representation, the pass previously inserted a separate clone
transition for each consumer. Now it caches transition clones keyed by
`(source_node, target_storage_type, target_layout)` and reuses an
existing clone when the same transition is needed again.

For Qwen3 0.6B (8da4w fp16), the embedding output (BUFFER, because
vocab_size exceeds texture limits) feeds both rms_norm and add, which
need TEXTURE. Previously two clones were inserted; now one clone is
shared.
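A minimal sketch of the caching scheme, using stand-in types rather than the actual graph IR; `Node` and `TransitionInserter` are hypothetical names for illustration:

```python
# Hedged sketch of transition-clone caching; Node and TransitionInserter
# are illustrative stand-ins, not the actual ExecuTorch pass types.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    name: str

@dataclass
class TransitionInserter:
    inserted: list = field(default_factory=list)
    # Cache keyed by (source node, target storage type, target layout).
    cache: dict = field(default_factory=dict)

    def clone_for_transition(self, src: Node, storage: str, layout: str) -> Node:
        key = (src, storage, layout)
        if key not in self.cache:
            # First request for this transition: insert a new clone node.
            clone = Node(f"{src.name}__{storage}_{layout}_clone")
            self.inserted.append(clone)
            self.cache[key] = clone
        # Subsequent requests for the same transition reuse the clone.
        return self.cache[key]

# Embedding output (BUFFER) feeds two TEXTURE consumers: one shared clone.
inserter = TransitionInserter()
emb = Node("embedding_out")
a = inserter.clone_for_transition(emb, "TEXTURE", "WIDTH_PACKED")
b = inserter.clone_for_transition(emb, "TEXTURE", "WIDTH_PACKED")
assert a is b and len(inserter.inserted) == 1
```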

Authored by Claude.
ghstack-source-id: 364280900
@exported-using-ghexport

Differential Revision: [D100004700](https://our.internmc.facebook.com/intern/diff/D100004700/)
@SS-JIA SS-JIA merged commit 21d9c64 into main Apr 8, 2026
164 of 166 checks passed
@SS-JIA SS-JIA deleted the gh/SS-JIA/516/orig branch April 8, 2026 21:43
