
Conversation

@SS-JIA
Contributor

@SS-JIA SS-JIA commented Nov 12, 2025

ssjia added 4 commits November 12, 2025 15:22
As title. The current implementation of split_with_sizes uses functions from the `Copy.[h|cpp]` file, in particular `add_copy_channel_offset_node`. However, the shaders dispatched by this function have a critical bug: the output tensor is passed in twice with different access types, i.e.

```cpp
graph.execute_nodes().emplace_back(new DispatchNode(
        graph,
        VK_KERNEL_FROM_STR(kernel_name),
        global_size,
        local_size,
        // Inputs and Outputs
        {
            {out, vkapi::kWrite},
            {out, vkapi::kRead},
            {in, vkapi::kRead},
        },
```

This creates many validation layer errors because the memory barriers for the resource cannot be formed properly; the shader essentially relies on undefined behaviour to work correctly.

To fix, this diff re-implements the operator from scratch with a dedicated compute shader.
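For contrast, a well-formed dispatch binds each resource exactly once with a single access type, so that one unambiguous memory barrier can be emitted per resource. The snippet below is a minimal illustrative sketch mirroring the excerpt above; it is not the actual fix in this diff, which replaces the operator with a dedicated shader:

```cpp
// Illustrative only: `out` is bound once as write-only and `in` once as
// read-only, so the graph can form a single well-defined barrier for each.
graph.execute_nodes().emplace_back(new DispatchNode(
        graph,
        VK_KERNEL_FROM_STR(kernel_name),
        global_size,
        local_size,
        // Inputs and Outputs
        {
            {out, vkapi::kWrite},
            {in, vkapi::kRead},
        },
```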

Differential Revision: [D86910642](https://our.internmc.facebook.com/intern/diff/D86910642/)

[ghstack-poisoned]
As title. Make sure that ops that do not support quantized tensors do not get assigned memory layouts that are intended for quantized tensors.

Differential Revision: [D86910639](https://our.internmc.facebook.com/intern/diff/D86910639/)

[ghstack-poisoned]
Title says it all!

Add two additional export options:

1. `skip_memory_planning` - skips the memory planning pass, which can be useful for debugging.
2. `small_texture_limits` - sets the default texture limits to (2048, 2048, 2048), which is compatible with more devices (e.g. desktop/laptop GPUs) than the default of (16384, 16384, 2048), which targets mobile GPUs (see the sketch after this list).
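For reference, the two presets map to the 3D texture extents below. This is a hypothetical, self-contained sketch: the tuple values come from the list above, but the `TextureLimits` alias and the `select_texture_limits` helper are illustrative and not part of the export script.

```cpp
#include <array>
#include <cstdint>

// Texture limit presets described above (width, height, depth).
using TextureLimits = std::array<int64_t, 3>;

constexpr TextureLimits kDefaultTextureLimits = {16384, 16384, 2048}; // mobile GPUs
constexpr TextureLimits kSmallTextureLimits = {2048, 2048, 2048};     // desktop/laptop GPUs

// Hypothetical helper: choose limits based on the small_texture_limits option.
constexpr TextureLimits select_texture_limits(bool small_texture_limits) {
  return small_texture_limits ? kSmallTextureLimits : kDefaultTextureLimits;
}
```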

Also adds some improvements to the export script that were made while debugging the `YOLO_NAS` model (#15700)

Differential Revision: [D86910640](https://our.internmc.facebook.com/intern/diff/D86910640/)

[ghstack-poisoned]
…eMetadata

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)

[ghstack-poisoned]
@pytorch-bot

pytorch-bot bot commented Nov 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15796

Note: Links to docs will display an error until the docs builds have been completed.

❌ 9 New Failures, 4 Unrelated Failures

As of commit b6868b9 with merge base 7600df8:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

SS-JIA pushed a commit that referenced this pull request Nov 12, 2025
…eMetadata

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)

ghstack-source-id: 322864453
Pull Request resolved: #15796
@meta-cla meta-cla bot added the CLA Signed label Nov 12, 2025
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

ssjia added 2 commits November 13, 2025 08:51
…o use BufferMetadata/TextureMetadata"

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)

[ghstack-poisoned]
…data/TextureMetadata"

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)

[ghstack-poisoned]
SS-JIA pushed a commit that referenced this pull request Nov 13, 2025
…eMetadata

Pull Request resolved: #15796

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.
ghstack-source-id: 323042847
@exported-using-ghexport

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)
ssjia added 2 commits November 13, 2025 19:33
…o use BufferMetadata/TextureMetadata"

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)

[ghstack-poisoned]
…data/TextureMetadata"

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)

[ghstack-poisoned]
SS-JIA pushed a commit that referenced this pull request Nov 14, 2025
…eMetadata

Pull Request resolved: #15796

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.
ghstack-source-id: 323216101
@exported-using-ghexport

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)
ssjia added 2 commits November 14, 2025 07:31
…o use BufferMetadata/TextureMetadata"

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)

[ghstack-poisoned]
…data/TextureMetadata"

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)

[ghstack-poisoned]
SS-JIA pushed a commit that referenced this pull request Nov 14, 2025
…eMetadata

Pull Request resolved: #15796

Title says it all!

Motivation: simplifies the code and allows these ops to handle high dim tensors.
ghstack-source-id: 323317727
@exported-using-ghexport

Differential Revision: [D86910641](https://our.internmc.facebook.com/intern/diff/D86910641/)
SS-JIA added a commit that referenced this pull request Nov 14, 2025
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #15829
* #15796
* #15795
* #15794
* __->__ #15793

As title. The current implementation of split_with_sizes uses functions
from the `Copy.[h|cpp]` file, in particular
`add_copy_channel_offset_node`. However, the shaders dispatched by this
function have a critical bug: the output tensor is passed in twice with
different access types, i.e.

```cpp
graph.execute_nodes().emplace_back(new DispatchNode(
        graph,
        VK_KERNEL_FROM_STR(kernel_name),
        global_size,
        local_size,
        // Inputs and Outputs
        {
            {out, vkapi::kWrite},
            {out, vkapi::kRead},
            {in, vkapi::kRead},
        },
```

This creates many validation layer errors because the memory barriers
for the resource cannot be formed properly; the shader essentially
relies on undefined behaviour to work correctly.

To fix, this diff re-implements the operator from scratch with a
dedicated compute shader.

Differential Revision:
[D86910642](https://our.internmc.facebook.com/intern/diff/D86910642/)

---------

Co-authored-by: ssjia <ssjia@devvm26340.ftw0.facebook.com>
SS-JIA added a commit that referenced this pull request Nov 14, 2025
#15794)

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #15829
* #15796
* #15795
* __->__ #15794
* #15793

As title. Make sure that ops that do not support quantized tensors do
not get assigned memory layouts that are intended for quantized tensors.

Differential Revision:
[D86910639](https://our.internmc.facebook.com/intern/diff/D86910639/)

---------

Co-authored-by: ssjia <ssjia@devvm26340.ftw0.facebook.com>
SS-JIA added a commit that referenced this pull request Nov 14, 2025
)

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #15829
* #15796
* __->__ #15795
* #15794
* #15793

Title says it all!

Add two additional export options:

1. `skip_memory_planning` - skips the memory planning pass, which can be
useful for debugging.
2. `small_texture_limits` - sets the default texture limits to (2048,
2048, 2048), which is compatible with more devices (e.g. desktop/laptop
GPUs) than the default of (16384, 16384, 2048), which targets mobile
GPUs.

Also adds some improvements to the export script that were made while
debugging the `YOLO_NAS` model
(#15700)

Differential Revision:
[D86910640](https://our.internmc.facebook.com/intern/diff/D86910640/)

---------

Co-authored-by: ssjia <ssjia@devvm26340.ftw0.facebook.com>
@SS-JIA SS-JIA changed the base branch from gh/SS-JIA/372/base to main November 14, 2025 21:45
@SS-JIA SS-JIA merged commit 053193f into main Nov 14, 2025
160 of 173 checks passed
@SS-JIA SS-JIA deleted the gh/SS-JIA/372/head branch November 14, 2025 21:45
SS-JIA added a commit that referenced this pull request Nov 14, 2025
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* __->__ #15829
* #15796
* #15795
* #15794
* #15793

Title says it all!

Adds `int32` and `uint8` shader variants to a bunch of operators that
don't currently have variants for these dtypes, but should.

This should prevent folks from running into dtype crashes at runtime
when using the Vulkan delegate.

Differential Revision:
[D87082724](https://our.internmc.facebook.com/intern/diff/D87082724/)

Co-authored-by: ssjia <ssjia@devvm1479.ncg0.facebook.com>
SS-JIA added a commit that referenced this pull request Nov 14, 2025
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* __->__ #15829
* #15796
* #15795
* #15794
* #15793

Title says it all!

Adds `int32` and `uint8` shader variants to a bunch of operators that
don't currently have variants for these dtypes, but should.

This should prevent folks from running into dtype crashes at runtime
when using the Vulkan delegate.

Differential Revision:
[D87082724](https://our.internmc.facebook.com/intern/diff/D87082724/)

Co-authored-by: ssjia <ssjia@devvm1479.ncg0.facebook.com>
(cherry picked from commit a6c5921)

Labels

CLA Signed, fb-exported, meta-exported
