
Conversation

abhinaykukkadapu
Contributor

@abhinaykukkadapu abhinaykukkadapu commented Sep 22, 2025

Differential Revision: D82936479

Context:
This diff enables sequential recipes targeting multiple backends (for example: CoreML.FP32 + XNNPack.FP32, with XNNPack as a fallback). I think we had not tested lowering models to multiple backends before, so this edge case was unhandled.

While lowering the Vision Transformer (ViT) model, I encountered issues similar to those previously seen with the SDPA op (see the discussion at https://fb.workplace.com/groups/pytorch.edge.users/permalink/1796069037930048/). Although a fix exists, it did not account for scenarios where multiple partitioners have conflicting decomposition requirements and op filtering for the no-decomp namespace.

Error scenarios

The core problem: if two partitioners ask to preserve different ops (e.g., XNNPack wants to preserve aten.max_pool2d, but QNN does not support it), the current logic unions all ops to preserve across partitioners, which errors out when a backend encounters an op it cannot handle.
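To make this concrete, here is a minimal, purely illustrative sketch: the toy dicts below are stand-ins for the XNNPack and QNN partitioners (not ExecuTorch code), and the loop shows how unioning every partitioner's preserve list means QNN is asked about an op it has no node visitor for.

```python
# Toy stand-ins for two partitioners: which ops each asks to preserve
# (keep un-decomposed) and which ops it can actually lower.
partitioners = {
    "xnnpack": {
        "preserve": {"aten.max_pool2d.default"},   # XNNPack asks to keep this op whole
        "supported": {"aten.max_pool2d.default", "aten.convolution.default"},
    },
    "qnn": {
        "preserve": set(),                          # QNN has no node visitor for max_pool2d
        "supported": {"aten.convolution.default"},
    },
}

# Current behavior: preserve lists are unioned across all partitioners, so the
# graph handed to *every* partitioner still contains aten.max_pool2d.default.
ops_kept_globally = set().union(*(p["preserve"] for p in partitioners.values()))

for name, p in partitioners.items():
    for op in ops_kept_globally - p["supported"]:
        # QNN's is_node_supported() looks the op up in its node-visitor table,
        # which is where the KeyError in the traceback below comes from.
        print(f"{name} is asked about {op!r} but cannot handle it")
```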

QNN + XNNPack

[2025-09-22T12:18:33.640-07:00]     partition_list = capability_partitioner.propose_partitions()
[2025-09-22T12:18:33.640-07:00]                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-09-22T12:18:33.640-07:00]   File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/torch/fx/passes/infra/partitioner.py", line 226, in propose_partitions
[2025-09-22T12:18:33.640-07:00]     if self._is_node_supported(node) and node not in assignment:
[2025-09-22T12:18:33.640-07:00]        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-09-22T12:18:33.640-07:00]   File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/torch/fx/passes/infra/partitioner.py", line 87, in _is_node_supported
[2025-09-22T12:18:33.640-07:00]     return self.operator_support.is_node_supported(
[2025-09-22T12:18:33.640-07:00]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-09-22T12:18:33.640-07:00]   File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/executorch/backends/qualcomm/partition/qnn_partitioner.py", line 100, in is_node_supported
[2025-09-22T12:18:33.640-07:00]     op_wrapper = self.node_visitors[node.target.__name__].define_node(
[2025-09-22T12:18:33.640-07:00]                  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
[2025-09-22T12:18:33.640-07:00] KeyError: 'aten.max_pool2d.default'

CoreML + XNNPack

[2025-09-22T11:01:47.263-07:00] ValueError: Cannot view a tensor with shape torch.Size([197, 1, 12, 64]) and strides (64, 151296, 12608, 1) as a tensor with shape (197, 768)!
[2025-09-22T11:01:47.263-07:00]
[2025-09-22T11:01:47.263-07:00] While executing %view_8 : [num_users=1] = call_function[target=torch.ops.aten.view.default](args = (%permute_6, [197, 768]), kwargs = {})
[2025-09-22T11:01:47.263-07:00] Original traceback:
[2025-09-22T11:01:47.263-07:00]   File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/torchvision/models/vision_transformer.py", line 298, in forward
[2025-09-22T11:01:47.263-07:00]     x = self.encoder(x)
[2025-09-22T11:01:47.263-07:00]   File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/torchvision/models/vision_transformer.py", line 157, in forward
[2025-09-22T11:01:47.263-07:00]     return self.ln(self.layers(self.dropout(input)))
[2025-09-22T11:01:47.263-07:00]   File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/torchvision/models/vision_transformer.py", line 113, in forward
[2025-09-22T11:01:47.263-07:00]     x, _ = self.self_attention(x, x, x, need_weights=False)

Note: Lowering to a single backend (CoreML, XNNPack, or QNN) works as expected; the issue only appears with backend combinations that have conflicting expectations.

Changes:

  • Decomposition skipping and filtering are now handled per partitioner rather than globally (see the sketch below).
  • Refactored the code for readability by removing the hard-to-follow boolean and if/else checks.
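
As a rough illustration of the per-partitioner rule, here is a sketch using the same toy sets as above; it only illustrates the idea and is not the code in this diff.

```python
# Same toy sets as in the sketch above (illustrative only, not ExecuTorch code).
xnnpack = {"preserve": {"aten.max_pool2d.default"},
           "supported": {"aten.max_pool2d.default", "aten.convolution.default"}}
qnn = {"preserve": set(),
       "supported": {"aten.convolution.default"}}

def ops_kept_for(backend):
    # Per-partitioner rule: when this backend partitions the graph, only the ops
    # *it* asked to preserve are skipped from decomposition; other backends'
    # preserve lists no longer leak into its view of the graph.
    return set(backend["preserve"])

for name, backend in (("xnnpack", xnnpack), ("qnn", qnn)):
    kept = ops_kept_for(backend)
    # Each backend only preserves ops it can lower itself, so the KeyError from
    # the global-union case cannot occur here.
    assert kept <= backend["supported"], f"{name} preserved an op it cannot lower"
```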


pytorch-bot bot commented Sep 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14458

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 04f413a with merge base 99e4fbe:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Sep 22, 2025
@facebook-github-bot
Contributor

@abhinaykukkadapu has exported this pull request. If you are a Meta employee, you can view the originating diff in D82936479.


This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

abhinaykukkadapu added a commit to abhinaykukkadapu/executorch that referenced this pull request Sep 22, 2025
…g expectations are run (pytorch#14458)

Summary: Pull Request resolved: pytorch#14458

Differential Revision: D82936479

@abhinaykukkadapu abhinaykukkadapu marked this pull request as ready for review September 22, 2025 18:43

abhinaykukkadapu added a commit to abhinaykukkadapu/executorch that referenced this pull request Sep 22, 2025
…g expectations are run (pytorch#14458)


```python
if not can_skip_using_EDGE_DO_NOT_DECOMP:
    program = program.run_decompositions(_default_decomposition_table())
    _restore_transformed_ops_to_aten_ops(program)
if can_skip_using_edge_do_not_decomp:
```
Contributor


LGTM, but there is still a logical bug in ET's AOT code with the EDGE_DO_NOT_DECOMP namespace / preservation when it comes to SDPA (and maybe other ops).

For CoreML, we get around it by skipping that path, but other backends (e.g., XNNPACK or QNN) will run into it if they preserve SDPA.

Perhaps the runtime team could take a look at this issue? cc @JacobSzwejbka @larryliu0820

@abhinaykukkadapu abhinaykukkadapu merged commit b991271 into pytorch:main Sep 23, 2025
129 of 132 checks passed
@abhinaykukkadapu abhinaykukkadapu deleted the export-D82936479 branch September 23, 2025 18:25