Fix op decomposition issue when multiple partitioners with conflicting expectations are run #14458
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14458
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure as of commit 04f413a with merge base 99e4fbe (one new job failure).
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@abhinaykukkadapu has exported this pull request. If you are a Meta employee, you can view the originating diff in D82936479.
Force-pushed from 0bd992e to abf4d67.
Force-pushed from abf4d67 to e593a22.
…g expectations are run (pytorch#14458)

Summary:

Context: I'm trying to enable sequential recipes targeting multiple backends (such as `CoreML.FP32 + XNNPack.FP32`, where XNNPack is a fallback for the ops), and I don't think we have ever tested lowering a model to multiple backends; this edge case had never been hit. I hit it when I tried to lower the Vision Transformer (ViT) model. A similar problem occurred with the SDPA op (discussed [here](https://fb.workplace.com/groups/pytorch.edge.users/permalink/1796069037930048/)) and was fixed; that fix works, but it did not consider multiple partitioners with *conflicting decomposition requirements and op filtering for the no-decomp namespace*.

## Error with CoreML + XNNPack

```
[2025-09-22T11:01:47.263-07:00] ValueError: Cannot view a tensor with shape torch.Size([197, 1, 12, 64]) and strides (64, 151296, 12608, 1) as a tensor with shape (197, 768)!
[2025-09-22T11:01:47.263-07:00]
[2025-09-22T11:01:47.263-07:00] While executing %view_8 : [num_users=1] = call_function[target=torch.ops.aten.view.default](args = (%permute_6, [197, 768]), kwargs = {})
[2025-09-22T11:01:47.263-07:00] Original traceback:
[2025-09-22T11:01:47.263-07:00] File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/torchvision/models/vision_transformer.py", line 298, in forward
[2025-09-22T11:01:47.263-07:00] x = self.encoder(x)
[2025-09-22T11:01:47.263-07:00] File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/torchvision/models/vision_transformer.py", line 157, in forward
[2025-09-22T11:01:47.263-07:00] return self.ln(self.layers(self.dropout(input)))
[2025-09-22T11:01:47.263-07:00] File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/torchvision/models/vision_transformer.py", line 113, in forward
[2025-09-22T11:01:47.263-07:00] x, _ = self.self_attention(x, x, x, need_weights=False)
```

## Error with QNN + XNNPack

The core problem: with two partitioners, Backend A (asking to preserve ops x and y) and Backend B (asking to preserve op z), where Backend A does not understand z and wants it decomposed, we currently `union` all ops to preserve across partitioners, and lowering errors out. In this specific case, XNNPack asks to preserve `aten.max_pool2d`, but QNN doesn't understand it.

```
[2025-09-22T12:18:33.640-07:00] partition_list = capability_partitioner.propose_partitions()
[2025-09-22T12:18:33.640-07:00] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-09-22T12:18:33.640-07:00] File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/torch/fx/passes/infra/partitioner.py", line 226, in propose_partitions
[2025-09-22T12:18:33.640-07:00] if self._is_node_supported(node) and node not in assignment:
[2025-09-22T12:18:33.640-07:00] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-09-22T12:18:33.640-07:00] File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/torch/fx/passes/infra/partitioner.py", line 87, in _is_node_supported
[2025-09-22T12:18:33.640-07:00] return self.operator_support.is_node_supported(
[2025-09-22T12:18:33.640-07:00] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-09-22T12:18:33.640-07:00] File "/data/users/abhinayk/fbsource/buck-out/v2/gen/fbcode/afd2a63214a057a8/executorch/export/tests/__test_target_recipes__/test_target_recipes#link-tree/executorch/backends/qualcomm/partition/qnn_partitioner.py", line 100, in is_node_supported
[2025-09-22T12:18:33.640-07:00] op_wrapper = self.node_visitors[node.target.__name__].define_node(
[2025-09-22T12:18:33.640-07:00] ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
[2025-09-22T12:18:33.640-07:00] KeyError: 'aten.max_pool2d.default'
```

**Note**: Lowering to a single backend works with CoreML, XNNPack, or QNN individually; it is the combination that hits this error.

Changes:
- Run decomposition filtering and skipping per partitioner, rather than maintaining the same rule in global scope.
- Additionally, refactor the code for readability by removing multiple boolean checks.

Differential Revision: D82936479
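To make the failure mode concrete, here is a minimal sketch of the old global-union behavior, assuming the `ops_to_not_decompose(ep) -> (ops, filter_fn)` shape that ExecuTorch partitioners expose; function and variable names are illustrative, not the actual implementation:

```
from typing import Set

import torch


def global_ops_to_preserve(partitioners, exported_program) -> Set[torch._ops.OpOverload]:
    # Old behavior (simplified): union every partitioner's preserve list
    # into one global set that governs decomposition for the whole program.
    preserved: Set[torch._ops.OpOverload] = set()
    for partitioner in partitioners:
        ops, _filter_fn = partitioner.ops_to_not_decompose(exported_program)
        preserved.update(ops)  # one backend's ops leak into every backend's view
    return preserved


# With XNNPack + QNN, aten.max_pool2d ends up preserved globally, and QNN's
# node visitors later fail with KeyError: 'aten.max_pool2d.default' (log above).
```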
Force-pushed from e593a22 to 04f413a.
```
if not can_skip_using_EDGE_DO_NOT_DECOMP:
    program = program.run_decompositions(_default_decomposition_table())
    _restore_transformed_ops_to_aten_ops(program)
if can_skip_using_edge_do_not_decomp:
```
LGTM, but there is still a logical bug in ET's AOT code with the EDGE_DO_NOT_DECOMP namespace / preservation when it comes to SDPA (and maybe other ops).
For CoreML, we get around it by skipping that path, but other backends (e.g., XNNPACK or QNN) will run into it if they preserve SDPA.
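To illustrate, a backend opts into preserving SDPA roughly like this; a sketch assuming the partitioner's `ops_to_not_decompose` contract, with a hypothetical class name (a real partitioner also implements the partition step):

```
from typing import Callable, List, Optional, Tuple

import torch
import torch.fx
from torch.export import ExportedProgram


class SdpaPreservingPartitioner:  # hypothetical backend partitioner
    def ops_to_not_decompose(
        self, ep: ExportedProgram
    ) -> Tuple[List[torch._ops.OpOverload], Optional[Callable[[torch.fx.Node], bool]]]:
        # Asking to keep SDPA whole routes it through the EDGE_DO_NOT_DECOMP
        # bookkeeping, which is where the remaining bug can bite.
        return [torch.ops.aten.scaled_dot_product_attention.default], None
```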
Perhaps the runtime team could take a look at this issue? cc @JacobSzwejbka @larryliu0820
Differential Revision: D82936479
Context:
This diff enables sequential recipes targeting multiple backends (for example `CoreML.FP32 + XNNPack.FP32`, with XNNPack as a fallback for ops). I believe we had not tested lowering models to multiple backends, so this edge case was unhandled. While lowering the Vision Transformer (ViT) model, I encountered issues similar to those previously seen with the SDPA op (discussion). Although a fix exists, it did not account for scenarios with multiple partitioners having conflicting decomposition requirements and op filtering for the no-decomp namespace.
Error scenarios
The core problem: if two partitioners request to preserve different ops (for example, XNNPack wants to preserve `aten.max_pool2d`, but QNN does not support it), the current logic unions all ops to preserve, causing errors when a backend cannot handle an op preserved on another backend's behalf.
QNN + XNNPack
CoreML + XNNPack
Changes:
- Run decomposition filtering and skipping per partitioner, rather than maintaining the same rule in global scope (see the sketch below).
- Refactor the code for readability by removing multiple boolean checks.
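A minimal sketch of the per-partitioner approach: `apply_partitioner` is a hypothetical helper, `_default_decomposition_table` stands in for the helper that appears in this diff, and the exact code in the PR differs.

```
from torch.export import ExportedProgram


def lower_sequentially(ep: ExportedProgram, partitioners) -> ExportedProgram:
    for partitioner in partitioners:
        # Ask only THIS partitioner which ops it needs preserved.
        ops, _filter_fn = partitioner.ops_to_not_decompose(ep)

        # Decompose everything except this partitioner's preserved ops;
        # filter_fn handling is omitted for brevity.
        table = _default_decomposition_table()
        for op in ops:
            table.pop(op, None)
        ep = ep.run_decompositions(table)

        # Partition and lower with this backend before moving on, so its
        # preserved ops never leak into another backend's decomposition rules.
        ep = apply_partitioner(ep, partitioner)  # hypothetical helper
    return ep
```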