Const tensors with non-default dim order partition but fail to execute on XNNPACK #14735

@GregoryComer

Description

🐛 Describe the bug

When a model contains a constant tensor with a non-default dim order (for example, one produced by a permute call), the XNNPACK partitioner will consume it, but the delegate then fails at runtime with an invalid-parameter error.

Repro:

import torch
from executorch.exir import to_edge_transform_and_lower
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.const_tensor = torch.randn(20, 10, 20).permute(2, 0, 1)

    def forward(self, x):
        return x + self.const_tensor

model = Model()
ep = torch.export.export(model, (torch.randn(20, 20, 10),))

lowered = to_edge_transform_and_lower(
    ep,
    partitioner=[XnnpackPartitioner()],
).to_executorch()

from executorch.extension.pybindings.portable_lib import _load_for_executorch_from_buffer
et_model = _load_for_executorch_from_buffer(lowered.buffer)

inputs = torch.randn(20, 20, 10)
et_model(inputs)

Output:

[XNNExecutor.cpp:137] Internal Error: Propagating input shapes failed with code: xnn_status_invalid_parameter
[method.cpp:1397] CALL_DELEGATE execute failed at instruction 0: 0x1
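If the root cause is specifically the non-default dim order of the lifted constant, then forcing the constant back to contiguous (default dim order) before export should sidestep the failure. A minimal sketch of that assumption (the class name is just illustrative, and this is not a confirmed fix):

class ModelContiguousConst(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # .contiguous() materializes the permuted data into a fresh buffer,
        # so the exported constant carries the default dim order instead of
        # a permuted one.
        self.const_tensor = torch.randn(20, 10, 20).permute(2, 0, 1).contiguous()

    def forward(self, x):
        return x + self.const_tensor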

Interestingly, it also fails on portable due to a dim order mismatch, though I'm not sure if this is considered a bug or simply a limitation of our implementation.
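For completeness, one way to exercise the portable path is to lower the same model without the XNNPACK partitioner so the add runs on the portable kernels. A sketch (variable names are illustrative; it re-exports rather than reusing the ExportedProgram consumed above):

# Lower without a partitioner so nothing is delegated to XNNPACK.
ep_portable = torch.export.export(Model(), (torch.randn(20, 20, 10),))
lowered_portable = to_edge_transform_and_lower(ep_portable).to_executorch()
et_portable = _load_for_executorch_from_buffer(lowered_portable.buffer)
et_portable(torch.randn(20, 20, 10))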

Versions

426b701

cc @digantdesai @mcr229 @cbilgin


Labels

module: xnnpack (Issues related to xnnpack delegation and the code under backends/xnnpack/)
