Description
When running e2e tests for the tosa and onnx_tosa configs, many tests unexpectedly failed with the following error:
error: failed to legalize operation 'bufferization.dealloc' that was explicitly marked illegal
For example, when running:
./projects/pt1/tools/e2e_test.sh -c onnx_tosa --filter ElementwiseAtenLogicalAndOpModule_basic -v
TORCH_VERSION_FOR_COMPARISON = 2.8.0.dev20250325
Running tests sequentially with progress status
*** RUNNING TEST: ElementwiseAtenLogicalAndOpModule_basic ***
Compiling ElementwiseAtenLogicalAndOpModule_basic...
====================
ONNX RAW IR
module {
  func.func @main_graph(%arg0: !torch.vtensor<[?,?],i1>, %arg1: !torch.vtensor<[?,?],i1>) -> !torch.vtensor<[?,?],i1> attributes {torch.onnx_meta.ir_version = 9 : si64, torch.onnx_meta.opset_version = 20 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "2.8.0"} {
    %none = torch.constant.none
    %0 = torch.operator "onnx.Cast"(%arg0) {torch.onnx.to = 9 : si64} : (!torch.vtensor<[?,?],i1>) -> !torch.vtensor<[?,?],i1>
    %1 = torch.operator "onnx.Cast"(%arg1) {torch.onnx.to = 9 : si64} : (!torch.vtensor<[?,?],i1>) -> !torch.vtensor<[?,?],i1>
    %2 = torch.operator "onnx.And"(%0, %1) : (!torch.vtensor<[?,?],i1>, !torch.vtensor<[?,?],i1>) -> !torch.vtensor<[?,?],i1>
    return %2 : !torch.vtensor<[?,?],i1>
  }
}
====================
Torch IR
module {
  func.func @main_graph(%arg0: !torch.vtensor<[?,?],i1>, %arg1: !torch.vtensor<[?,?],i1>) -> !torch.vtensor<[?,?],i1> attributes {torch.onnx_meta.ir_version = 9 : si64, torch.onnx_meta.opset_version = 20 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "2.8.0"} {
    %0 = torch.aten.logical_and %arg0, %arg1 : !torch.vtensor<[?,?],i1>, !torch.vtensor<[?,?],i1> -> !torch.vtensor<[?,?],i1>
    return %0 : !torch.vtensor<[?,?],i1>
  }
}
====================
Torch Backend IR
module {
  func.func @main_graph(%arg0: !torch.vtensor<[?,?],i1>, %arg1: !torch.vtensor<[?,?],i1>) -> !torch.vtensor<[?,?],i1> attributes {torch.onnx_meta.ir_version = 9 : si64, torch.onnx_meta.opset_version = 20 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "2.8.0"} {
    %0 = torch.aten.logical_and %arg0, %arg1 : !torch.vtensor<[?,?],i1>, !torch.vtensor<[?,?],i1> -> !torch.vtensor<[?,?],i1>
    return %0 : !torch.vtensor<[?,?],i1>
  }
}
====================
TOSA Backend IR
module {
  func.func @main_graph(%arg0: tensor<?x?xi1>, %arg1: tensor<?x?xi1>) -> tensor<?x?xi1> {
    %0 = tosa.logical_and %arg0, %arg1 : (tensor<?x?xi1>, tensor<?x?xi1>) -> tensor<?x?xi1>
    return %0 : tensor<?x?xi1>
  }
}
error: library function required for generic lowering, but cannot be automatically inserted when operating on functions
error: failed to legalize operation 'bufferization.dealloc' that was explicitly marked illegal
FAIL - "ElementwiseAtenLogicalAndOpModule_basic"
Unexpected outcome summary: (onnx_tosa)
****** Failed tests - 1 tests
FAIL - "ElementwiseAtenLogicalAndOpModule_basic"
Compilation error: Traceback (most recent call last):
  File "/staging/torch-mlir-github-public/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/framework.py", line 332, in compile_and_run_test
    compiled = config.compile(test.program_factory(), verbose=verbose)
  File "/staging/torch-mlir-github-public/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/configs/onnx_backend.py", line 147, in compile
    compiled_module = self.backend.compile(backend_module)
  File "/staging/torch-mlir-github-public/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/tosa_backends/linalg_on_tensors.py", line 70, in compile
    return self.refbackend.compile(imported_module)
  File "/staging/torch-mlir-github-public/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/linalg_on_tensors_backends/refbackend.py", line 229, in compile
    run_pipeline_with_repro_report(
  File "/staging/torch-mlir-github-public/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/compiler_utils.py", line 127, in run_pipeline_with_repro_report
    raise TorchMlirCompilerError(trimmed_message) from None
torch_mlir.compiler_utils.TorchMlirCompilerError: Lowering Linalg-on-Tensors IR to LLVM with RefBackend failed with the following diagnostics:
python exception: Failure while executing pass pipeline
For Torch-MLIR developers, the error can be reproduced with:
$ torch-mlir-opt -pass-pipeline='builtin.module(func.func(linalg-generalize-named-ops),func.func(linalg-fuse-elementwise-ops),convert-shape-to-std,sparse-assembler{direct-out},sparsification-and-bufferization,sparse-storage-specifier-to-llvm,func.func(expand-realloc),func.func(refback-generalize-tensor-pad),func.func(refback-generalize-tensor-concat),func.func(tm-tensor-bufferize),one-shot-bufferize{copy-before-write bufferize-function-boundaries function-boundary-type-conversion=identity-layout-map},refback-mlprogram-bufferize,func.func(buffer-deallocation-pipeline),inline,refback-munge-calling-conventions,func.func(tm-tensor-to-loops),func.func(refback-munge-memref-copy),func.func(convert-linalg-to-loops),func.func(lower-affine),convert-scf-to-cf,generate-runtime-verification,func.func(refback-expand-ops-for-llvm),func.func(arith-expand),func.func(convert-math-to-llvm),convert-math-to-libm,expand-strided-metadata,finalize-memref-to-llvm,lower-affine,convert-bufferization-to-memref,finalize-memref-to-llvm,func.func(convert-arith-to-llvm),convert-vector-to-llvm,convert-func-to-llvm,convert-cf-to-llvm,convert-complex-to-llvm,reconcile-unrealized-casts)' /tmp/UnnammedModule.mlir
Add '-mlir-print-ir-after-all -mlir-disable-threading' to get the IR dump for debugging purpose.
Summary:
Failed: 1
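Since the failure occurs somewhere inside the long RefBackend pipeline printed above, one way to narrow it down (besides `-mlir-print-ir-after-all`) is to split the pipeline string into its top-level passes and rerun `torch-mlir-opt` with successively longer prefixes. A minimal sketch of such a splitter; this helper is illustrative and not part of torch-mlir:

```python
def split_passes(pipeline: str) -> list[str]:
    """Split a textual MLIR pass-pipeline string into its top-level
    passes, respecting nesting inside (...) and {...} groups."""
    # Unwrap a top-level anchor such as builtin.module(...)
    prefix = "builtin.module("
    if pipeline.startswith(prefix) and pipeline.endswith(")"):
        pipeline = pipeline[len(prefix):-1]
    passes, depth, start = [], 0, 0
    for i, ch in enumerate(pipeline):
        if ch in "({":
            depth += 1
        elif ch in ")}":
            depth -= 1
        elif ch == "," and depth == 0:
            passes.append(pipeline[start:i])
            start = i + 1
    passes.append(pipeline[start:])
    return passes
```

Feeding the pipeline from the repro command through this splitter yields one entry per pass (e.g. `func.func(buffer-deallocation-pipeline)`, `convert-bufferization-to-memref`, ...), which makes it easy to bisect where the stray `bufferization.dealloc` survives.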