
mhlo-fusion bug #12

Closed · lipracer opened this issue Aug 31, 2021 · 4 comments

lipracer (Contributor) commented Aug 31, 2021

// CHECK-LABEL: func @elementwise_fusion
func @elementwise_fusion(%arg0: tensor<4x16xi32>, %arg1: tensor<4x16xi32>) -> tensor<2x4xi32> {
  %0 = "mhlo.add"(%arg0, %arg1) : (tensor<4x16xi32>, tensor<4x16xi32>) -> tensor<4x16xi32>
  %1 = "mhlo.subtract"(%0, %arg0) : (tensor<4x16xi32>, tensor<4x16xi32>) -> tensor<4x16xi32>
  %2 = "mhlo.slice"(%1) {limit_indices = dense<[2, 8]> : tensor<2xi64>, start_indices = dense<0> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>} : (tensor<4x16xi32>) -> tensor<2x8xi32>
  %3 = "mhlo.multiply"(%0, %1) : (tensor<4x16xi32>, tensor<4x16xi32>) -> tensor<4x16xi32>
  %4 = "mhlo.slice"(%3) {limit_indices = dense<[2, 8]> : tensor<2xi64>, start_indices = dense<0> : tensor<2xi64>, strides = dense<[1, 2]> : tensor<2xi64>} : (tensor<4x16xi32>) -> tensor<2x4xi32>
  return %4 : tensor<2x4xi32>
}

To reproduce: append the snippet above to test/mhlo-fusion.mlir and run ninja check-mlir-hlo; the mhlo-fusion pass produces the wrong result. Running the pass alone on this IR, as sketched below, produces the following output:
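A minimal invocation sketch, assuming the repo's mlir-hlo-opt tool and the -mhlo-fusion flag that runs MhloFusionPass (repro.mlir is a hypothetical file holding the snippet above):

mlir-hlo-opt -mhlo-fusion repro.mlir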

loc("-":5:10): error: operand #0 does not dominate this use
// -----// IR Dump After MhloFusionPass Failed ('builtin.func' operation: @main) //----- //
"builtin.module"() ( {
  "builtin.func"() ( {
  ^bb0(%arg0: tensor<4x16xi32>, %arg1: tensor<4x16xi32>):  // no predecessors
    %0 = "mhlo.slice"(%1#0) {limit_indices = dense<[2, 8]> : tensor<2xi64>, start_indices = dense<0> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>} : (tensor<4x16xi32>) -> tensor<2x8xi32>
    %1:2 = "mhlo.fusion"(%arg0, %arg1) ( {
      %3 = "mhlo.add"(%arg0, %arg1) : (tensor<4x16xi32>, tensor<4x16xi32>) -> tensor<4x16xi32>
      %4 = "mhlo.subtract"(%3, %arg0) : (tensor<4x16xi32>, tensor<4x16xi32>) -> tensor<4x16xi32>
      %5 = "mhlo.multiply"(%3, %4) : (tensor<4x16xi32>, tensor<4x16xi32>) -> tensor<4x16xi32>
      "mhlo.return"(%4, %5) : (tensor<4x16xi32>, tensor<4x16xi32>) -> ()
    }) : (tensor<4x16xi32>, tensor<4x16xi32>) -> (tensor<4x16xi32>, tensor<4x16xi32>)
    %2 = "mhlo.slice"(%1#1) {limit_indices = dense<[2, 8]> : tensor<2xi64>, start_indices = dense<0> : tensor<2xi64>, strides = dense<[1, 2]> : tensor<2xi64>} : (tensor<4x16xi32>) -> tensor<2x4xi32>
    "std.return"(%2) : (tensor<2x4xi32>) -> ()
  }) {sym_name = "main", type = (tensor<4x16xi32>, tensor<4x16xi32>) -> tensor<2x4xi32>} : () -> ()
}) : () -> ()

%0 = "mhlo.slice"(%1#0) should follow by fusion op

The OpBuilder is created by the statement OpBuilder b(pattern.back());, so the fusion op is inserted right after the last fused op. Any other consumers that sit between the fused ops (here, the first mhlo.slice) are left above the fusion op; they need to be moved after it, in post order, as sketched below.
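A minimal C++ sketch of that idea, assuming pattern holds the fused ops in program order and the fusion op has already been inserted after pattern.back(); the helper name moveTrappedConsumersAfter is hypothetical, not the actual patch:

#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "mlir/IR/Operation.h"

// Sketch only: once the fusion op is inserted right after the last fused
// op, a non-fused op that sat between the fused ops may now use a fusion
// result defined below it. Hoisting those ops to just after the fusion op
// restores dominance.
static void moveTrappedConsumersAfter(llvm::ArrayRef<mlir::Operation *> pattern,
                                      mlir::Operation *fusionOp) {
  llvm::SmallPtrSet<mlir::Operation *, 8> fused(pattern.begin(), pattern.end());

  // Collect the trapped ops first so that moving them does not invalidate
  // the forward walk over the block.
  llvm::SmallVector<mlir::Operation *> trapped;
  for (mlir::Operation *op = pattern.front(); op && op != fusionOp;
       op = op->getNextNode())
    if (!fused.contains(op))
      trapped.push_back(op);

  // Re-insert them after the fusion op, preserving their relative order.
  mlir::Operation *insertionPoint = fusionOp;
  for (mlir::Operation *op : trapped) {
    op->moveAfter(insertionPoint);
    insertionPoint = op;
  }
}

Separating the collect and move steps keeps the walk simple, and moving each op after the previously moved one preserves the original relative order of the hoisted consumers.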

joker-eph (Contributor) commented:

Thanks for the bug report! Would you be interested in submitting a patch to fix this?

lipracer (Contributor, Author) commented Sep 1, 2021

Ok.

lipracer (Contributor, Author) commented Sep 1, 2021

#14

This is my first submission to the open source community; please forgive any shortcomings.

lipracer (Contributor, Author) commented Sep 4, 2021

Fixed in 14ddf54da51879e031507d34d37bae0923dc58a9.

lipracer closed this as completed Sep 4, 2021.