
Remove redundant linalg.matmul_signed #98615

Merged: 1 commit merged into llvm:main on Jul 13, 2024
Conversation

@rengolin (Member) commented Jul 12, 2024

linalg.matmul already has a cast attribute, which defaults to signed but allows unsigned, so the operation linalg.matmul_unsigned is redundant. The generalization test includes an example of how to lower to an unsigned matmul in linalg.

This is the first PR in a list of many that will simplify the linalg operations by using similar attributes.

Ref: https://discourse.llvm.org/t/rfc-transpose-attribute-for-linalg-matmul-operations/80092
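For reference, the replacement pattern from the updated generalization test below: the dedicated op becomes plain `linalg.matmul` with the cast attribute set to unsigned.

```mlir
// Before: dedicated op
%0 = linalg.matmul_unsigned ins(%A, %B: tensor<16x8xi16>, tensor<8x32xi64>)
                            outs(%C: tensor<16x32xi32>) -> tensor<16x32xi32>

// After: plain matmul with the cast attribute
%0 = linalg.matmul { cast = #linalg.type_fn<cast_unsigned> }
                     ins(%A, %B: tensor<16x8xi16>, tensor<8x32xi64>)
                     outs(%C: tensor<16x32xi32>) -> tensor<16x32xi32>
```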

@llvmbot (Collaborator) commented Jul 12, 2024

@llvm/pr-subscribers-mlir

@llvm/pr-subscribers-mlir-linalg

Author: Renato Golin (rengolin)

Changes

linalg.matmul already has a cast attribute, which defaults to signed but allows unsigned, so the operation linalg.matmul_unsigned is redundant. The generalization test includes an example of how to lower to an unsigned matmul in linalg.

This is the first PR in a list of many that will simplify the linalg operations by using similar attributes.


Full diff: https://github.com/llvm/llvm-project/pull/98615.diff

3 Files Affected:

  • (modified) mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOps.yaml (-68)
  • (modified) mlir/python/mlir/dialects/linalg/opdsl/ops/core_named_ops.py (-18)
  • (modified) mlir/test/Dialect/Linalg/generalize-named-polymorphic-ops.mlir (+9-6)
diff --git a/mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOps.yaml b/mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOps.yaml
index a344add6a4e54..c31b7c4f6c108 100644
--- a/mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOps.yaml
+++ b/mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOps.yaml
@@ -1137,74 +1137,6 @@ structured_op: !LinalgStructuredOpConfig
                 - !ScalarExpression
                   scalar_arg: B
 --- !LinalgOpConfig
-metadata: !LinalgOpMetadata
-  name: matmul_unsigned
-  cpp_class_name: MatmulUnsignedOp
-  doc: |-
-    Performs an unsigned matrix multiplication of two 2D inputs.
-
-    Numeric casting is performed on the operands to the inner multiply, promoting
-    them to the same data type as the accumulator/output.
-  implements:
-  - LinalgContractionOpInterface
-structured_op: !LinalgStructuredOpConfig
-  args:
-  - !LinalgOperandDefConfig
-    name: A
-    kind: input_tensor
-    type_var: T1
-    shape_map: affine_map<()[s0, s1, s2] -> (s0, s1)>
-  - !LinalgOperandDefConfig
-    name: B
-    kind: input_tensor
-    type_var: T2
-    shape_map: affine_map<()[s0, s1, s2] -> (s1, s2)>
-  - !LinalgOperandDefConfig
-    name: C
-    kind: output_tensor
-    type_var: U
-    shape_map: affine_map<()[s0, s1, s2] -> (s0, s2)>
-  indexing_maps: !LinalgIndexingMapsConfig
-    static_indexing_maps:
-    - affine_map<(d0, d1, d2)[s0, s1, s2] -> (d0, d2)>
-    - affine_map<(d0, d1, d2)[s0, s1, s2] -> (d2, d1)>
-    - affine_map<(d0, d1, d2)[s0, s1, s2] -> (d0, d1)>
-  iterator_types:
-  - parallel
-  - parallel
-  - reduction
-  assignments:
-  - !ScalarAssign
-    arg: C
-    value: !ScalarExpression
-      scalar_fn:
-        kind: binary
-        fn_name: add
-        operands:
-        - !ScalarExpression
-          scalar_arg: C
-        - !ScalarExpression
-          scalar_fn:
-            kind: binary
-            fn_name: mul
-            operands:
-            - !ScalarExpression
-              scalar_fn:
-                kind: type
-                fn_name: cast_unsigned
-                type_var: U
-                operands:
-                - !ScalarExpression
-                  scalar_arg: A
-            - !ScalarExpression
-              scalar_fn:
-                kind: type
-                fn_name: cast_unsigned
-                type_var: U
-                operands:
-                - !ScalarExpression
-                  scalar_arg: B
---- !LinalgOpConfig
 metadata: !LinalgOpMetadata
   name: quantized_matmul
   cpp_class_name: QuantizedMatmulOp
diff --git a/mlir/python/mlir/dialects/linalg/opdsl/ops/core_named_ops.py b/mlir/python/mlir/dialects/linalg/opdsl/ops/core_named_ops.py
index cbb2d1cec103c..3ceee8e370445 100644
--- a/mlir/python/mlir/dialects/linalg/opdsl/ops/core_named_ops.py
+++ b/mlir/python/mlir/dialects/linalg/opdsl/ops/core_named_ops.py
@@ -388,24 +388,6 @@ def matmul(
     C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
 
 
-@linalg_structured_op
-def matmul_unsigned(
-    A=TensorDef(T1, S.M, S.K),
-    B=TensorDef(T2, S.K, S.N),
-    C=TensorDef(U, S.M, S.N, output=True),
-):
-    """Performs an unsigned matrix multiplication of two 2D inputs.
-
-    Numeric casting is performed on the operands to the inner multiply, promoting
-    them to the same data type as the accumulator/output.
-    """
-    domain(D.m, D.n, D.k)
-    implements(ContractionOpInterface)
-    C[D.m, D.n] += TypeFn.cast_unsigned(U, A[D.m, D.k]) * TypeFn.cast_unsigned(
-        U, B[D.k, D.n]
-    )
-
-
 @linalg_structured_op
 def quantized_matmul(
     A=TensorDef(T1, S.M, S.K),
diff --git a/mlir/test/Dialect/Linalg/generalize-named-polymorphic-ops.mlir b/mlir/test/Dialect/Linalg/generalize-named-polymorphic-ops.mlir
index 1940a4a2912cb..c170c5be4abff 100644
--- a/mlir/test/Dialect/Linalg/generalize-named-polymorphic-ops.mlir
+++ b/mlir/test/Dialect/Linalg/generalize-named-polymorphic-ops.mlir
@@ -79,8 +79,9 @@ func.func @generalize_matmul_tensor_f16f64i32(%A : tensor<16x8xf16>, %B: tensor<
 // -----
 
 func.func @generalize_matmul_unsigned_tensor_i16i64i32(%A : tensor<16x8xi16>, %B: tensor<8x32xi64>, %C: tensor<16x32xi32>) -> tensor<16x32xi32> {
-  %0 = linalg.matmul_unsigned ins(%A, %B: tensor<16x8xi16>, tensor<8x32xi64>)
-                              outs(%C: tensor<16x32xi32>) -> tensor<16x32xi32>
+  %0 = linalg.matmul { cast = #linalg.type_fn<cast_unsigned> }
+                       ins(%A, %B: tensor<16x8xi16>, tensor<8x32xi64>)
+                       outs(%C: tensor<16x32xi32>) -> tensor<16x32xi32>
   return %0: tensor<16x32xi32>
 }
 
@@ -92,8 +93,9 @@ func.func @generalize_matmul_unsigned_tensor_i16i64i32(%A : tensor<16x8xi16>, %B
 // -----
 
 func.func @generalize_matmul_unsigned_tensor_i16i64f32(%A : tensor<16x8xi16>, %B: tensor<8x32xi64>, %C: tensor<16x32xf32>) -> tensor<16x32xf32> {
-  %0 = linalg.matmul_unsigned ins(%A, %B: tensor<16x8xi16>, tensor<8x32xi64>)
-                              outs(%C: tensor<16x32xf32>) -> tensor<16x32xf32>
+  %0 = linalg.matmul { cast = #linalg.type_fn<cast_unsigned> }
+                       ins(%A, %B: tensor<16x8xi16>, tensor<8x32xi64>)
+                       outs(%C: tensor<16x32xf32>) -> tensor<16x32xf32>
   return %0: tensor<16x32xf32>
 }
 
@@ -105,8 +107,9 @@ func.func @generalize_matmul_unsigned_tensor_i16i64f32(%A : tensor<16x8xi16>, %B
 // -----
 
 func.func @generalize_matmul_unsigned_tensor_f16f64i32(%A : tensor<16x8xf16>, %B: tensor<8x32xf64>, %C: tensor<16x32xi32>) -> tensor<16x32xi32> {
-  %0 = linalg.matmul_unsigned ins(%A, %B: tensor<16x8xf16>, tensor<8x32xf64>)
-                              outs(%C: tensor<16x32xi32>) -> tensor<16x32xi32>
+  %0 = linalg.matmul { cast = #linalg.type_fn<cast_unsigned> }
+                       ins(%A, %B: tensor<16x8xf16>, tensor<8x32xf64>)
+                       outs(%C: tensor<16x32xi32>) -> tensor<16x32xi32>
   return %0: tensor<16x32xi32>
 }
 

@qedawkins (Contributor) left a comment


Super happy to see changes like this! Perhaps give this another day for others to take a look too. It would also be nice to see how to include mixed signedness as well at some point in the future, maybe with a list of casting functions that defaults to cast_signed.

@rengolin (Member, Author) replied

> Super happy to see changes like this! Perhaps give this another day for others to take a look too. It would also be nice to see how to include mixed signedness as well at some point in the future, maybe with a list of casting functions that defaults to cast_signed.

Yes, in the long run we want to be able to mix and match. This also ties in with the transpose side: we may want to transpose A, B, or both, and not necessarily just the two inner dimensions.

Another topic I was discussing with @ftynse is that we may want to extend opdsl to allow implementation reuse. So we can just name new ops that call the existing implementation with the right attribute arguments. This may not make a lot of sense for matmuls, but it may reduce the convolution implementation footprint considerably.
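For context, the cast attribute discussed above lives in the opdsl definition of matmul itself. A sketch of the surviving definition, mirroring the `cast(U, ...)` pattern visible in the diff to core_named_ops.py and assuming the upstream `TypeFnAttrDef` helper:

```python
# Sketch only: requires the MLIR Python bindings (mlir.dialects.linalg.opdsl).
from mlir.dialects.linalg.opdsl.lang import *

@linalg_structured_op
def matmul(
    A=TensorDef(T1, S.M, S.K),
    B=TensorDef(T2, S.K, S.N),
    C=TensorDef(U, S.M, S.N, output=True),
    cast=TypeFnAttrDef(default=TypeFn.cast_signed),
):
    domain(D.m, D.n, D.k)
    implements(ContractionOpInterface)
    # `cast` defaults to cast_signed; passing cast_unsigned at build time
    # recovers the behavior of the removed matmul_unsigned op.
    C[D.m, D.n] += cast(U, A[D.m, D.k]) * cast(U, B[D.k, D.n])
```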

@dcaballe (Contributor) left a comment


Awesome!!

@rengolin rengolin merged commit 80e61e3 into llvm:main Jul 13, 2024
11 checks passed
@rengolin rengolin deleted the linalg branch July 13, 2024 13:46
aaryanshukla pushed a commit to aaryanshukla/llvm-project that referenced this pull request Jul 14, 2024
`linalg.matmul` already has a cast attribute, which defaults to signed
but allows unsigned, so the operation `linalg.matmul_unsigned` is
redundant. The generalization test includes an example of how to lower
to an unsigned matmul in linalg.

This is the first PR in a list of many that will simplify the linalg
operations by using similar attributes.

Ref:
https://discourse.llvm.org/t/rfc-transpose-attribute-for-linalg-matmul-operations/80092
aaryanshukla pushed a commit to aaryanshukla/llvm-project that referenced this pull request Jul 16, 2024
4 participants