[mlir][vector][nfc] Update tests for folding mem operations #164255
Conversation
Tests in "fold_maskedload_to_load_all_true_dynamic" exercise folders
for:
* vector.maskedload, vector.maskedstore, vector.scatter,
vector.gather, vector.compressstore, vector.expandload.
This patch renames and documents these tests in accordance with:
* https://mlir.llvm.org/getting_started/TestingGuide/
Note: the updated tests are referenced in the Test Formatting Best
Practices section of the MLIR testing guide:
* https://mlir.llvm.org/getting_started/TestingGuide/#test-formatting-best-practices
Keeping them aligned with the guidelines ensures consistency and clarity
across MLIR’s test suite.
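For context, the folders exercised here rewrite masked memory ops whose masks are statically all-true or all-false into simpler ops. Below is a minimal before/after sketch of the maskedload case, assembled from the tests in this patch; the function name `@fold_example` is illustrative and not part of the patch:

```mlir
// Before: masked load with a statically all-true mask.
func.func @fold_example(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
  %c0 = arith.constant 0 : index
  %mask = vector.constant_mask [16] : vector<16xi1>
  %ld = vector.maskedload %base[%c0], %mask, %pass_thru
      : memref<?xf32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
  return %ld : vector<16xf32>
}

// After `mlir-opt -test-vector-to-vector-lowering` (per the CHECK lines in the
// diff below), the mask is folded away and a plain load remains:
func.func @fold_example(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
  %c0 = arith.constant 0 : index
  %ld = vector.load %base[%c0] : memref<?xf32>, vector<16xf32>
  return %ld : vector<16xf32>
}

// An all-false mask (vector.constant_mask [0]) instead folds the whole load
// away, returning the pass-through value directly.
```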
@llvm/pr-subscribers-mlir-vector @llvm/pr-subscribers-mlir

Author: Andrzej Warzyński (banach-space)

Changes

Tests in "fold_maskedload_to_load_all_true_dynamic" exercise folders for:
* vector.maskedload, vector.maskedstore, vector.scatter, vector.gather, vector.compressstore, vector.expandload.

This patch renames and documents these tests in accordance with:
* https://mlir.llvm.org/getting_started/TestingGuide/

Note: the updated tests are referenced in the Test Formatting Best Practices section of the MLIR testing guide:
* https://mlir.llvm.org/getting_started/TestingGuide/#test-formatting-best-practices

Keeping them aligned with the guidelines ensures consistency and clarity across MLIR’s test suite.

Full diff: https://github.com/llvm/llvm-project/pull/164255.diff

1 Files Affected:
diff --git a/mlir/test/Dialect/Vector/vector-mem-transforms.mlir b/mlir/test/Dialect/Vector/vector-mem-transforms.mlir
index e6593320f1bde..2004a47851e2e 100644
--- a/mlir/test/Dialect/Vector/vector-mem-transforms.mlir
+++ b/mlir/test/Dialect/Vector/vector-mem-transforms.mlir
@@ -1,12 +1,16 @@
// RUN: mlir-opt %s -test-vector-to-vector-lowering | FileCheck %s
-// CHECK-LABEL: func @maskedload0(
-// CHECK-SAME: %[[A0:.*]]: memref<?xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-DAG: %[[C:.*]] = arith.constant 0 : index
-// CHECK-NEXT: %[[T:.*]] = vector.load %[[A0]][%[[C]]] : memref<?xf32>, vector<16xf32>
-// CHECK-NEXT: return %[[T]] : vector<16xf32>
-func.func @maskedload0(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+//-----------------------------------------------------------------------------
+// [Pattern: MaskedLoadFolder]
+//-----------------------------------------------------------------------------
+
+// CHECK-LABEL: func @fold_maskedload_all_true_dynamic(
+// CHECK-SAME: %[[BASE:.*]]: memref<?xf32>,
+// CHECK-SAME: %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-DAG: %[[IDX:.*]] = arith.constant 0 : index
+// CHECK-NEXT: %[[LOAD:.*]] = vector.load %[[BASE]][%[[IDX]]] : memref<?xf32>, vector<16xf32>
+// CHECK-NEXT: return %[[LOAD]] : vector<16xf32>
+func.func @fold_maskedload_all_true_dynamic(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [16] : vector<16xi1>
%ld = vector.maskedload %base[%c0], %mask, %pass_thru
@@ -14,13 +18,13 @@ func.func @maskedload0(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vect
return %ld : vector<16xf32>
}
-// CHECK-LABEL: func @maskedload1(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-DAG: %[[C:.*]] = arith.constant 0 : index
-// CHECK-NEXT: %[[T:.*]] = vector.load %[[A0]][%[[C]]] : memref<16xf32>, vector<16xf32>
-// CHECK-NEXT: return %[[T]] : vector<16xf32>
-func.func @maskedload1(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+// CHECK-LABEL: func @fold_maskedload_all_true_static(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-DAG: %[[IDX:.*]] = arith.constant 0 : index
+// CHECK-NEXT: %[[LOAD:.*]] = vector.load %[[BASE]][%[[IDX]]] : memref<16xf32>, vector<16xf32>
+// CHECK-NEXT: return %[[LOAD]] : vector<16xf32>
+func.func @fold_maskedload_all_true_static(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [16] : vector<16xi1>
%ld = vector.maskedload %base[%c0], %mask, %pass_thru
@@ -28,11 +32,11 @@ func.func @maskedload1(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vec
return %ld : vector<16xf32>
}
-// CHECK-LABEL: func @maskedload2(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-NEXT: return %[[A1]] : vector<16xf32>
-func.func @maskedload2(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+// CHECK-LABEL: func @fold_maskedload_all_false_static(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-NEXT: return %[[PASS_THRU]] : vector<16xf32>
+func.func @fold_maskedload_all_false_static(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [0] : vector<16xi1>
%ld = vector.maskedload %base[%c0], %mask, %pass_thru
@@ -40,13 +44,13 @@ func.func @maskedload2(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vec
return %ld : vector<16xf32>
}
-// CHECK-LABEL: func @maskedload3(
-// CHECK-SAME: %[[A0:.*]]: memref<?xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-DAG: %[[C:.*]] = arith.constant 8 : index
-// CHECK-NEXT: %[[T:.*]] = vector.load %[[A0]][%[[C]]] : memref<?xf32>, vector<16xf32>
-// CHECK-NEXT: return %[[T]] : vector<16xf32>
-func.func @maskedload3(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+// CHECK-LABEL: func @fold_maskedload_dynamic_non_zero_idx(
+// CHECK-SAME: %[[BASE:.*]]: memref<?xf32>,
+// CHECK-SAME: %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-DAG: %[[IDX:.*]] = arith.constant 8 : index
+// CHECK-NEXT: %[[LOAD:.*]] = vector.load %[[BASE]][%[[IDX]]] : memref<?xf32>, vector<16xf32>
+// CHECK-NEXT: return %[[LOAD]] : vector<16xf32>
+func.func @fold_maskedload_dynamic_non_zero_idx(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
%c8 = arith.constant 8 : index
%mask = vector.constant_mask [16] : vector<16xi1>
%ld = vector.maskedload %base[%c8], %mask, %pass_thru
@@ -54,39 +58,49 @@ func.func @maskedload3(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vect
return %ld : vector<16xf32>
}
-// CHECK-LABEL: func @maskedstore1(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xf32>) {
-// CHECK-NEXT: %[[C:.*]] = arith.constant 0 : index
-// CHECK-NEXT: vector.store %[[A1]], %[[A0]][%[[C]]] : memref<16xf32>, vector<16xf32>
+//-----------------------------------------------------------------------------
+// [Pattern: MaskedStoreFolder]
+//-----------------------------------------------------------------------------
+
+// CHECK-LABEL: func @fold_maskedstore_all_true(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[VALUE:.*]]: vector<16xf32>) {
+// CHECK-NEXT: %[[IDX:.*]] = arith.constant 0 : index
+// CHECK-NEXT: vector.store %[[VALUE]], %[[BASE]][%[[IDX]]] : memref<16xf32>, vector<16xf32>
// CHECK-NEXT: return
-func.func @maskedstore1(%base: memref<16xf32>, %value: vector<16xf32>) {
+func.func @fold_maskedstore_all_true(%base: memref<16xf32>, %value: vector<16xf32>) {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [16] : vector<16xi1>
vector.maskedstore %base[%c0], %mask, %value : memref<16xf32>, vector<16xi1>, vector<16xf32>
return
}
-// CHECK-LABEL: func @maskedstore2(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xf32>) {
+// CHECK-LABEL: func @fold_maskedstore_all_false(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[VALUE:.*]]: vector<16xf32>) {
// CHECK-NEXT: return
-func.func @maskedstore2(%base: memref<16xf32>, %value: vector<16xf32>) {
+func.func @fold_maskedstore_all_false(%base: memref<16xf32>, %value: vector<16xf32>) {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [0] : vector<16xi1>
vector.maskedstore %base[%c0], %mask, %value : memref<16xf32>, vector<16xi1>, vector<16xf32>
return
}
-// CHECK-LABEL: func @gather1(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xi32>,
-// CHECK-SAME: %[[A2:.*]]: vector<16xf32>) -> vector<16xf32> {
+//-----------------------------------------------------------------------------
+// [Pattern: GatherFolder]
+//-----------------------------------------------------------------------------
+
+/// There is no alternative (i.e. simpler) Op for this, hence no-fold.
+
+// CHECK-LABEL: func @no_fold_gather_all_true(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[INDICES:.*]]: vector<16xi32>,
+// CHECK-SAME: %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
// CHECK-NEXT: %[[C:.*]] = arith.constant 0 : index
// CHECK-NEXT: %[[M:.*]] = arith.constant dense<true> : vector<16xi1>
-// CHECK-NEXT: %[[G:.*]] = vector.gather %[[A0]][%[[C]]] [%[[A1]]], %[[M]], %[[A2]] : memref<16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
+// CHECK-NEXT: %[[G:.*]] = vector.gather %[[BASE]][%[[C]]] [%[[INDICES]]], %[[M]], %[[PASS_THRU]] : memref<16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
// CHECK-NEXT: return %[[G]] : vector<16xf32>
-func.func @gather1(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+func.func @no_fold_gather_all_true(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [16] : vector<16xi1>
%ld = vector.gather %base[%c0][%indices], %mask, %pass_thru
@@ -94,12 +108,12 @@ func.func @gather1(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru:
return %ld : vector<16xf32>
}
-// CHECK-LABEL: func @gather2(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xi32>,
-// CHECK-SAME: %[[A2:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-NEXT: return %[[A2]] : vector<16xf32>
-func.func @gather2(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+// CHECK-LABEL: func @fold_gather_all_false(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[INDICES:.*]]: vector<16xi32>,
+// CHECK-SAME: %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-NEXT: return %[[PASS_THRU]] : vector<16xf32>
+func.func @fold_gather_all_false(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [0] : vector<16xi1>
%ld = vector.gather %base[%c0][%indices], %mask, %pass_thru
@@ -107,15 +121,21 @@ func.func @gather2(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru:
return %ld : vector<16xf32>
}
-// CHECK-LABEL: func @scatter1(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xi32>,
-// CHECK-SAME: %[[A2:.*]]: vector<16xf32>) {
+//-----------------------------------------------------------------------------
+// [Pattern: ScatterFolder]
+//-----------------------------------------------------------------------------
+
+/// There is no alternative (i.e. simpler) Op for this, hence no-fold.
+
+// CHECK-LABEL: func @no_fold_scatter_all_true(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[INDICES:.*]]: vector<16xi32>,
+// CHECK-SAME: %[[VALUE:.*]]: vector<16xf32>) {
// CHECK-NEXT: %[[C:.*]] = arith.constant 0 : index
// CHECK-NEXT: %[[M:.*]] = arith.constant dense<true> : vector<16xi1>
-// CHECK-NEXT: vector.scatter %[[A0]][%[[C]]] [%[[A1]]], %[[M]], %[[A2]] : memref<16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32>
+// CHECK-NEXT: vector.scatter %[[BASE]][%[[C]]] [%[[INDICES]]], %[[M]], %[[VALUE]] : memref<16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32>
// CHECK-NEXT: return
-func.func @scatter1(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vector<16xf32>) {
+func.func @no_fold_scatter_all_true(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vector<16xf32>) {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [16] : vector<16xi1>
vector.scatter %base[%c0][%indices], %mask, %value
@@ -123,12 +143,12 @@ func.func @scatter1(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vec
return
}
-// CHECK-LABEL: func @scatter2(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xi32>,
-// CHECK-SAME: %[[A2:.*]]: vector<16xf32>) {
+// CHECK-LABEL: func @fold_scatter_all_false(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[INDICES:.*]]: vector<16xi32>,
+// CHECK-SAME: %[[VALUE:.*]]: vector<16xf32>) {
// CHECK-NEXT: return
-func.func @scatter2(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vector<16xf32>) {
+func.func @fold_scatter_all_false(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vector<16xf32>) {
%c0 = arith.constant 0 : index
%0 = vector.type_cast %base : memref<16xf32> to memref<vector<16xf32>>
%mask = vector.constant_mask [0] : vector<16xi1>
@@ -137,13 +157,17 @@ func.func @scatter2(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vec
return
}
-// CHECK-LABEL: func @expand1(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
+//-----------------------------------------------------------------------------
+// [Pattern: ExpandLoadFolder]
+//-----------------------------------------------------------------------------
+
+// CHECK-LABEL: func @fold_expandload_all_true(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
// CHECK-DAG: %[[C:.*]] = arith.constant 0 : index
-// CHECK-NEXT: %[[T:.*]] = vector.load %[[A0]][%[[C]]] : memref<16xf32>, vector<16xf32>
+// CHECK-NEXT: %[[T:.*]] = vector.load %[[BASE]][%[[C]]] : memref<16xf32>, vector<16xf32>
// CHECK-NEXT: return %[[T]] : vector<16xf32>
-func.func @expand1(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+func.func @fold_expandload_all_true(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [16] : vector<16xi1>
%ld = vector.expandload %base[%c0], %mask, %pass_thru
@@ -151,11 +175,11 @@ func.func @expand1(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<
return %ld : vector<16xf32>
}
-// CHECK-LABEL: func @expand2(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-NEXT: return %[[A1]] : vector<16xf32>
-func.func @expand2(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+// CHECK-LABEL: func @fold_expandload_all_false(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-NEXT: return %[[PASS_THRU]] : vector<16xf32>
+func.func @fold_expandload_all_false(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [0] : vector<16xi1>
%ld = vector.expandload %base[%c0], %mask, %pass_thru
@@ -163,24 +187,28 @@ func.func @expand2(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<
return %ld : vector<16xf32>
}
-// CHECK-LABEL: func @compress1(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xf32>) {
+//-----------------------------------------------------------------------------
+// [Pattern: CompressStoreFolder]
+//-----------------------------------------------------------------------------
+
+// CHECK-LABEL: func @fold_compressstore_all_true(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[VALUE:.*]]: vector<16xf32>) {
// CHECK-NEXT: %[[C:.*]] = arith.constant 0 : index
-// CHECK-NEXT: vector.store %[[A1]], %[[A0]][%[[C]]] : memref<16xf32>, vector<16xf32>
+// CHECK-NEXT: vector.store %[[VALUE]], %[[BASE]][%[[C]]] : memref<16xf32>, vector<16xf32>
// CHECK-NEXT: return
-func.func @compress1(%base: memref<16xf32>, %value: vector<16xf32>) {
+func.func @fold_compressstore_all_true(%base: memref<16xf32>, %value: vector<16xf32>) {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [16] : vector<16xi1>
vector.compressstore %base[%c0], %mask, %value : memref<16xf32>, vector<16xi1>, vector<16xf32>
return
}
-// CHECK-LABEL: func @compress2(
-// CHECK-SAME: %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME: %[[A1:.*]]: vector<16xf32>) {
+// CHECK-LABEL: func @fold_compressstore_all_false(
+// CHECK-SAME: %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME: %[[VALUE:.*]]: vector<16xf32>) {
// CHECK-NEXT: return
-func.func @compress2(%base: memref<16xf32>, %value: vector<16xf32>) {
+func.func @fold_compressstore_all_false(%base: memref<16xf32>, %value: vector<16xf32>) {
%c0 = arith.constant 0 : index
%mask = vector.constant_mask [0] : vector<16xi1>
vector.compressstore %base[%c0], %mask, %value : memref<16xf32>, vector<16xi1>, vector<16xf32>