Conversation

@banach-space
Contributor

Tests in "fold_maskedload_to_load_all_true_dynamic" excercise folders
for:

  • vector.maskedload, vector.maskedstore, vector.scatter,
    vector.gather, vector.compressstore, vector.expandload.

This patch renames and documents these tests in accordance with:

Note: the updated tests are referenced in the Test Formatting Best
Practices section of the MLIR testing guide:

Keeping them aligned with the guidelines ensures consistency and clarity
across MLIR’s test suite.
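For context, here is a condensed sketch of the convention the renamed tests follow (drawn from the diff below): a banner comment names the folding pattern under test, the test name spells out the op and the mask condition (fold_* when the op folds away, no_fold_* when it cannot), and FileCheck captures use descriptive names rather than A0/A1:

//-----------------------------------------------------------------------------
// [Pattern: MaskedLoadFolder]
//-----------------------------------------------------------------------------

// CHECK-LABEL:   func @fold_maskedload_all_true_static(
// CHECK-SAME:      %[[BASE:.*]]: memref<16xf32>,
// CHECK-SAME:      %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
// CHECK:           %[[LOAD:.*]] = vector.load %[[BASE]]
func.func @fold_maskedload_all_true_static(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
  // An all-true mask lets vector.maskedload fold to a plain vector.load.
  %c0 = arith.constant 0 : index
  %mask = vector.constant_mask [16] : vector<16xi1>
  %ld = vector.maskedload %base[%c0], %mask, %pass_thru
    : memref<16xf32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
  return %ld : vector<16xf32>
}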

Tests in "fold_maskedload_to_load_all_true_dynamic" excercise folders
for:
  * vector.maskedload, vector.maskedstore, vector.scatter,
    vector.gather, vector.compressstore, vector.expandload.

This patch renames and documents these tests in accordance with:
  * https://mlir.llvm.org/getting_started/TestingGuide/

Note: the updated tests are referenced in the Test Formatting Best
Practices section of the MLIR testing guide:
  * https://mlir.llvm.org/getting_started/TestingGuide/#test-formatting-best-practices

Keeping them aligned with the guidelines ensures consistency and clarity
across MLIR’s test suite.
@llvmbot
Member

llvmbot commented Oct 20, 2025

@llvm/pr-subscribers-mlir-vector

@llvm/pr-subscribers-mlir

Author: Andrzej Warzyński (banach-space)

Changes

Tests in "fold_maskedload_to_load_all_true_dynamic" excercise folders
for:

  • vector.maskedload, vector.maskedstore, vector.scatter,
    vector.gather, vector.compressstore, vector.expandload.

This patch renames and documents these tests in accordance with:

Note: the updated tests are referenced in the Test Formatting Best
Practices section of the MLIR testing guide:

Keeping them aligned with the guidelines ensures consistency and clarity
across MLIR’s test suite.


Full diff: https://github.com/llvm/llvm-project/pull/164255.diff

1 file affected:

  • (modified) mlir/test/Dialect/Vector/vector-mem-transforms.mlir (+106-78)
diff --git a/mlir/test/Dialect/Vector/vector-mem-transforms.mlir b/mlir/test/Dialect/Vector/vector-mem-transforms.mlir
index e6593320f1bde..2004a47851e2e 100644
--- a/mlir/test/Dialect/Vector/vector-mem-transforms.mlir
+++ b/mlir/test/Dialect/Vector/vector-mem-transforms.mlir
@@ -1,12 +1,16 @@
 // RUN: mlir-opt %s -test-vector-to-vector-lowering | FileCheck %s
 
-// CHECK-LABEL:   func @maskedload0(
-// CHECK-SAME:                      %[[A0:.*]]: memref<?xf32>,
-// CHECK-SAME:                      %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-DAG:       %[[C:.*]] = arith.constant 0 : index
-// CHECK-NEXT:      %[[T:.*]] = vector.load %[[A0]][%[[C]]] : memref<?xf32>, vector<16xf32>
-// CHECK-NEXT:      return %[[T]] : vector<16xf32>
-func.func @maskedload0(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+//-----------------------------------------------------------------------------
+// [Pattern: MaskedLoadFolder]
+//-----------------------------------------------------------------------------
+
+// CHECK-LABEL:   func @fold_maskedload_all_true_dynamic(
+// CHECK-SAME:                      %[[BASE:.*]]: memref<?xf32>,
+// CHECK-SAME:                      %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-DAG:       %[[IDX:.*]] = arith.constant 0 : index
+// CHECK-NEXT:      %[[LOAD:.*]] = vector.load %[[BASE]][%[[IDX]]] : memref<?xf32>, vector<16xf32>
+// CHECK-NEXT:      return %[[LOAD]] : vector<16xf32>
+func.func @fold_maskedload_all_true_dynamic(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [16] : vector<16xi1>
   %ld = vector.maskedload %base[%c0], %mask, %pass_thru
@@ -14,13 +18,13 @@ func.func @maskedload0(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vect
   return %ld : vector<16xf32>
 }
 
-// CHECK-LABEL:   func @maskedload1(
-// CHECK-SAME:                      %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                      %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-DAG:       %[[C:.*]] = arith.constant 0 : index
-// CHECK-NEXT:      %[[T:.*]] = vector.load %[[A0]][%[[C]]] : memref<16xf32>, vector<16xf32>
-// CHECK-NEXT:      return %[[T]] : vector<16xf32>
-func.func @maskedload1(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+// CHECK-LABEL:   func @fold_maskedload_all_true_static(
+// CHECK-SAME:                      %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                      %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-DAG:       %[[IDX:.*]] = arith.constant 0 : index
+// CHECK-NEXT:      %[[LOAD:.*]] = vector.load %[[BASE]][%[[IDX]]] : memref<16xf32>, vector<16xf32>
+// CHECK-NEXT:      return %[[LOAD]] : vector<16xf32>
+func.func @fold_maskedload_all_true_static(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [16] : vector<16xi1>
   %ld = vector.maskedload %base[%c0], %mask, %pass_thru
@@ -28,11 +32,11 @@ func.func @maskedload1(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vec
   return %ld : vector<16xf32>
 }
 
-// CHECK-LABEL:   func @maskedload2(
-// CHECK-SAME:                      %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                      %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-NEXT:      return %[[A1]] : vector<16xf32>
-func.func @maskedload2(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+// CHECK-LABEL:   func @fold_maskedload_all_false_static(
+// CHECK-SAME:                      %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                      %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-NEXT:      return %[[PASS_THRU]] : vector<16xf32>
+func.func @fold_maskedload_all_false_static(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [0] : vector<16xi1>
   %ld = vector.maskedload %base[%c0], %mask, %pass_thru
@@ -40,13 +44,13 @@ func.func @maskedload2(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vec
   return %ld : vector<16xf32>
 }
 
-// CHECK-LABEL:   func @maskedload3(
-// CHECK-SAME:                      %[[A0:.*]]: memref<?xf32>,
-// CHECK-SAME:                      %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-DAG:       %[[C:.*]] = arith.constant 8 : index
-// CHECK-NEXT:      %[[T:.*]] = vector.load %[[A0]][%[[C]]] : memref<?xf32>, vector<16xf32>
-// CHECK-NEXT:      return %[[T]] : vector<16xf32>
-func.func @maskedload3(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+// CHECK-LABEL:   func @fold_maskedload_dynamic_non_zero_idx(
+// CHECK-SAME:                      %[[BASE:.*]]: memref<?xf32>,
+// CHECK-SAME:                      %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-DAG:       %[[IDX:.*]] = arith.constant 8 : index
+// CHECK-NEXT:      %[[LOAD:.*]] = vector.load %[[BASE]][%[[IDX]]] : memref<?xf32>, vector<16xf32>
+// CHECK-NEXT:      return %[[LOAD]] : vector<16xf32>
+func.func @fold_maskedload_dynamic_non_zero_idx(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
   %c8 = arith.constant 8 : index
   %mask = vector.constant_mask [16] : vector<16xi1>
   %ld = vector.maskedload %base[%c8], %mask, %pass_thru
@@ -54,39 +58,49 @@ func.func @maskedload3(%base: memref<?xf32>, %pass_thru: vector<16xf32>) -> vect
   return %ld : vector<16xf32>
 }
 
-// CHECK-LABEL:   func @maskedstore1(
-// CHECK-SAME:                       %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                       %[[A1:.*]]: vector<16xf32>) {
-// CHECK-NEXT:      %[[C:.*]] = arith.constant 0 : index
-// CHECK-NEXT:      vector.store %[[A1]], %[[A0]][%[[C]]] : memref<16xf32>, vector<16xf32>
+//-----------------------------------------------------------------------------
+// [Pattern: MaskedStoreFolder]
+//-----------------------------------------------------------------------------
+
+// CHECK-LABEL:   func @fold_maskedstore_all_true(
+// CHECK-SAME:                       %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                       %[[VALUE:.*]]: vector<16xf32>) {
+// CHECK-NEXT:      %[[IDX:.*]] = arith.constant 0 : index
+// CHECK-NEXT:      vector.store %[[VALUE]], %[[BASE]][%[[IDX]]] : memref<16xf32>, vector<16xf32>
 // CHECK-NEXT:      return
-func.func @maskedstore1(%base: memref<16xf32>, %value: vector<16xf32>) {
+func.func @fold_maskedstore_all_true(%base: memref<16xf32>, %value: vector<16xf32>) {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [16] : vector<16xi1>
   vector.maskedstore %base[%c0], %mask, %value : memref<16xf32>, vector<16xi1>, vector<16xf32>
   return
 }
 
-// CHECK-LABEL:   func @maskedstore2(
-// CHECK-SAME:                       %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                       %[[A1:.*]]: vector<16xf32>) {
+// CHECK-LABEL:   func @fold_maskedstore_all_false(
+// CHECK-SAME:                       %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                       %[[VALUE:.*]]: vector<16xf32>) {
 // CHECK-NEXT:      return
-func.func @maskedstore2(%base: memref<16xf32>, %value: vector<16xf32>)  {
+func.func @fold_maskedstore_all_false(%base: memref<16xf32>, %value: vector<16xf32>)  {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [0] : vector<16xi1>
   vector.maskedstore %base[%c0], %mask, %value : memref<16xf32>, vector<16xi1>, vector<16xf32>
   return
 }
 
-// CHECK-LABEL:   func @gather1(
-// CHECK-SAME:                  %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                  %[[A1:.*]]: vector<16xi32>,
-// CHECK-SAME:                  %[[A2:.*]]: vector<16xf32>) -> vector<16xf32> {
+//-----------------------------------------------------------------------------
+// [Pattern: GatherFolder]
+//-----------------------------------------------------------------------------
+
+/// There is no alternative (i.e. simpler) Op for this, hence no-fold.
+
+// CHECK-LABEL:   func @no_fold_gather_all_true(
+// CHECK-SAME:                  %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                  %[[INDICES:.*]]: vector<16xi32>,
+// CHECK-SAME:                  %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
 // CHECK-NEXT:      %[[C:.*]] = arith.constant 0 : index
 // CHECK-NEXT:      %[[M:.*]] = arith.constant dense<true> : vector<16xi1>
-// CHECK-NEXT:      %[[G:.*]] = vector.gather %[[A0]][%[[C]]] [%[[A1]]], %[[M]], %[[A2]] : memref<16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
+// CHECK-NEXT:      %[[G:.*]] = vector.gather %[[BASE]][%[[C]]] [%[[INDICES]]], %[[M]], %[[PASS_THRU]] : memref<16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32> into vector<16xf32>
 // CHECK-NEXT:      return %[[G]] : vector<16xf32>
-func.func @gather1(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+func.func @no_fold_gather_all_true(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [16] : vector<16xi1>
   %ld = vector.gather %base[%c0][%indices], %mask, %pass_thru
@@ -94,12 +108,12 @@ func.func @gather1(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru:
   return %ld : vector<16xf32>
 }
 
-// CHECK-LABEL:   func @gather2(
-// CHECK-SAME:                  %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                  %[[A1:.*]]: vector<16xi32>,
-// CHECK-SAME:                  %[[A2:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-NEXT:      return %[[A2]] : vector<16xf32>
-func.func @gather2(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+// CHECK-LABEL:   func @fold_gather_all_false(
+// CHECK-SAME:                  %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                  %[[INDICES:.*]]: vector<16xi32>,
+// CHECK-SAME:                  %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-NEXT:      return %[[PASS_THRU]] : vector<16xf32>
+func.func @fold_gather_all_false(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [0] : vector<16xi1>
   %ld = vector.gather %base[%c0][%indices], %mask, %pass_thru
@@ -107,15 +121,21 @@ func.func @gather2(%base: memref<16xf32>, %indices: vector<16xi32>, %pass_thru:
   return %ld : vector<16xf32>
 }
 
-// CHECK-LABEL:   func @scatter1(
-// CHECK-SAME:                   %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                   %[[A1:.*]]: vector<16xi32>,
-// CHECK-SAME:                   %[[A2:.*]]: vector<16xf32>) {
+//-----------------------------------------------------------------------------
+// [Pattern: ScatterFolder]
+//-----------------------------------------------------------------------------
+
+/// There is no alternative (i.e. simpler) Op for this, hence no-fold.
+
+// CHECK-LABEL:   func @no_fold_scatter_all_true(
+// CHECK-SAME:                   %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                   %[[INDICES:.*]]: vector<16xi32>,
+// CHECK-SAME:                   %[[VALUE:.*]]: vector<16xf32>) {
 // CHECK-NEXT:      %[[C:.*]] = arith.constant 0 : index
 // CHECK-NEXT:      %[[M:.*]] = arith.constant dense<true> : vector<16xi1>
-// CHECK-NEXT:      vector.scatter %[[A0]][%[[C]]] [%[[A1]]], %[[M]], %[[A2]] : memref<16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32>
+// CHECK-NEXT:      vector.scatter %[[BASE]][%[[C]]] [%[[INDICES]]], %[[M]], %[[VALUE]] : memref<16xf32>, vector<16xi32>, vector<16xi1>, vector<16xf32>
 // CHECK-NEXT:      return
-func.func @scatter1(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vector<16xf32>) {
+func.func @no_fold_scatter_all_true(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vector<16xf32>) {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [16] : vector<16xi1>
   vector.scatter %base[%c0][%indices], %mask, %value
@@ -123,12 +143,12 @@ func.func @scatter1(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vec
   return
 }
 
-// CHECK-LABEL:   func @scatter2(
-// CHECK-SAME:                   %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                   %[[A1:.*]]: vector<16xi32>,
-// CHECK-SAME:                   %[[A2:.*]]: vector<16xf32>) {
+// CHECK-LABEL:   func @fold_scatter_all_false(
+// CHECK-SAME:                   %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                   %[[INDICES:.*]]: vector<16xi32>,
+// CHECK-SAME:                   %[[VALUE:.*]]: vector<16xf32>) {
 // CHECK-NEXT:      return
-func.func @scatter2(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vector<16xf32>) {
+func.func @fold_scatter_all_false(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vector<16xf32>) {
   %c0 = arith.constant 0 : index
   %0 = vector.type_cast %base : memref<16xf32> to memref<vector<16xf32>>
   %mask = vector.constant_mask [0] : vector<16xi1>
@@ -137,13 +157,17 @@ func.func @scatter2(%base: memref<16xf32>, %indices: vector<16xi32>, %value: vec
   return
 }
 
-// CHECK-LABEL:   func @expand1(
-// CHECK-SAME:                  %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                  %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
+//-----------------------------------------------------------------------------
+// [Pattern: ExpandLoadFolder]
+//-----------------------------------------------------------------------------
+
+// CHECK-LABEL:   func @fold_expandload_all_true(
+// CHECK-SAME:                  %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                  %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
 // CHECK-DAG:       %[[C:.*]] = arith.constant 0 : index
-// CHECK-NEXT:      %[[T:.*]] = vector.load %[[A0]][%[[C]]] : memref<16xf32>, vector<16xf32>
+// CHECK-NEXT:      %[[T:.*]] = vector.load %[[BASE]][%[[C]]] : memref<16xf32>, vector<16xf32>
 // CHECK-NEXT:      return %[[T]] : vector<16xf32>
-func.func @expand1(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+func.func @fold_expandload_all_true(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [16] : vector<16xi1>
   %ld = vector.expandload %base[%c0], %mask, %pass_thru
@@ -151,11 +175,11 @@ func.func @expand1(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<
   return %ld : vector<16xf32>
 }
 
-// CHECK-LABEL:   func @expand2(
-// CHECK-SAME:                  %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                  %[[A1:.*]]: vector<16xf32>) -> vector<16xf32> {
-// CHECK-NEXT:      return %[[A1]] : vector<16xf32>
-func.func @expand2(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
+// CHECK-LABEL:   func @fold_expandload_all_false(
+// CHECK-SAME:                  %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                  %[[PASS_THRU:.*]]: vector<16xf32>) -> vector<16xf32> {
+// CHECK-NEXT:      return %[[PASS_THRU]] : vector<16xf32>
+func.func @fold_expandload_all_false(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<16xf32> {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [0] : vector<16xi1>
   %ld = vector.expandload %base[%c0], %mask, %pass_thru
@@ -163,24 +187,28 @@ func.func @expand2(%base: memref<16xf32>, %pass_thru: vector<16xf32>) -> vector<
   return %ld : vector<16xf32>
 }
 
-// CHECK-LABEL:   func @compress1(
-// CHECK-SAME:                    %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                    %[[A1:.*]]: vector<16xf32>) {
+//-----------------------------------------------------------------------------
+// [Pattern: CompressStoreFolder]
+//-----------------------------------------------------------------------------
+
+// CHECK-LABEL:   func @fold_compressstore_all_true(
+// CHECK-SAME:                    %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                    %[[VALUE:.*]]: vector<16xf32>) {
 // CHECK-NEXT:      %[[C:.*]] = arith.constant 0 : index
-// CHECK-NEXT:      vector.store %[[A1]], %[[A0]][%[[C]]] : memref<16xf32>, vector<16xf32>
+// CHECK-NEXT:      vector.store %[[VALUE]], %[[BASE]][%[[C]]] : memref<16xf32>, vector<16xf32>
 // CHECK-NEXT:      return
-func.func @compress1(%base: memref<16xf32>, %value: vector<16xf32>) {
+func.func @fold_compressstore_all_true(%base: memref<16xf32>, %value: vector<16xf32>) {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [16] : vector<16xi1>
   vector.compressstore %base[%c0], %mask, %value  : memref<16xf32>, vector<16xi1>, vector<16xf32>
   return
 }
 
-// CHECK-LABEL:   func @compress2(
-// CHECK-SAME:                    %[[A0:.*]]: memref<16xf32>,
-// CHECK-SAME:                    %[[A1:.*]]: vector<16xf32>) {
+// CHECK-LABEL:   func @fold_compressstore_all_false(
+// CHECK-SAME:                    %[[BASE:.*]]: memref<16xf32>,
+// CHECK-SAME:                    %[[VALUE:.*]]: vector<16xf32>) {
 // CHECK-NEXT:      return
-func.func @compress2(%base: memref<16xf32>, %value: vector<16xf32>) {
+func.func @fold_compressstore_all_false(%base: memref<16xf32>, %value: vector<16xf32>) {
   %c0 = arith.constant 0 : index
   %mask = vector.constant_mask [0] : vector<16xi1>
   vector.compressstore %base[%c0], %mask, %value : memref<16xf32>, vector<16xi1>, vector<16xf32>

@banach-space banach-space requested a review from dcaballe October 20, 2025 15:00
@banach-space banach-space merged commit fc7f340 into llvm:main Oct 24, 2025
20 of 22 checks passed
@banach-space banach-space deleted the andrzej/vector/rename_tests_nfc branch October 24, 2025 08:33
dvbuka pushed a commit to dvbuka/llvm-project that referenced this pull request Oct 27, 2025
Lukacma pushed a commit to Lukacma/llvm-project that referenced this pull request Oct 29, 2025
aokblast pushed a commit to aokblast/llvm-project that referenced this pull request Oct 30, 2025