
[mlir][sparse] Migrate tests to use new syntax #66146

Merged
yinying-lisa-li merged 3 commits into llvm:main from migrate_new_syntax on Sep 13, 2023

Conversation

yinying-lisa-li
Contributor

lvlTypes = [ "compressed" ] to map = (d0) -> (d0 : compressed)
lvlTypes = [ "dense" ] to map = (d0) -> (d0 : dense)

lvlTypes = [ "compressed" ] to map = (d0) -> (d0 : compressed)
lvlTypes = [ "dense" ] to map = (d0) -> (d0 : dense)
@yinying-lisa-li yinying-lisa-li added the mlir:sparse Sparse compiler in MLIR label Sep 12, 2023
@llvmbot llvmbot added mlir:core MLIR Core Infrastructure mlir labels Sep 12, 2023
@llvmbot
Collaborator

llvmbot commented Sep 12, 2023

@llvm/pr-subscribers-mlir
@llvm/pr-subscribers-mlir-core

@llvm/pr-subscribers-mlir-sparse

Changes:
`lvlTypes = [ "compressed" ]` to `map = (d0) -> (d0 : compressed)`
`lvlTypes = [ "dense" ]` to `map = (d0) -> (d0 : dense)`

Patch is 58.67 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/66146.diff

64 Files Affected:

  • (modified) mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/codegen.mlir (+2-2)
  • (modified) mlir/test/Dialect/SparseTensor/codegen_buffer_initialization.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/codegen_to_llvm.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/constant_index_map.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/conversion.mlir (+3-3)
  • (modified) mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/convert_sparse2sparse.mlir (+3-3)
  • (modified) mlir/test/Dialect/SparseTensor/convert_sparse2sparse_element.mlir (+2-2)
  • (modified) mlir/test/Dialect/SparseTensor/fold.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/invalid.mlir (+14-14)
  • (modified) mlir/test/Dialect/SparseTensor/one_shot_bufferize_invalid.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/one_shot_bufferize_tensor_copy_insertion.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/post_rewriting.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/pre_rewriting.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/rejected.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/roundtrip.mlir (+14-14)
  • (modified) mlir/test/Dialect/SparseTensor/scf_1_N_conversion.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_1d.mlir (+2-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_3d.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_affine.mlir (+2-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_expand.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_fp_ops.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_fusion.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_int_ops.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_kernels.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_outbuf.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_reshape.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_storage.mlir (+2-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector.mlir (+3-3)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector_index.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector_ops.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector_peeled.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/unsparsifiable_dense_op.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/vectorize_reduction.mlir (+7-7)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cast.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_collapse_shape.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_complex32.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_complex64.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_complex_ops.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_dot.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_expand_shape.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_index.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_index_dense.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_1d.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_re_im.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom_prod.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom_sum.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reductions.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reductions_min.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reductions_prod.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reshape.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_select.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sign.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_tanh.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_unary.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_vector_ops.mlir (+2-2)
  • (modified) mlir/test/python/dialects/sparse_tensor/dialect.py (+2-2)

diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
index e2f3df005b70d69..8e2de77e6850a31 100644
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
@@ -194,7 +194,7 @@ def SparseTensorEncodingAttr : SparseTensor_Attr<"SparseTensorEncoding",
     ```mlir
     // Sparse vector.
     #SparseVector = #sparse_tensor.encoding<{
-      lvlTypes = [ "compressed" ]
+      map = (d0) -> (d0 : compressed)
     }>
     ... tensor<?xf32, #SparseVector> ...
diff --git a/mlir/test/Dialect/SparseTensor/codegen.mlir b/mlir/test/Dialect/SparseTensor/codegen.mlir
index cf8b1ba87d30357..5155e5ce6c45474 100644
--- a/mlir/test/Dialect/SparseTensor/codegen.mlir
+++ b/mlir/test/Dialect/SparseTensor/codegen.mlir
@@ -1,9 +1,9 @@
 // RUN: mlir-opt %s --sparse-tensor-codegen --canonicalize -cse | FileCheck %s

-#SV = #sparse_tensor.encoding<{ lvlTypes = [ "compressed" ] }>
+#SV = #sparse_tensor.encoding<{ map = (d0) -> (d0 : compressed) }>

 #SparseVector = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed" ],
+  map = (d0) -> (d0 : compressed),
   crdWidth = 64,
   posWidth = 32
 }>
diff --git a/mlir/test/Dialect/SparseTensor/codegen_buffer_initialization.mlir b/mlir/test/Dialect/SparseTensor/codegen_buffer_initialization.mlir
index 0a338064eff323e..640d0f56a0f94b3 100644
--- a/mlir/test/Dialect/SparseTensor/codegen_buffer_initialization.mlir
+++ b/mlir/test/Dialect/SparseTensor/codegen_buffer_initialization.mlir
@@ -1,6 +1,6 @@
 // RUN: mlir-opt %s --sparse-tensor-codegen=enable-buffer-initialization=true --canonicalize --cse | FileCheck %s

-#SV = #sparse_tensor.encoding<{ lvlTypes = [ "compressed" ] }>
+#SV = #sparse_tensor.encoding<{ map = (d0) -> (d0 : compressed) }>

 // CHECK-LABEL: func.func @sparse_alloc_sparse_vector(
 // CHECK-SAME: %[[VAL_0:.*]]: index) -> (memref<?xindex>, memref<?xindex>, memref<?xf64>, !sparse_tensor.storage_specifier
diff --git a/mlir/test/Dialect/SparseTensor/codegen_to_llvm.mlir b/mlir/test/Dialect/SparseTensor/codegen_to_llvm.mlir
index 99b875980654446..b0f7c62ef283aad 100644
--- a/mlir/test/Dialect/SparseTensor/codegen_to_llvm.mlir
+++ b/mlir/test/Dialect/SparseTensor/codegen_to_llvm.mlir
@@ -1,6 +1,6 @@
 // RUN: mlir-opt %s --sparse-tensor-codegen --sparse-storage-specifier-to-llvm | FileCheck %s

-#SparseVector = #sparse_tensor.encoding<{ lvlTypes = ["compressed"] }>
+#SparseVector = #sparse_tensor.encoding<{ map = (d0) -> (d0 : compressed) }>

 // CHECK-LABEL: func @sparse_nop(
 // CHECK-SAME: %[[A0:.*0]]: memref<?xindex>,
diff --git a/mlir/test/Dialect/SparseTensor/constant_index_map.mlir b/mlir/test/Dialect/SparseTensor/constant_index_map.mlir
index 532b95507d54820..bfb4503edbc4e40 100644
--- a/mlir/test/Dialect/SparseTensor/constant_index_map.mlir
+++ b/mlir/test/Dialect/SparseTensor/constant_index_map.mlir
@@ -5,7 +5,7 @@
 #map1 = affine_map<(d0) -> (0, d0)>
 #map2 = affine_map<(d0) -> (d0)>

-#SpVec = #sparse_tensor.encoding<{ lvlTypes = [ "compressed" ] }>
+#SpVec = #sparse_tensor.encoding<{ map = (d0) -> (d0 : compressed) }>

 // CHECK-LABEL: func.func @main(
 // CHECK-SAME: %[[VAL_0:.*0]]: tensor<1x77xi1>,
diff --git a/mlir/test/Dialect/SparseTensor/conversion.mlir b/mlir/test/Dialect/SparseTensor/conversion.mlir
index aa432460173cf7c..ae9e312de7f2747 100644
--- a/mlir/test/Dialect/SparseTensor/conversion.mlir
+++ b/mlir/test/Dialect/SparseTensor/conversion.mlir
@@ -1,17 +1,17 @@
 // RUN: mlir-opt %s --sparse-tensor-conversion --canonicalize --cse | FileCheck %s

 #SparseVector = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed"]
+  map = (d0) -> (d0 : compressed)
 }>

 #SparseVector64 = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed"],
+  map = (d0) -> (d0 : compressed),
   posWidth = 64,
   crdWidth = 64
 }>

 #SparseVector32 = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed"],
+  map = (d0) -> (d0 : compressed),
   posWidth = 32,
   crdWidth = 32
 }>
diff --git a/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir b/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir
index ac9a613134ed500..f2ac0c22e035ee4 100644
--- a/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir
+++ b/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir
@@ -3,7 +3,7 @@
 // RUN: --canonicalize --cse | FileCheck %s --check-prefix=CHECK-RWT

 #SparseVector = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed"]
+  map = (d0) -> (d0 : compressed)
 }>

 #CSR = #sparse_tensor.encoding<{
diff --git a/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir b/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
index 1adc9f9566da3d7..7328ede697d96a9 100644
--- a/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
+++ b/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
@@ -4,7 +4,7 @@
 // RUN: --canonicalize --cse | FileCheck %s --check-prefix=CHECK-RWT

 #SparseVector = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed"]
+  map = (d0) -> (d0 : compressed)
 }>

 #SparseMatrix = #sparse_tensor.encoding<{
diff --git a/mlir/test/Dialect/SparseTensor/convert_sparse2sparse.mlir b/mlir/test/Dialect/SparseTensor/convert_sparse2sparse.mlir
index a0435bd9b0edcc5..296e1bf9030c624 100644
--- a/mlir/test/Dialect/SparseTensor/convert_sparse2sparse.mlir
+++ b/mlir/test/Dialect/SparseTensor/convert_sparse2sparse.mlir
@@ -10,19 +10,19 @@
 // RUN: --canonicalize --cse | FileCheck %s --check-prefix=CHECK-RWT

 #SparseVector64 = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed"],
+  map = (d0) -> (d0 : compressed),
   posWidth = 64,
   crdWidth = 64
 }>

 #SparseVector32 = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed"],
+  map = (d0) -> (d0 : compressed),
   posWidth = 32,
   crdWidth = 32
 }>

 #SparseVector = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed"]
+  map = (d0) -> (d0 : compressed)
 }>

 #SortedCOO2D = #sparse_tensor.encoding<{
diff --git a/mlir/test/Dialect/SparseTensor/convert_sparse2sparse_element.mlir b/mlir/test/Dialect/SparseTensor/convert_sparse2sparse_element.mlir
index 6d20cc01a4eb73f..19a789ea2449a81 100644
--- a/mlir/test/Dialect/SparseTensor/convert_sparse2sparse_element.mlir
+++ b/mlir/test/Dialect/SparseTensor/convert_sparse2sparse_element.mlir
@@ -1,13 +1,13 @@
 // RUN: mlir-opt %s --sparse-tensor-codegen --canonicalize --cse | FileCheck %s

 #SparseVector64 = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed"],
+  map = (d0) -> (d0 : compressed),
   posWidth = 64,
   crdWidth = 64
 }>

 #SparseVector32 = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed"],
+  map = (d0) -> (d0 : compressed),
   posWidth = 32,
   crdWidth = 32
 }>
diff --git a/mlir/test/Dialect/SparseTensor/fold.mlir b/mlir/test/Dialect/SparseTensor/fold.mlir
index 44eb8ac1fb64b07..089431f9e18e907 100644
--- a/mlir/test/Dialect/SparseTensor/fold.mlir
+++ b/mlir/test/Dialect/SparseTensor/fold.mlir
@@ -1,6 +1,6 @@
 // RUN: mlir-opt %s --canonicalize --cse | FileCheck %s

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>

 // CHECK-LABEL: func @sparse_nop_dense2dense_convert(
 // CHECK-SAME: %[[A:.*]]: tensor<64xf32>)
diff --git a/mlir/test/Dialect/SparseTensor/invalid.mlir b/mlir/test/Dialect/SparseTensor/invalid.mlir
index 4849a1f25da6c3f..360dfcce2ef2bab 100644
--- a/mlir/test/Dialect/SparseTensor/invalid.mlir
+++ b/mlir/test/Dialect/SparseTensor/invalid.mlir
@@ -8,7 +8,7 @@ func.func @invalid_new_dense(%arg0: !llvm.ptr<i8>) -> tensor<32xf32> {

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"], posWidth=32, crdWidth=32}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed), posWidth=32, crdWidth=32}>

 func.func @non_static_pack_ret(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x1xi32>)
     -> tensor<?xf64, #SparseVector> {
@@ -20,7 +20,7 @@ func.func @non_static_pack_ret(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coo

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"], posWidth=32, crdWidth=32}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed), posWidth=32, crdWidth=32}>

 func.func @invalid_pack_type(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x1xi32>)
     -> tensor<100xf32, #SparseVector> {
@@ -56,7 +56,7 @@ func.func @invalid_pack_mis_position(%values: tensor<6xf64>, %coordinates: tenso

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"], posWidth=32, crdWidth=32}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed), posWidth=32, crdWidth=32}>

 func.func @invalid_unpack_type(%sp: tensor<100xf32, #SparseVector>, %values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x1xi32>) {
   // expected-error@+1 {{input/output element-types don't match}}
@@ -108,7 +108,7 @@ func.func @invalid_positions_unranked(%arg0: tensor<*xf64>) -> memref<?xindex> {

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"], posWidth=32}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed), posWidth=32}>

 func.func @mismatch_positions_types(%arg0: tensor<128xf64, #SparseVector>) -> memref<?xindex> {
   // expected-error@+1 {{unexpected type for positions}}
@@ -118,7 +118,7 @@ func.func @mismatch_positions_types(%arg0: tensor<128xf64, #SparseVector>) -> me

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>

 func.func @positions_oob(%arg0: tensor<128xf64, #SparseVector>) -> memref<?xindex> {
   // expected-error@+1 {{requested level is out of bounds}}
@@ -144,7 +144,7 @@ func.func @invalid_indices_unranked(%arg0: tensor<*xf64>) -> memref<?xindex> {

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>

 func.func @mismatch_indices_types(%arg0: tensor<?xf64, #SparseVector>) -> memref<?xi32> {
   // expected-error@+1 {{unexpected type for coordinates}}
@@ -154,7 +154,7 @@ func.func @mismatch_indices_types(%arg0: tensor<?xf64, #SparseVector>) -> memref

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>

 func.func @indices_oob(%arg0: tensor<128xf64, #SparseVector>) -> memref<?xindex> {
   // expected-error@+1 {{requested level is out of bounds}}
@@ -172,7 +172,7 @@ func.func @invalid_values_dense(%arg0: tensor<1024xf32>) -> memref<?xf32> {

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>

 func.func @indices_buffer_noncoo(%arg0: tensor<128xf64, #SparseVector>) -> memref<?xindex> {
   // expected-error@+1 {{expected sparse tensor with a COO region}}
@@ -190,7 +190,7 @@ func.func @indices_buffer_dense(%arg0: tensor<1024xf32>) -> memref<?xindex> {

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>

 func.func @mismatch_values_types(%arg0: tensor<?xf64, #SparseVector>) -> memref<?xf32> {
   // expected-error@+1 {{unexpected mismatch in element types}}
@@ -226,7 +226,7 @@ func.func @sparse_slice_stride(%arg0: tensor<2x8xf64, #CSR_SLICE>) -> index {

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>

 func.func @sparse_get_md(%arg0: !sparse_tensor.storage_specifier<#SparseVector>) -> index {
   // expected-error@+1 {{redundant level argument for querying value memory size}}
@@ -237,7 +237,7 @@ func.func @sparse_get_md(%arg0: !sparse_tensor.storage_specifier<#SparseVector>)

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>

 func.func @sparse_get_md(%arg0: !sparse_tensor.storage_specifier<#SparseVector>) -> i64 {
   // expected-error@+1 {{requested slice data on non-slice tensor}}
@@ -248,7 +248,7 @@ func.func @sparse_get_md(%arg0: !sparse_tensor.storage_specifier<#SparseVector>)

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>

 func.func @sparse_get_md(%arg0: !sparse_tensor.storage_specifier<#SparseVector>) -> index {
   // expected-error@+1 {{missing level argument}}
@@ -259,7 +259,7 @@ func.func @sparse_get_md(%arg0: !sparse_tensor.storage_specifier<#SparseVector>)

 // -----

-#SparseVector = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#SparseVector = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>

 func.func @sparse_get_md(%arg0: !sparse_tensor.storage_specifier<#SparseVector>) -> index {
   // expected-error@+1 {{requested level is out of bounds}}
@@ -656,7 +656,7 @@ func.func @invalid_concat_dim(%arg0: tensor<2x4xf64, #DC>,

 // -----

-#C = #sparse_tensor.encoding<{lvlTypes = ["compressed"]}>
+#C = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>
 #DC = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed"]}>
 #DCC = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed", "compressed"]}>
 func.func @invalid_concat_rank_mismatch(%arg0: tensor<2xf64, #C>,
diff --git a/mlir/test/Dialect/SparseTensor/one_shot_bufferize_invalid.mlir b/mlir/test/Dialect/SparseTensor/one_shot_bufferize_invalid.mlir
index 25ecd20c3800351..1540d1876d7f061 100644
--- a/mlir/test/Dialect/SparseTensor/one_shot_bufferize_invalid.mlir
+++ b/mlir/test/Dialect/SparseTensor/one_shot_bufferize_invalid.mlir
@@ -1,7 +1,7 @@
 // RUN: mlir-opt %s -one-shot-bufferi...

@yinying-lisa-li yinying-lisa-li merged commit dbe1be9 into llvm:main Sep 13, 2023
2 of 3 checks passed
@yinying-lisa-li yinying-lisa-li deleted the migrate_new_syntax branch September 13, 2023 15:41
yinying-lisa-li added a commit that referenced this pull request Sep 14, 2023
CSR:
`lvlTypes = [ "dense", "compressed" ]` to `map = (d0, d1) -> (d0 : dense, d1 : compressed)`

CSC:
`lvlTypes = [ "dense", "compressed" ], dimToLvl = affine_map<(d0, d1) -> (d1, d0)>` to `map = (d0, d1) -> (d1 : dense, d0 : compressed)`

This is an ongoing effort: #66146
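A minimal sketch of these two encodings in the new syntax (the `#CSR` and `#CSC` attribute names are illustrative, not tied to a particular test file):

```mlir
// CSR: the row dimension maps to a dense level, the column dimension to a
// compressed level.
#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

// CSC: same level types, but the dimension order on the right-hand side is
// swapped, which subsumes the old separate dimToLvl affine map.
#CSC = #sparse_tensor.encoding<{ map = (d0, d1) -> (d1 : dense, d0 : compressed) }>
```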
yinying-lisa-li added a commit that referenced this pull request Sep 14, 2023
**Dense**
`lvlTypes = [ "dense", "dense" ]` to `map = (d0, d1) -> (d0 : dense, d1 : dense)`
`lvlTypes = [ "dense", "dense" ], dimToLvl = affine_map<(i,j) -> (j,i)>` to `map = (d0, d1) -> (d1 : dense, d0 : dense)`

**DCSR**
`lvlTypes = [ "compressed", "compressed" ]` to `map = (d0, d1) -> (d0 : compressed, d1 : compressed)`

**DCSC**
`lvlTypes = [ "compressed", "compressed" ], dimToLvl = affine_map<(i,j) -> (j,i)>` to `map = (d0, d1) -> (d1 : compressed, d0 : compressed)`

**Block Row**
`lvlTypes = [ "compressed", "dense" ]` to `map = (d0, d1) -> (d0 : compressed, d1 : dense)`

**Block Column**
`lvlTypes = [ "compressed", "dense" ], dimToLvl = affine_map<(i,j) -> (j,i)>` to `map = (d0, d1) -> (d1 : compressed, d0 : dense)`

This is an ongoing effort: #66146, #66309
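A sketch of a few of the matrix formats above in the new syntax (attribute names are illustrative); the row-major vs. column-major distinction is now carried by the order of dimensions on the right-hand side of the map rather than by a separate dimToLvl attribute:

```mlir
#DCSR     = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : compressed, d1 : compressed) }>
#DCSC     = #sparse_tensor.encoding<{ map = (d0, d1) -> (d1 : compressed, d0 : compressed) }>
#BlockRow = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : compressed, d1 : dense) }>
```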
yinying-lisa-li added a commit that referenced this pull request Sep 15, 2023
**COO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)`
`lvlTypes = [ "compressed_nu_no", "singleton_no" ]` to `map = (d0, d1) -> (d0 : compressed(nonunique, nonordered), d1 : singleton(nonordered))`

**SortedCOO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)`

**BCOO**
`lvlTypes = [ "dense", "compressed_hi_nu", "singleton" ]` to `map = (d0, d1, d2) -> (d0 : dense, d1 : compressed(nonunique, high), d2 : singleton)`

**BCSR**
`lvlTypes = [ "compressed", "compressed", "dense", "dense" ], dimToLvl = affine_map<(d0, d1) -> (d0 floordiv 2, d1 floordiv 3, d0 mod 2, d1 mod 3)>` to
`map = ( i, j ) ->
      ( i floordiv 2 : compressed,
        j floordiv 3 : compressed,
        i mod 2 : dense,
        j mod 3 : dense
      )`

**Tensor and other supported formats (e.g. CCC, CDC, CCCC)**

Currently, ELL and slice are not yet supported in the new syntax, and the CHECK tests will be updated once printing is set to output the new syntax.

Previous PRs: #66146, #66309, #66443
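A sketch of two of these encodings in the new syntax (the `#SortedCOO` and `#BCSR` names are illustrative): level properties such as nonunique now appear in parentheses on the level type, and the BCSR blocking folds the old dimToLvl affine expressions directly into the map:

```mlir
#SortedCOO = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
}>

#BCSR = #sparse_tensor.encoding<{
  map = (i, j) -> (i floordiv 2 : compressed,
                   j floordiv 3 : compressed,
                   i mod 2      : dense,
                   j mod 3      : dense)
}>
```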
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
zahiraam pushed a commit to tahonermann/llvm-project that referenced this pull request Oct 24, 2023
zahiraam pushed a commit to tahonermann/llvm-project that referenced this pull request Oct 24, 2023