Conversation

@ginsbach
Contributor

@ginsbach ginsbach commented Nov 9, 2025

Currently, the following two snippets get treated very differently from each other (https://godbolt.org/z/rYGj9TGz6):

define <8 x i8> @foo(<8 x i8> %x, <8 x i8> %y) local_unnamed_addr #0 {
entry:
  %0 = shufflevector <8 x i8> %x, <8 x i8> %y, <8 x i32>
       <i32 0, i32 8, i32 1, i32 9, i32 2, i32 10, i32 3, i32 11>
  ret <8 x i8> %0
}

define <8 x i8> @bar(<8 x i8> %x, <8 x i8> %y) local_unnamed_addr #0 {
entry:
  %0 = shufflevector <8 x i8> %x, <8 x i8> %y, <8 x i32>
       <i32 8, i32 0, i32 9, i32 1, i32 10, i32 2, i32 11, i32 3>
  ret <8 x i8> %0
}
foo:                                    // @foo
        zip1    v0.8b, v0.8b, v1.8b
        ret
.LCPI1_0:
        .byte   8                               // 0x8
        .byte   0                               // 0x0
        .byte   9                               // 0x9
        .byte   1                               // 0x1
        .byte   10                              // 0xa
        .byte   2                               // 0x2
        .byte   11                              // 0xb
        .byte   3                               // 0x3
bar:                                    // @bar
        adrp    x8, .LCPI1_0
        mov     v0.d[1], v1.d[0]
        ldr     d1, [x8, :lo12:.LCPI1_0]
        tbl     v0.8b, { v0.16b }, v1.8b
        ret

The reason is that isZIPMask does not recognise the pattern when the operands are flipped.

This PR fixes isZIPMask so that both foo and bar get compiled as expected:

foo:                                    // @foo
	zip1	v0.8b, v0.8b, v1.8b
	ret
bar:                                    // @bar
	zip1	v0.8b, v1.8b, v0.8b
	ret

I intend to open a similar follow-up PR for isTRNMask, which seems to have the same problem.

I noticed this while working on #137447, though the change does not by itself fix that issue.
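
For intuition: zip1 interleaves the low halves of its two operands, so bar's mask is exactly zip1 with the operands swapped. A minimal standalone check of that equivalence (illustrative only, not part of the patch; zip1Mask is a hypothetical helper):

#include <cassert>
#include <vector>

// Build the shuffle mask produced by zip1(a, b) over NumElts lanes:
// result[2k] takes lane k of the first operand, result[2k+1] takes lane k
// of the second; mask values >= NumElts refer to the second operand.
std::vector<int> zip1Mask(unsigned NumElts, bool SwapOperands) {
  std::vector<int> M(NumElts);
  for (unsigned K = 0; K != NumElts / 2; ++K) {
    M[2 * K] = SwapOperands ? int(NumElts + K) : int(K);
    M[2 * K + 1] = SwapOperands ? int(K) : int(NumElts + K);
  }
  return M;
}

int main() {
  // foo's mask <0,8,1,9,2,10,3,11> is zip1(x, y) ...
  assert((zip1Mask(8, false) == std::vector<int>{0, 8, 1, 9, 2, 10, 3, 11}));
  // ... and bar's mask <8,0,9,1,10,2,11,3> is zip1(y, x).
  assert((zip1Mask(8, true) == std::vector<int>{8, 0, 9, 1, 10, 2, 11, 3}));
}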

@github-actions

github-actions bot commented Nov 9, 2025

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository, in which case you can instead tag reviewers by name in a comment using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR by adding a comment "Ping". The common courtesy "ping" rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

@ginsbach ginsbach marked this pull request as ready for review November 9, 2025 19:19
@llvmbot
Member

llvmbot commented Nov 9, 2025

@llvm/pr-subscribers-backend-aarch64

Author: Philip Ginsbach-Chen (ginsbach)

Patch is 25.29 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/167235.diff

10 Files Affected:

  • (modified) llvm/lib/Target/AArch64/AArch64ISelLowering.cpp (+19-7)
  • (modified) llvm/lib/Target/AArch64/AArch64PerfectShuffle.h (+36-21)
  • (modified) llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp (+1-1)
  • (modified) llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerLowering.cpp (+2-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-zip.ll (+12-22)
  • (modified) llvm/test/CodeGen/AArch64/fixed-vector-deinterleave.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/insert-extend.ll (+44-44)
  • (modified) llvm/test/CodeGen/AArch64/insert-subvector.ll (+11-26)
  • (modified) llvm/test/CodeGen/AArch64/neon-widen-shuffle.ll (+2-3)
  • (modified) llvm/test/CodeGen/AArch64/reduce-shuffle.ll (+12-14)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index c8a038fa99b30..8b31adae09d38 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -14582,9 +14582,12 @@ SDValue AArch64TargetLowering::LowerVECTOR_SHUFFLE(SDValue Op,
   }
 
   unsigned WhichResult;
-  if (isZIPMask(ShuffleMask, NumElts, WhichResult)) {
+  unsigned OperandOrder;
+  if (isZIPMask(ShuffleMask, NumElts, WhichResult, OperandOrder)) {
     unsigned Opc = (WhichResult == 0) ? AArch64ISD::ZIP1 : AArch64ISD::ZIP2;
-    return DAG.getNode(Opc, DL, V1.getValueType(), V1, V2);
+    return DAG.getNode(Opc, DL, V1.getValueType(),
+                       (OperandOrder == 0) ? V1 : V2,
+                       (OperandOrder == 0) ? V2 : V1);
   }
   if (isUZPMask(ShuffleMask, NumElts, WhichResult)) {
     unsigned Opc = (WhichResult == 0) ? AArch64ISD::UZP1 : AArch64ISD::UZP2;
@@ -16306,7 +16309,7 @@ bool AArch64TargetLowering::isShuffleMaskLegal(ArrayRef<int> M, EVT VT) const {
           isSingletonEXTMask(M, VT, DummyUnsigned) ||
           isTRNMask(M, NumElts, DummyUnsigned) ||
           isUZPMask(M, NumElts, DummyUnsigned) ||
-          isZIPMask(M, NumElts, DummyUnsigned) ||
+          isZIPMask(M, NumElts, DummyUnsigned, DummyUnsigned) ||
           isTRN_v_undef_Mask(M, VT, DummyUnsigned) ||
           isUZP_v_undef_Mask(M, VT, DummyUnsigned) ||
           isZIP_v_undef_Mask(M, VT, DummyUnsigned) ||
@@ -31278,10 +31281,15 @@ SDValue AArch64TargetLowering::LowerFixedLengthVECTOR_SHUFFLEToSVE(
   }
 
   unsigned WhichResult;
-  if (isZIPMask(ShuffleMask, VT.getVectorNumElements(), WhichResult) &&
+  unsigned OperandOrder;
+  if (isZIPMask(ShuffleMask, VT.getVectorNumElements(), WhichResult,
+                OperandOrder) &&
       WhichResult == 0)
     return convertFromScalableVector(
-        DAG, VT, DAG.getNode(AArch64ISD::ZIP1, DL, ContainerVT, Op1, Op2));
+        DAG, VT,
+        DAG.getNode(AArch64ISD::ZIP1, DL, ContainerVT,
+                    OperandOrder == 0 ? Op1 : Op2,
+                    OperandOrder == 0 ? Op2 : Op1));
 
   if (isTRNMask(ShuffleMask, VT.getVectorNumElements(), WhichResult)) {
     unsigned Opc = (WhichResult == 0) ? AArch64ISD::TRN1 : AArch64ISD::TRN2;
@@ -31326,10 +31334,14 @@ SDValue AArch64TargetLowering::LowerFixedLengthVECTOR_SHUFFLEToSVE(
       return convertFromScalableVector(DAG, VT, Op);
     }
 
-    if (isZIPMask(ShuffleMask, VT.getVectorNumElements(), WhichResult) &&
+    if (isZIPMask(ShuffleMask, VT.getVectorNumElements(), WhichResult,
+                  OperandOrder) &&
         WhichResult != 0)
       return convertFromScalableVector(
-          DAG, VT, DAG.getNode(AArch64ISD::ZIP2, DL, ContainerVT, Op1, Op2));
+          DAG, VT,
+          DAG.getNode(AArch64ISD::ZIP2, DL, ContainerVT,
+                      OperandOrder == 0 ? Op1 : Op2,
+                      OperandOrder == 0 ? Op2 : Op1));
 
     if (isUZPMask(ShuffleMask, VT.getVectorNumElements(), WhichResult)) {
       unsigned Opc = (WhichResult == 0) ? AArch64ISD::UZP1 : AArch64ISD::UZP2;
diff --git a/llvm/lib/Target/AArch64/AArch64PerfectShuffle.h b/llvm/lib/Target/AArch64/AArch64PerfectShuffle.h
index f7beca1b8b77e..633e3d5c2e5ea 100644
--- a/llvm/lib/Target/AArch64/AArch64PerfectShuffle.h
+++ b/llvm/lib/Target/AArch64/AArch64PerfectShuffle.h
@@ -6623,34 +6623,49 @@ inline unsigned getPerfectShuffleCost(llvm::ArrayRef<int> M) {
 
 /// Return true for zip1 or zip2 masks of the form:
 ///  <0,  8, 1,  9, 2, 10, 3, 11> or
-///  <4, 12, 5, 13, 6, 14, 7, 15>
+///  <4, 12, 5, 13, 6, 14, 7, 15> or
+///  <8,  0, 9,  1, 10, 2, 11, 3> or
+///  <12, 4, 13, 5, 14, 6, 15, 7>
 inline bool isZIPMask(ArrayRef<int> M, unsigned NumElts,
-                      unsigned &WhichResultOut) {
+                      unsigned &WhichResultOut, unsigned &OperandOrderOut) {
   if (NumElts % 2 != 0)
     return false;
-  // Check the first non-undef element for which half to use.
-  unsigned WhichResult = 2;
-  for (unsigned i = 0; i != NumElts / 2; i++) {
-    if (M[i * 2] >= 0) {
-      WhichResult = ((unsigned)M[i * 2] == i ? 0 : 1);
-      break;
-    } else if (M[i * 2 + 1] >= 0) {
-      WhichResult = ((unsigned)M[i * 2 + 1] == NumElts + i ? 0 : 1);
-      break;
-    }
-  }
-  if (WhichResult == 2)
-    return false;
 
+  // "Variant" refers to the distinction between zip1 and zip2, while
+  // "Order" refers to the sequence of input registers (matching vs flipped).
+  bool Variant0Order0 = true;
+  bool Variant1Order0 = true;
+  bool Variant0Order1 = true;
+  bool Variant1Order1 = true;
   // Check all elements match.
-  unsigned Idx = WhichResult * NumElts / 2;
   for (unsigned i = 0; i != NumElts; i += 2) {
-    if ((M[i] >= 0 && (unsigned)M[i] != Idx) ||
-        (M[i + 1] >= 0 && (unsigned)M[i + 1] != Idx + NumElts))
-      return false;
-    Idx += 1;
+    if (M[i] >= 0) {
+      if ((unsigned)M[i] != i / 2)
+        Variant0Order0 = false;
+      if ((unsigned)M[i] != NumElts / 2 + i / 2)
+        Variant1Order0 = false;
+      if ((unsigned)M[i] != NumElts + i / 2)
+        Variant0Order1 = false;
+      if ((unsigned)M[i] != NumElts + NumElts / 2 + i / 2)
+        Variant1Order1 = false;
+    }
+    if (M[i + 1] >= 0) {
+      if ((unsigned)M[i + 1] != NumElts + i / 2)
+        Variant0Order0 = false;
+      if ((unsigned)M[i + 1] != NumElts + NumElts / 2 + i / 2)
+        Variant1Order0 = false;
+      if ((unsigned)M[i + 1] != i / 2)
+        Variant0Order1 = false;
+      if ((unsigned)M[i + 1] != NumElts / 2 + i / 2)
+        Variant1Order1 = false;
+    }
   }
-  WhichResultOut = WhichResult;
+
+  if (Variant0Order0 + Variant1Order0 + Variant0Order1 + Variant1Order1 != 1)
+    return false;
+
+  WhichResultOut = (Variant0Order0 || Variant0Order1) ? 0 : 1;
+  OperandOrderOut = (Variant0Order0 || Variant1Order0) ? 0 : 1;
   return true;
 }
 
diff --git a/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp b/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
index 197aae6e03cb1..8546f9eef08d3 100644
--- a/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
+++ b/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp
@@ -6041,7 +6041,7 @@ AArch64TTIImpl::getShuffleCost(TTI::ShuffleKind Kind, VectorType *DstTy,
   if (LT.second.isFixedLengthVector() &&
       LT.second.getVectorNumElements() == Mask.size() &&
       (Kind == TTI::SK_PermuteTwoSrc || Kind == TTI::SK_PermuteSingleSrc) &&
-      (isZIPMask(Mask, LT.second.getVectorNumElements(), Unused) ||
+      (isZIPMask(Mask, LT.second.getVectorNumElements(), Unused, Unused) ||
        isUZPMask(Mask, LT.second.getVectorNumElements(), Unused) ||
        isREVMask(Mask, LT.second.getScalarSizeInBits(),
                  LT.second.getVectorNumElements(), 16) ||
diff --git a/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerLowering.cpp b/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerLowering.cpp
index 23dcaea2ac1a4..b1945dc76f269 100644
--- a/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerLowering.cpp
+++ b/llvm/lib/Target/AArch64/GISel/AArch64PostLegalizerLowering.cpp
@@ -252,10 +252,11 @@ bool matchZip(MachineInstr &MI, MachineRegisterInfo &MRI,
               ShuffleVectorPseudo &MatchInfo) {
   assert(MI.getOpcode() == TargetOpcode::G_SHUFFLE_VECTOR);
   unsigned WhichResult;
+  unsigned OperandOrder;
   ArrayRef<int> ShuffleMask = MI.getOperand(3).getShuffleMask();
   Register Dst = MI.getOperand(0).getReg();
   unsigned NumElts = MRI.getType(Dst).getNumElements();
-  if (!isZIPMask(ShuffleMask, NumElts, WhichResult))
+  if (!isZIPMask(ShuffleMask, NumElts, WhichResult, OperandOrder))
     return false;
   unsigned Opc = (WhichResult == 0) ? AArch64::G_ZIP1 : AArch64::G_ZIP2;
   Register V1 = MI.getOperand(1).getReg();
diff --git a/llvm/test/CodeGen/AArch64/arm64-zip.ll b/llvm/test/CodeGen/AArch64/arm64-zip.ll
index 9b06620590cda..c1d4a317cdf3f 100644
--- a/llvm/test/CodeGen/AArch64/arm64-zip.ll
+++ b/llvm/test/CodeGen/AArch64/arm64-zip.ll
@@ -355,48 +355,38 @@ define <8 x i16> @combine_v8i16_undef(<4 x i16> %0, <4 x i16> %1) {
   ret <8 x i16> %3
 }
 
-; FIXME: This could be zip1 too, 8,0,9,1... pattern is handled
 define <16 x i8> @combine_v8i16_8first(<8 x i8> %0, <8 x i8> %1) {
 ; CHECK-SD-LABEL: combine_v8i16_8first:
 ; CHECK-SD:       // %bb.0:
-; CHECK-SD-NEXT:    // kill: def $d1 killed $d1 def $q1_q2
-; CHECK-SD-NEXT:    adrp x8, .LCPI25_0
-; CHECK-SD-NEXT:    fmov d2, d0
-; CHECK-SD-NEXT:    ldr q3, [x8, :lo12:.LCPI25_0]
-; CHECK-SD-NEXT:    tbl.16b v0, { v1, v2 }, v3
+; CHECK-SD-NEXT:    // kill: def $d0 killed $d0 def $q0
+; CHECK-SD-NEXT:    // kill: def $d1 killed $d1 def $q1
+; CHECK-SD-NEXT:    zip1.16b v0, v0, v1
 ; CHECK-SD-NEXT:    ret
 ;
 ; CHECK-GI-LABEL: combine_v8i16_8first:
 ; CHECK-GI:       // %bb.0:
-; CHECK-GI-NEXT:    // kill: def $d0 killed $d0 def $q31_q0
-; CHECK-GI-NEXT:    adrp x8, .LCPI25_0
-; CHECK-GI-NEXT:    fmov d31, d1
-; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI25_0]
-; CHECK-GI-NEXT:    tbl.16b v0, { v31, v0 }, v2
+; CHECK-GI-NEXT:    // kill: def $d0 killed $d0 def $q0
+; CHECK-GI-NEXT:    // kill: def $d1 killed $d1 def $q1
+; CHECK-GI-NEXT:    zip1.16b v0, v1, v0
 ; CHECK-GI-NEXT:    ret
   %3 = shufflevector <8 x i8> %1, <8 x i8> %0, <16 x i32> <i32 8, i32 0, i32 9, i32 1, i32 10, i32 2, i32 11, i32 3, i32 12, i32 4, i32 13, i32 5, i32 14, i32 6, i32 15, i32 7>
   ret <16 x i8> %3
 }
 
 
-; FIXME: This could be zip1 too, 8,0,9,1... pattern is handled
 define <16 x i8> @combine_v8i16_8firstundef(<8 x i8> %0, <8 x i8> %1) {
 ; CHECK-SD-LABEL: combine_v8i16_8firstundef:
 ; CHECK-SD:       // %bb.0:
-; CHECK-SD-NEXT:    // kill: def $d1 killed $d1 def $q1_q2
-; CHECK-SD-NEXT:    adrp x8, .LCPI26_0
-; CHECK-SD-NEXT:    fmov d2, d0
-; CHECK-SD-NEXT:    ldr q3, [x8, :lo12:.LCPI26_0]
-; CHECK-SD-NEXT:    tbl.16b v0, { v1, v2 }, v3
+; CHECK-SD-NEXT:    // kill: def $d0 killed $d0 def $q0
+; CHECK-SD-NEXT:    // kill: def $d1 killed $d1 def $q1
+; CHECK-SD-NEXT:    zip1.16b v0, v0, v1
 ; CHECK-SD-NEXT:    ret
 ;
 ; CHECK-GI-LABEL: combine_v8i16_8firstundef:
 ; CHECK-GI:       // %bb.0:
-; CHECK-GI-NEXT:    // kill: def $d0 killed $d0 def $q31_q0
-; CHECK-GI-NEXT:    adrp x8, .LCPI26_0
-; CHECK-GI-NEXT:    fmov d31, d1
-; CHECK-GI-NEXT:    ldr q2, [x8, :lo12:.LCPI26_0]
-; CHECK-GI-NEXT:    tbl.16b v0, { v31, v0 }, v2
+; CHECK-GI-NEXT:    // kill: def $d0 killed $d0 def $q0
+; CHECK-GI-NEXT:    // kill: def $d1 killed $d1 def $q1
+; CHECK-GI-NEXT:    zip1.16b v0, v1, v0
 ; CHECK-GI-NEXT:    ret
   %3 = shufflevector <8 x i8> %1, <8 x i8> %0, <16 x i32> <i32 8, i32 0, i32 9, i32 1, i32 10, i32 2, i32 11, i32 3, i32 12, i32 4, i32 13, i32 5, i32 14, i32 6, i32 15, i32 undef>
   ret <16 x i8> %3
diff --git a/llvm/test/CodeGen/AArch64/fixed-vector-deinterleave.ll b/llvm/test/CodeGen/AArch64/fixed-vector-deinterleave.ll
index 4ab5db450a7f3..282e0503dd7be 100644
--- a/llvm/test/CodeGen/AArch64/fixed-vector-deinterleave.ll
+++ b/llvm/test/CodeGen/AArch64/fixed-vector-deinterleave.ll
@@ -8,9 +8,9 @@ define {<2 x half>, <2 x half>} @vector_deinterleave_v2f16_v4f16(<4 x half> %vec
 ; CHECK-SD-NEXT:    // kill: def $d0 killed $d0 def $q0
 ; CHECK-SD-NEXT:    dup v2.2s, v0.s[1]
 ; CHECK-SD-NEXT:    mov v1.16b, v2.16b
+; CHECK-SD-NEXT:    zip1 v2.4h, v0.4h, v2.4h
 ; CHECK-SD-NEXT:    mov v1.h[0], v0.h[1]
-; CHECK-SD-NEXT:    mov v0.h[1], v2.h[0]
-; CHECK-SD-NEXT:    // kill: def $d0 killed $d0 killed $q0
+; CHECK-SD-NEXT:    fmov d0, d2
 ; CHECK-SD-NEXT:    // kill: def $d1 killed $d1 killed $q1
 ; CHECK-SD-NEXT:    ret
 ;
diff --git a/llvm/test/CodeGen/AArch64/insert-extend.ll b/llvm/test/CodeGen/AArch64/insert-extend.ll
index 851fb0d03e8aa..d5a278c88743b 100644
--- a/llvm/test/CodeGen/AArch64/insert-extend.ll
+++ b/llvm/test/CodeGen/AArch64/insert-extend.ll
@@ -66,86 +66,86 @@ define i32 @large(ptr nocapture noundef readonly %p1, i32 noundef %st1, ptr noca
 ; CHECK-NEXT:    ldr d5, [x11, x9]
 ; CHECK-NEXT:    shll2 v6.4s, v0.8h, #16
 ; CHECK-NEXT:    usubl v2.8h, v2.8b, v3.8b
+; CHECK-NEXT:    shll2 v7.4s, v1.8h, #16
 ; CHECK-NEXT:    usubl v3.8h, v4.8b, v5.8b
-; CHECK-NEXT:    shll2 v4.4s, v1.8h, #16
 ; CHECK-NEXT:    saddw v0.4s, v6.4s, v0.4h
-; CHECK-NEXT:    shll2 v6.4s, v2.8h, #16
-; CHECK-NEXT:    shll2 v5.4s, v3.8h, #16
-; CHECK-NEXT:    saddw v1.4s, v4.4s, v1.4h
-; CHECK-NEXT:    rev64 v4.4s, v0.4s
-; CHECK-NEXT:    saddw v2.4s, v6.4s, v2.4h
-; CHECK-NEXT:    saddw v3.4s, v5.4s, v3.4h
-; CHECK-NEXT:    rev64 v5.4s, v1.4s
-; CHECK-NEXT:    rev64 v6.4s, v2.4s
-; CHECK-NEXT:    sub v4.4s, v0.4s, v4.4s
+; CHECK-NEXT:    shll2 v5.4s, v2.8h, #16
+; CHECK-NEXT:    saddw v1.4s, v7.4s, v1.4h
+; CHECK-NEXT:    shll2 v4.4s, v3.8h, #16
+; CHECK-NEXT:    rev64 v6.4s, v0.4s
+; CHECK-NEXT:    saddw v2.4s, v5.4s, v2.4h
+; CHECK-NEXT:    rev64 v7.4s, v1.4s
+; CHECK-NEXT:    saddw v3.4s, v4.4s, v3.4h
+; CHECK-NEXT:    rev64 v4.4s, v2.4s
+; CHECK-NEXT:    sub v6.4s, v0.4s, v6.4s
 ; CHECK-NEXT:    addp v0.4s, v1.4s, v0.4s
-; CHECK-NEXT:    rev64 v7.4s, v3.4s
-; CHECK-NEXT:    sub v5.4s, v1.4s, v5.4s
-; CHECK-NEXT:    sub v6.4s, v2.4s, v6.4s
+; CHECK-NEXT:    rev64 v5.4s, v3.4s
+; CHECK-NEXT:    sub v7.4s, v1.4s, v7.4s
+; CHECK-NEXT:    sub v4.4s, v2.4s, v4.4s
 ; CHECK-NEXT:    addp v2.4s, v3.4s, v2.4s
-; CHECK-NEXT:    zip1 v16.4s, v5.4s, v4.4s
-; CHECK-NEXT:    sub v7.4s, v3.4s, v7.4s
-; CHECK-NEXT:    zip2 v3.4s, v6.4s, v7.4s
-; CHECK-NEXT:    mov v6.s[1], v7.s[0]
-; CHECK-NEXT:    ext v7.16b, v5.16b, v16.16b, #8
-; CHECK-NEXT:    mov v5.s[3], v4.s[2]
-; CHECK-NEXT:    ext v4.16b, v2.16b, v2.16b, #8
-; CHECK-NEXT:    mov v6.d[1], v7.d[1]
+; CHECK-NEXT:    zip1 v16.4s, v7.4s, v6.4s
+; CHECK-NEXT:    sub v5.4s, v3.4s, v5.4s
+; CHECK-NEXT:    zip1 v3.4s, v4.4s, v5.4s
+; CHECK-NEXT:    zip2 v4.4s, v4.4s, v5.4s
+; CHECK-NEXT:    ext v5.16b, v7.16b, v16.16b, #8
+; CHECK-NEXT:    mov v7.s[3], v6.s[2]
+; CHECK-NEXT:    ext v6.16b, v2.16b, v2.16b, #8
 ; CHECK-NEXT:    mov v3.d[1], v5.d[1]
-; CHECK-NEXT:    uzp1 v1.4s, v4.4s, v0.4s
-; CHECK-NEXT:    uzp2 v4.4s, v4.4s, v0.4s
+; CHECK-NEXT:    mov v4.d[1], v7.d[1]
+; CHECK-NEXT:    uzp1 v1.4s, v6.4s, v0.4s
+; CHECK-NEXT:    uzp2 v5.4s, v6.4s, v0.4s
 ; CHECK-NEXT:    addp v0.4s, v2.4s, v0.4s
-; CHECK-NEXT:    add v5.4s, v3.4s, v6.4s
-; CHECK-NEXT:    sub v3.4s, v6.4s, v3.4s
+; CHECK-NEXT:    add v6.4s, v4.4s, v3.4s
+; CHECK-NEXT:    sub v3.4s, v3.4s, v4.4s
 ; CHECK-NEXT:    rev64 v7.4s, v0.4s
-; CHECK-NEXT:    sub v1.4s, v1.4s, v4.4s
-; CHECK-NEXT:    rev64 v4.4s, v5.4s
-; CHECK-NEXT:    rev64 v6.4s, v3.4s
-; CHECK-NEXT:    addp v16.4s, v0.4s, v5.4s
+; CHECK-NEXT:    sub v1.4s, v1.4s, v5.4s
+; CHECK-NEXT:    rev64 v4.4s, v6.4s
+; CHECK-NEXT:    rev64 v5.4s, v3.4s
+; CHECK-NEXT:    addp v16.4s, v0.4s, v6.4s
 ; CHECK-NEXT:    rev64 v2.4s, v1.4s
 ; CHECK-NEXT:    sub v0.4s, v0.4s, v7.4s
 ; CHECK-NEXT:    zip1 v21.4s, v16.4s, v16.4s
-; CHECK-NEXT:    sub v4.4s, v5.4s, v4.4s
-; CHECK-NEXT:    addp v5.4s, v1.4s, v3.4s
-; CHECK-NEXT:    sub v3.4s, v3.4s, v6.4s
+; CHECK-NEXT:    sub v4.4s, v6.4s, v4.4s
+; CHECK-NEXT:    addp v6.4s, v1.4s, v3.4s
+; CHECK-NEXT:    sub v3.4s, v3.4s, v5.4s
 ; CHECK-NEXT:    sub v1.4s, v1.4s, v2.4s
 ; CHECK-NEXT:    ext v7.16b, v0.16b, v16.16b, #4
 ; CHECK-NEXT:    ext v2.16b, v16.16b, v4.16b, #4
-; CHECK-NEXT:    ext v6.16b, v5.16b, v3.16b, #4
+; CHECK-NEXT:    ext v5.16b, v6.16b, v3.16b, #4
 ; CHECK-NEXT:    mov v19.16b, v4.16b
-; CHECK-NEXT:    ext v17.16b, v1.16b, v5.16b, #8
+; CHECK-NEXT:    ext v17.16b, v1.16b, v6.16b, #8
 ; CHECK-NEXT:    mov v20.16b, v3.16b
 ; CHECK-NEXT:    trn2 v0.4s, v21.4s, v0.4s
 ; CHECK-NEXT:    ext v7.16b, v7.16b, v7.16b, #4
 ; CHECK-NEXT:    mov v19.s[2], v16.s[3]
 ; CHECK-NEXT:    zip2 v2.4s, v2.4s, v16.4s
-; CHECK-NEXT:    zip2 v6.4s, v6.4s, v5.4s
-; CHECK-NEXT:    mov v20.s[2], v5.s[3]
+; CHECK-NEXT:    zip2 v5.4s, v5.4s, v6.4s
+; CHECK-NEXT:    mov v20.s[2], v6.s[3]
 ; CHECK-NEXT:    ext v18.16b, v17.16b, v1.16b, #4
-; CHECK-NEXT:    mov v1.s[2], v5.s[1]
+; CHECK-NEXT:    mov v1.s[2], v6.s[1]
 ; CHECK-NEXT:    mov v21.16b, v7.16b
 ; CHECK-NEXT:    sub v7.4s, v0.4s, v7.4s
 ; CHECK-NEXT:    ext v2.16b, v4.16b, v2.16b, #12
-; CHECK-NEXT:    ext v3.16b, v3.16b, v6.16b, #12
+; CHECK-NEXT:    ext v3.16b, v3.16b, v5.16b, #12
 ; CHECK-NEXT:    uzp2 v4.4s, v17.4s, v18.4s
-; CHECK-NEXT:    mov v6.16b, v1.16b
+; CHECK-NEXT:    mov v5.16b, v1.16b
 ; CHECK-NEXT:    mov v17.16b, v19.16b
 ; CHECK-NEXT:    mov v18.16b, v20.16b
 ; CHECK-NEXT:    mov v21.s[0], v16.s[1]
-; CHECK-NEXT:    mov v6.s[1], v5.s[0]
+; CHECK-NEXT:    mov v5.s[1], v6.s[0]
 ; CHECK-NEXT:    mov v17.s[1], v16.s[2]
 ; CHECK-NEXT:    sub v16.4s, v19.4s, v2.4s
-; CHECK-NEXT:    mov v18.s[1], v5.s[2]
+; CHECK-NEXT:    mov v18.s[1], v6.s[2]
 ; CHECK-NEXT:    sub v1.4s, v1.4s, v4.4s
-; CHECK-NEXT:    sub v5.4s, v20.4s, v3.4s
+; CHECK-NEXT:    sub v6.4s, v20.4s, v3.4s
 ; CHECK-NEXT:    add v0.4s, v0.4s, v21.4s
-; CHECK-NEXT:    add v4.4s, v6.4s, v4.4s
+; CHECK-NEXT:    add v4.4s, v5.4s, v4.4s
 ; CHECK-NEXT:    add v2.4s, v17.4s, v2.4s
 ; CHECK-NEXT:    add v3.4s, v18.4s, v3.4s
 ; CHECK-NEXT:    mov v0.d[1], v7.d[1]
 ; CHECK-NEXT:    mov v4.d[1], v1.d[1]
 ; CHECK-NEXT:    mov v2.d[1], v16.d[1]
-; CHECK-NEXT:    mov v3.d[1], v5.d[1]
+; CHECK-NEXT:    mov v3.d[1], v6.d[1]
 ; CHECK-NEXT:    cmlt v7.8h, v0.8h, #0
 ; CHECK-NEXT:    cmlt v1.8h, v4.8h, #0
 ; CHECK-NEXT:    cmlt v6.8h, v2.8h, #0
diff --git a/llvm/test/CodeGen/AArch64/insert-subvector.ll b/llvm/test/CodeGen/AArch64/insert-subvector.ll
index 6828fa9f1508c..88b6ea4f0cb19 100644
--- a/llvm/test/CodeGen/AArch64/insert-subvector.ll
+++ b/llvm/test/CodeGen/AArch64/insert-subvector.ll
@@ -102,10 +102,7 @@ define <8 x i8> @insert_v8i8_4_1(float %tmp, <8 x i8> %b, <8 x i8> %a) {
 define <8 x i8> @insert_v8i8_4_2(float %tmp, <8 x i8> %b, <8 x i8> %a) {
 ; CHECK-LABEL: insert_v8i8_4_2:
 ; CHECK:       // %bb.0:
-; CHECK-NEXT:    fmov d0, d1
-; CHECK-NEXT:    // kill: def $d2 killed $d2 def $q2
-; CHECK-NEXT:    mov v0.s[1], v2.s[0]
-; CHECK-NEXT:    // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT:    zip1 v0.2s, v1.2s, v2.2s
 ; CHECK-NEXT:    ret
   %s2 = shufflevector <8 x i8> %a, <8 x i8> %b, <8 x i32> <i32 8, i32 9, i32 10, i32 11, i32 0, i32 1, i32 2, i32 3>
   ret <8 x i8> %s2
@@ -124,8 +121,7 @@ define <16 x i8> @insert_v16i8_8_1(float %tmp, <16 x i8> %b, <16 x i8> %a) {
 define <16 x i8> @insert_v16i8_8_2(float %tmp, <16 x i8> %b, <16 x i8> %a) {
 ; CHECK-LABEL: insert_v16i8_8_2:
 ; CHECK:       // %bb.0:
-; CHECK-NEXT:    mov v0.16b, v1.16b
-; CHECK-NEXT:    mov v0.d[1], v2.d[0]
+; CHECK-NEXT:    zip1 v0.2d, v1.2d, v2.2d
 ; CHECK-NEXT:    ret
   %s2 = shufflevector <16 x i8> %a, <16 x i8> %b, <16 x i32> <i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
   ret <16 x i8> %s2
@@ -201,10 +197,7 @@ define <4 x i16> @insert_v4i16_2_1(float %tmp, <4 x i16> %b, <4 x i16> %a) {
 define <4 x i16> @insert_v4i16_2_2(float %tmp, <4 x i16> %b, <4 x i16> %a) {
 ; CHECK-LABEL: insert_v4i16_2_2:
 ; CHECK:       // %bb.0:
-; CHECK-NEXT:    fmov d0, d1
-; CHECK-NEXT:    // kill: def $d2 killed $d2 def $q2
-; CHECK-NEXT:    mov v0.s[1], v2.s[0]
-; CHECK-NEXT:    // kill: def $d0 killed $d0 killed $q0
+; CHECK-NEXT:    zip1 v0.2s, v1.2s, v2.2s
 ; CHECK-NEXT:    ret
   %s2 = shufflevector <4 x i16> %a, <4 x i16> %b, <4 x i32> <i32 4, i32 5, i32 0, i32 1>
   ret <4 x i16> %s2
@@ -223,8 +216,7 @@ define <8 x i16> @insert_v8i16_4_1(float %tmp, <8 x i16> %b, <8 x i16> %a) {
 define <8 x i16> @insert_v8i16_4_2(float %tmp, <8 x i16> %b, <8 x i16> %a) {
 ; CHECK-LABEL: insert_v8i16_4_2:
 ; CHECK:       // %bb.0:
-; CHECK-NEXT:    mov v0.16b, v1.16b
-; CHECK-NEXT:    mov v0.d[1], v2.d[0]
+; CHECK-NEXT:    zip1 v0.2d, v1.2d, v2.2d
 ; CHECK-NEXT:    ret
   %s2 = shufflevector <8 x i16> %a, <8 x i16> %b, <8 x i32> <i32 8, i32 9, i32 10, i32 11, i32 0, i32 1, i32 2, i32 3>
   ret <8 x i16> %s2
@@ -245,8 +237,7 @@ define <4 x i32> @insert_v4i32_2_1(float %tmp, <4 x i32> %b, <4 x i32> %a) {
 define <4 x i32> @insert_v4i32_2_2(float %tmp, <4 x i32> %b, <4 x i32> %a) {
 ; CHECK-LABEL: insert_v4i32_2_2:
 ; CHECK:       // %bb.0:
-; CHECK-NEXT:    mov v0.16b, v1.16b
-; CHECK-NEXT:    mov v0.d[1], v2.d[0]
+; CHECK-NEXT:    zip1 v0.2d, v1.2d, v2.2d
 ; CHECK-NEXT:    ret
   %s2 = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 4, i32 5, i32 0, i32 1>
   ret <4 x i32> %s2
@@ ...
[truncated]

Collaborator

@davemgreen davemgreen left a comment


I always thought we were or should be canonicalising shuffles so that the first index is for the LHS.

; CHECK-GI-NEXT: tbl.16b v0, { v31, v0 }, v2
; CHECK-GI-NEXT: // kill: def $d0 killed $d0 def $q0
; CHECK-GI-NEXT: // kill: def $d1 killed $d1 def $q1
; CHECK-GI-NEXT: zip1.16b v0, v1, v0
Collaborator

@davemgreen davemgreen Nov 9, 2025


I think you need to fix GISel too.

Contributor Author


Thanks very much for catching this!
I have updated matchZip in AArch64PostLegalizerLowering.cpp to flip the operands as needed.
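
A plausible shape for that GISel fix, reusing the OperandOrder out-parameter from the SelectionDAG path (a sketch; the actual commit may differ):

  if (!isZIPMask(ShuffleMask, NumElts, WhichResult, OperandOrder))
    return false;
  unsigned Opc = (WhichResult == 0) ? AArch64::G_ZIP1 : AArch64::G_ZIP2;
  Register V1 = MI.getOperand(1).getReg();
  Register V2 = MI.getOperand(2).getReg();
  // Swap the sources when the mask only matches with flipped operands.
  if (OperandOrder != 0)
    std::swap(V1, V2);
  MatchInfo = ShuffleVectorPseudo(Opc, Dst, {V1, V2});
  return true;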

@ginsbach
Contributor Author

I always thought we were or should be canonicalising shuffles so that the first index is for the LHS.

It is definitely possible to make clang -O2 -emit-llvm generate code where that is not the case (https://godbolt.org/z/Yx4TToebb):

#include <arm_neon.h>

const int8x8_t a = {0, 1, 2, 3, 4, 5, 6, 7};

int8x8_t f(int8x8_t x)
{
  return (int8x8_t) { x[0], a[0], x[1], a[1], x[2], a[2], x[3], a[3] };
}

generates

shufflevector <8 x i8> <i8 poison, i8 0, i8 poison, i8 1, i8 poison, i8 2, i8 poison, i8 3>,
              <8 x i8> %0,
              <8 x i32> <i32 8, i32 1, i32 9, i32 3, i32 10, i32 5, i32 11, i32 7>

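Canonicalising such a shuffle amounts to swapping the two operands and reflecting every defined mask index across the NumElts boundary. A standalone sketch of that remapping (LLVM has a helper of this shape on ShuffleVectorInst; this freestanding version is illustrative):

#include <vector>

// Swap the roles of the two shuffle operands: indices below NumElts move
// into the second operand's range and vice versa; -1 (undef) is untouched.
void commuteShuffleMask(std::vector<int> &Mask, int NumElts) {
  for (int &Idx : Mask)
    if (Idx >= 0)
      Idx = (Idx < NumElts) ? Idx + NumElts : Idx - NumElts;
}
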
@ginsbach ginsbach force-pushed the detect-zip-flipped-operands branch from b2ebf12 to 555d82c on November 10, 2025 20:19
@ginsbach ginsbach requested a review from davemgreen November 10, 2025 20:40
@ginsbach
Contributor Author

ginsbach commented Nov 12, 2025

@davemgreen I think I have fixed the problem you spotted with GISel (second commit). Please let me know what you think - can this go ahead?

@ginsbach ginsbach force-pushed the detect-zip-flipped-operands branch from 555d82c to 47cfd4d on November 13, 2025 21:48
ginsbach added a commit to ginsbach/llvm-project that referenced this pull request Nov 18, 2025
When the first element of a trn mask is undef, the `isTRNMask` function
assumes `WhichResult = 1`. That has a 50% chance of being wrong, so we
fail to match some valid trn1/trn2.

This patch introduces a more precise test to determine the correct value
of `WhichResult`, based on corresponding code in the `isZIPMask` and
`isUZPMask` functions.

- This change is based on llvm#89578. I'd like to follow it up with a
further change along the lines of llvm#167235.
Collaborator

@davemgreen davemgreen left a comment


I always thought we were or should be canonicalising shuffles so that the first index is for the LHS.
It is definitely possible to make clang -O2 -emit-llvm generate code where that is not the case (https://godbolt.org/z/Yx4TToebb):

Yeah, it depends on the shuffle but many are not. I was wondering if we should - to avoid the need to try and match every pattern in multiple ways. It would fix them all at once (and we could make the perfect shuffle tables smaller). There are DAG canonicalizations that get in the way though, for shuffle(x, shuffle), so this is probably OK.

return false;
Idx += 1;
if (M[i] >= 0) {
if ((unsigned)M[i] != i / 2)
Collaborator


Perhaps pull (unsigned)M[i] out into a separate variable.

Contributor Author


Done in commit 3.
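
Presumably along these lines, hoisting the casts out of the comparisons (illustrative; the committed code may differ):

  for (unsigned i = 0; i != NumElts; i += 2) {
    if (M[i] >= 0) {
      unsigned Elt = (unsigned)M[i];
      if (Elt != i / 2)
        Variant0Order0 = false;
      if (Elt != NumElts / 2 + i / 2)
        Variant1Order0 = false;
      if (Elt != NumElts + i / 2)
        Variant0Order1 = false;
      if (Elt != NumElts + NumElts / 2 + i / 2)
        Variant1Order1 = false;
    }
    // ... and likewise for M[i + 1] with the operand roles reversed.
  }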

/// <4, 12, 5, 13, 6, 14, 7, 15>
/// <4, 12, 5, 13, 6, 14, 7, 15> or
/// <8, 0, 9, 1, 10, 2, 11, 3> or
/// <12, 4, 13, 5, 14, 6, 15, 7>
Collaborator


Add an explanation of WhichResultOut and OperandOrderOut.

Contributor Author


Done in commit 4.
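
A doc comment of roughly this shape covers both out-parameters (the exact wording is whatever landed in commit 4):

/// \param WhichResultOut  Set to 0 if the mask selects the low halves
///                        (zip1), 1 if it selects the high halves (zip2).
/// \param OperandOrderOut Set to 0 if the operands are consumed in the
///                        given order (zip v0, v1), 1 if the mask only
///                        matches with the operands swapped (zip v1, v0).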

MacDue pushed a commit that referenced this pull request Nov 19, 2025
When the first element of a trn mask is undef, the `isTRNMask` function
assumes `WhichResult = 1`. That has a 50% chance of being wrong, so we
fail to match some valid trn1/trn2.

This patch introduces a more precise test to determine the correct value
of `WhichResult`, based on corresponding code in the `isZIPMask` and
`isUZPMask` functions.

- This change is based on #89578. I'd like to follow it up with a
further change along the lines of #167235.
@ginsbach ginsbach force-pushed the detect-zip-flipped-operands branch from 47cfd4d to 5c51547 on November 19, 2025 19:28
@ginsbach
Contributor Author

I always thought we were or should be canonicalising shuffles so that the first index is for the LHS.
It is definitely possible to make clang -O2 -emit-llvm generate code where that is not the case (https://godbolt.org/z/Yx4TToebb):

Yeah, it depends on the shuffle but many are not. I was wondering if we should - to avoid the need to try and match every pattern in multiple ways. It would fix them all at once (and we could make the perfect shuffle tables smaller). There are DAG canonicalizations that get in the way though, for shuffle(x, shuffle), so this is probably OK.

Thank you for having another look!

I have implemented your two suggestions in commits 3 and 4.

I had to rebase and force-push due to minor merge conflicts with #167955 in three test cases (inside @large in insert-extend.ll and @v1 and @v2 in reduce-shuffle.ll).

I don't have the full context to comment on whether we should canonicalise more. However, it's worth noting that my initial motivation for this PR was the interaction with the slp-vectorizer through getShuffleCost. So to fix that kind of use case we would have to canonicalise very early.
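
Concretely, the getShuffleCost hook in the diff above now prices the flipped mask as a cheap single-instruction shuffle. A hypothetical probe of the new interface (isZIPMask lives in an internal target header, so this fragment is illustrative only):

  unsigned Unused;
  llvm::SmallVector<int, 8> Flipped = {8, 0, 9, 1, 10, 2, 11, 3};
  // Previously false (shuffle priced as a generic permute); with this PR it
  // matches as zip1 with swapped operands and gets the cheap shuffle cost.
  bool IsZip = isZIPMask(Flipped, /*NumElts=*/8, Unused, Unused);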

@github-actions

🐧 Linux x64 Test Results

  • 186398 tests passed
  • 4862 tests skipped
