Conversation

@XChy XChy commented Sep 14, 2025

This patch implements the fold lo(X * 1) + Z --> lo(X) + Z --> X + Z iff X == lo(X).
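For reference, the intended lane semantics and the fold's precondition can be sketched with a scalar model (a hypothetical illustration, not LLVM code):

```python
# Scalar model of one 64-bit VPMADD52L lane:
#   acc + zext64(lo52(lo52(x) * lo52(y)))
LO52 = (1 << 52) - 1
U64 = (1 << 64) - 1

def vpmadd52l(acc, x, y):
    # Both multiplicands are truncated to their low 52 bits, and only the
    # low 52 bits of the product are added to the accumulator.
    return (acc + (((x & LO52) * (y & LO52)) & LO52)) & U64

# The fold: when lo52(y) == 1 and X fits in 52 bits (X == lo(X)),
# the multiply is the identity, so the node reduces to a plain add.
assert vpmadd52l(10, 7, 1) == 10 + 7
# A multiplier of (1 << 52) + 1 also has lo52 == 1, so it folds too.
assert vpmadd52l(10, 7, (1 << 52) + 1) == 10 + 7
# Negative case: if X has bits above bit 51, X != lo(X) and the fold
# would be wrong, because lo52(X) drops those bits before the multiply.
assert vpmadd52l(0, 1 << 52, 1) == 0  # lo52(1 << 52) == 0
```

The negative case is why the patch requires at least 12 known leading zero bits on the non-constant operand: in a 64-bit lane, 12 leading zeros means the value already fits in the low 52 bits.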


llvmbot commented Sep 14, 2025

@llvm/pr-subscribers-backend-x86

Author: Hongyu Chen (XChy)

Changes

This patch implements the fold lo(X * 1) + Z --> lo(X) + Z --> X + Z iff X == lo(X).


Full diff: https://github.com/llvm/llvm-project/pull/158516.diff

2 Files Affected:

  • (modified) llvm/lib/Target/X86/X86ISelLowering.cpp (+20-5)
  • (modified) llvm/test/CodeGen/X86/combine-vpmadd52.ll (+57)
diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp
index 3631016b0f5c7..00ccddc3956b4 100644
--- a/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -44958,6 +44958,7 @@ bool X86TargetLowering::SimplifyDemandedBitsForTargetNode(
   }
   case X86ISD::VPMADD52L:
   case X86ISD::VPMADD52H: {
+    KnownBits OrigKnownOp0, OrigKnownOp1;
     KnownBits KnownOp0, KnownOp1, KnownOp2;
     SDValue Op0 = Op.getOperand(0);
     SDValue Op1 = Op.getOperand(1);
@@ -44965,11 +44966,11 @@ bool X86TargetLowering::SimplifyDemandedBitsForTargetNode(
     //  Only demand the lower 52-bits of operands 0 / 1 (and all 64-bits of
     //  operand 2).
     APInt Low52Bits = APInt::getLowBitsSet(BitWidth, 52);
-    if (SimplifyDemandedBits(Op0, Low52Bits, OriginalDemandedElts, KnownOp0,
+    if (SimplifyDemandedBits(Op0, Low52Bits, OriginalDemandedElts, OrigKnownOp0,
                              TLO, Depth + 1))
       return true;
 
-    if (SimplifyDemandedBits(Op1, Low52Bits, OriginalDemandedElts, KnownOp1,
+    if (SimplifyDemandedBits(Op1, Low52Bits, OriginalDemandedElts, OrigKnownOp1,
                              TLO, Depth + 1))
       return true;
 
@@ -44978,19 +44979,33 @@ bool X86TargetLowering::SimplifyDemandedBitsForTargetNode(
       return true;
 
     KnownBits KnownMul;
-    KnownOp0 = KnownOp0.trunc(52);
-    KnownOp1 = KnownOp1.trunc(52);
+    KnownOp0 = OrigKnownOp0.trunc(52);
+    KnownOp1 = OrigKnownOp1.trunc(52);
     KnownMul = Opc == X86ISD::VPMADD52L ? KnownBits::mul(KnownOp0, KnownOp1)
                                         : KnownBits::mulhu(KnownOp0, KnownOp1);
     KnownMul = KnownMul.zext(64);
 
+    SDLoc DL(Op);
     // lo/hi(X * Y) + Z --> C + Z
     if (KnownMul.isConstant()) {
-      SDLoc DL(Op);
       SDValue C = TLO.DAG.getConstant(KnownMul.getConstant(), DL, VT);
       return TLO.CombineTo(Op, TLO.DAG.getNode(ISD::ADD, DL, VT, C, Op2));
     }
 
+    // C * X --> X * C
+    if (KnownOp0.isConstant()) {
+      std::swap(OrigKnownOp0, OrigKnownOp1);
+      std::swap(KnownOp0, KnownOp1);
+      std::swap(Op0, Op1);
+    }
+
+    // lo(X * 1) + Z --> lo(X) + Z --> X iff X == lo(X)
+    if (Opc == X86ISD::VPMADD52L && KnownOp1.isConstant() &&
+        KnownOp1.getConstant().isOne() &&
+        OrigKnownOp0.countMinLeadingZeros() >= 12) {
+      return TLO.CombineTo(Op, TLO.DAG.getNode(ISD::ADD, DL, VT, Op0, Op2));
+    }
+
     Known = KnownBits::add(KnownMul, KnownOp2);
     return false;
   }
diff --git a/llvm/test/CodeGen/X86/combine-vpmadd52.ll b/llvm/test/CodeGen/X86/combine-vpmadd52.ll
index 2cb060ea92b14..8b741e9ef9482 100644
--- a/llvm/test/CodeGen/X86/combine-vpmadd52.ll
+++ b/llvm/test/CodeGen/X86/combine-vpmadd52.ll
@@ -398,3 +398,60 @@ define <2 x i64> @test3_knownbits_vpmadd52h_negative(<2 x i64> %x0, <2 x i64> %x
   %ret = and <2 x i64> %madd, splat (i64 1)
   ret <2 x i64> %ret
 }
+
+define <2 x i64> @test_vpmadd52l_mul_one(<2 x i64> %x0, <2 x i32> %x1) {
+; CHECK-LABEL: test_vpmadd52l_mul_one:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vpmovzxdq {{.*#+}} xmm1 = xmm1[0],zero,xmm1[1],zero
+; CHECK-NEXT:    vpaddq %xmm0, %xmm1, %xmm0
+; CHECK-NEXT:    retq
+  %ext = zext <2 x i32> %x1 to <2 x i64>
+  %ifma = call <2 x i64> @llvm.x86.avx512.vpmadd52l.uq.128(<2 x i64> %x0, <2 x i64> splat(i64 1), <2 x i64> %ext)
+  ret <2 x i64> %ifma
+}
+
+define <2 x i64> @test_vpmadd52l_mul_one_commuted(<2 x i64> %x0, <2 x i32> %x1) {
+; CHECK-LABEL: test_vpmadd52l_mul_one_commuted:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vpmovzxdq {{.*#+}} xmm1 = xmm1[0],zero,xmm1[1],zero
+; CHECK-NEXT:    vpaddq %xmm0, %xmm1, %xmm0
+; CHECK-NEXT:    retq
+  %ext = zext <2 x i32> %x1 to <2 x i64>
+  %ifma = call <2 x i64> @llvm.x86.avx512.vpmadd52l.uq.128(<2 x i64> %x0, <2 x i64> %ext, <2 x i64> splat(i64 1))
+  ret <2 x i64> %ifma
+}
+
+define <2 x i64> @test_vpmadd52l_mul_one_no_mask(<2 x i64> %x0, <2 x i64> %x1) {
+; AVX512-LABEL: test_vpmadd52l_mul_one_no_mask:
+; AVX512:       # %bb.0:
+; AVX512-NEXT:    vpmadd52luq {{\.?LCPI[0-9]+_[0-9]+}}(%rip){1to2}, %xmm1, %xmm0
+; AVX512-NEXT:    retq
+;
+; AVX-LABEL: test_vpmadd52l_mul_one_no_mask:
+; AVX:       # %bb.0:
+; AVX-NEXT:    {vex} vpmadd52luq {{\.?LCPI[0-9]+_[0-9]+}}(%rip), %xmm1, %xmm0
+; AVX-NEXT:    retq
+  %ifma = call <2 x i64> @llvm.x86.avx512.vpmadd52l.uq.128(<2 x i64> %x0, <2 x i64> splat(i64 1), <2 x i64> %x1)
+  ret <2 x i64> %ifma
+}
+
+; Mul by (1 << 52) + 1
+define <2 x i64> @test_vpmadd52l_mul_one_in_52bits(<2 x i64> %x0, <2 x i32> %x1) {
+; CHECK-LABEL: test_vpmadd52l_mul_one_in_52bits:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vpmovzxdq {{.*#+}} xmm1 = xmm1[0],zero,xmm1[1],zero
+; CHECK-NEXT:    vpaddq %xmm0, %xmm1, %xmm0
+; CHECK-NEXT:    retq
+  %ext = zext <2 x i32> %x1 to <2 x i64>
+  %ifma = call <2 x i64> @llvm.x86.avx512.vpmadd52l.uq.128(<2 x i64> %x0, <2 x i64> splat(i64 4503599627370497), <2 x i64> %ext)
+  ret <2 x i64> %ifma
+}
+
+; lo(x1) * 1 = lo(x1), so the high 52 bits of the product are still zero.
+define <2 x i64> @test_vpmadd52h_mul_one(<2 x i64> %x0, <2 x i64> %x1) {
+; CHECK-LABEL: test_vpmadd52h_mul_one:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    retq
+  %ifma = call <2 x i64> @llvm.x86.avx512.vpmadd52h.uq.128(<2 x i64> %x0, <2 x i64> splat(i64 1), <2 x i64> %x1)
+  ret <2 x i64> %ifma
+}

@RKSimon RKSimon changed the title [X86] Fold X * 1 + Z --> X + Z for VPADD52L [X86] Fold X * 1 + Z --> X + Z for VPMADD52L Sep 15, 2025

@RKSimon RKSimon left a comment


I'd be tempted to handle all this in combineVPMADD52LH where it should be cheaper:

  • Canonicalize constant multiplicands to the RHS
  • Handle the lo(X * 1) + Z --> lo(X) + Z pattern iff X == lo(X) - nothing about it relies on SimplifyDemandedBits
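The structure suggested here can be sketched roughly as follows (hypothetical operand representation; a real DAG combine in combineVPMADD52LH would work on SDValues and KnownBits, not these toy objects):

```python
from dataclasses import dataclass
from typing import Optional

LO52 = (1 << 52) - 1

@dataclass
class Operand:
    name: str
    const: Optional[int] = None     # known constant value, if any
    min_leading_zeros: int = 0      # known-zero bits at the top of the 64-bit lane

def combine_vpmadd52l(op0, op1, acc):
    # Step 1: canonicalize a constant multiplicand to the RHS (C * X --> X * C),
    # so later folds only need to match the constant on one side.
    if op0.const is not None and op1.const is None:
        op0, op1 = op1, op0
    # Step 2: lo(X * 1) + Z --> X + Z, valid iff X == lo(X); with >= 12 known
    # leading zero bits, X already fits in the low 52 bits of the lane.
    if (op1.const is not None and (op1.const & LO52) == 1
            and op0.min_leading_zeros >= 12):
        return ("add", op0.name, acc.name)
    return ("vpmadd52l", op0.name, op1.name, acc.name)
```

Neither step needs demanded-bits information beyond what computeKnownBits already provides, which is the point of moving it out of SimplifyDemandedBitsForTargetNode.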


XChy commented Sep 19, 2025

Gently ping. @RKSimon


@RKSimon RKSimon left a comment


LGTM - cheers

@XChy XChy merged commit fba55c8 into llvm:main Sep 19, 2025
9 checks passed