
[AMDGPU] Fold uniform readfirstlane + cndmask #70188

Closed
wants to merge 4 commits

Conversation

Pierre-vh (Contributor) commented Oct 25, 2023

Teach SIFoldOperands to fold CNDMASK + READFIRSTLANE + S_CSELECT

(Alternative patch for llvm#69703)

Teach SIFoldOperands to fold the a/zext DAGISel pattern that always emits a CNDMASK + READFIRSTLANE pair, even for uniform comparisons.

Fixes llvm#59869
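
The replacement leans on how S_BFE encodes its bitfield operand: the field offset lives in the low bits and the field width in bits [22:16], so 0x10000 means offset 0, width 1, i.e. extract bit 0 and zero- or sign-extend it. Below is a minimal standalone C++ model of the two variants (the helper names are made up for illustration; this is not code from the patch):

#include <cassert>
#include <cstdint>

// Models S_BFE_U32: zero-extend a Width-bit field starting at Offset.
// Packed operand: offset in bits [4:0], width in bits [22:16].
uint32_t s_bfe_u32(uint32_t Src, uint32_t Packed) {
  uint32_t Offset = Packed & 0x1f;
  uint32_t Width = (Packed >> 16) & 0x7f;
  if (Width == 0)
    return 0;
  uint32_t Mask = Width >= 32 ? ~0u : (1u << Width) - 1;
  return (Src >> Offset) & Mask;
}

// Models S_BFE_I32: same field, sign-extended (assumes Offset + Width <= 32).
int32_t s_bfe_i32(uint32_t Src, uint32_t Packed) {
  uint32_t Offset = Packed & 0x1f;
  uint32_t Width = (Packed >> 16) & 0x7f;
  if (Width == 0)
    return 0;
  // Shift the field up to the sign bit, then arithmetic-shift back down.
  return (int32_t)(Src << (32 - Offset - Width)) >> (32 - Width);
}

int main() {
  assert(s_bfe_u32(0xff, 0x10000) == 1);  // zext of bit 0
  assert(s_bfe_i32(0xff, 0x10000) == -1); // sext of bit 0
}

The fold below always uses 0x10000 because the cndmask result is only ever 0 or +-1, so bit 0 carries the whole value.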
llvmbot (Collaborator) commented Oct 25, 2023

@llvm/pr-subscribers-backend-amdgpu

Author: Pierre van Houtryve (Pierre-vh)

Changes

(Alternative patch for #69703)

Teach SIFoldOperands to fold the a/zext DAGISel pattern that always emits a CNDMASK + READFIRSTLANE pair, even for uniform comparisons.

Fixes #59869


Patch is 43.05 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/70188.diff

5 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/SIFoldOperands.cpp (+88)
  • (modified) llvm/test/CodeGen/AMDGPU/fcopysign.f16.ll (+36-41)
  • (modified) llvm/test/CodeGen/AMDGPU/fptrunc.ll (+64-69)
  • (added) llvm/test/CodeGen/AMDGPU/si-fold-readfirstlane-cndmask-w32.mir (+241)
  • (added) llvm/test/CodeGen/AMDGPU/si-fold-readfirstlane-cndmask-w64.mir (+241)
diff --git a/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp b/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp
index 1ebfa297f4fc339..d4c652eda715b80 100644
--- a/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp
+++ b/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp
@@ -104,6 +104,7 @@ class SIFoldOperands : public MachineFunctionPass {
   bool foldInstOperand(MachineInstr &MI, MachineOperand &OpToFold) const;
   bool tryFoldFoldableCopy(MachineInstr &MI,
                            MachineOperand *&CurrentKnownM0Val) const;
+  bool tryFoldUniformReadFirstLaneCndMask(MachineInstr &MI) const;
 
   const MachineOperand *isClamp(const MachineInstr &MI) const;
   bool tryFoldClamp(MachineInstr &MI);
@@ -1400,6 +1401,88 @@ bool SIFoldOperands::tryFoldFoldableCopy(
   return Changed;
 }
 
+// Try to fold the following pattern:
+//    s_cselect s[2:3], K, 0          ; K has LSB set. Usually it's +-1.
+//    v_cndmask v0, 0, +-1, s[2:3]
+//    v_readfirstlane s0, v0
+//
+// into (for example)
+//
+//    s_cselect s[2:3], K, 0
+//    s_bfe_u64 s0, s[2:3], 0x10000
+bool SIFoldOperands::tryFoldUniformReadFirstLaneCndMask(
+    MachineInstr &MI) const {
+  if (MI.getOpcode() != AMDGPU::V_READFIRSTLANE_B32)
+    return false;
+
+  MachineInstr *RFLSrc = MRI->getVRegDef(MI.getOperand(1).getReg());
+  // We can also have the following pattern:
+  //
+  // %2:vreg_64 = REG_SEQUENCE %X:vgpr_32, sub0, %1:sreg_32, sub1
+  // %3:sgpr_32 = V_READFIRSTLANE_B32 %2.sub0:vreg_64
+  //
+  // In this case we dig into %X or %1, depending on which subregister
+  // the V_READFIRSTLANE accesses.
+  if (RFLSrc->isRegSequence()) {
+    unsigned RFLSubReg = MI.getOperand(1).getSubReg();
+    if (RFLSrc->getNumOperands() != 5)
+      return false;
+
+    if (RFLSrc->getOperand(2).getImm() == RFLSubReg)
+      RFLSrc = MRI->getVRegDef(RFLSrc->getOperand(1).getReg());
+    else if (RFLSrc->getOperand(4).getImm() == RFLSubReg)
+      RFLSrc = MRI->getVRegDef(RFLSrc->getOperand(3).getReg());
+    else
+      return false;
+  }
+
+  // Need the e64 variant so the mask operand is an SGPR.
+  if (!RFLSrc || RFLSrc->getOpcode() != AMDGPU::V_CNDMASK_B32_e64)
+    return false;
+
+  MachineOperand *Src0 = TII->getNamedOperand(*RFLSrc, AMDGPU::OpName::src0);
+  MachineOperand *Src1 = TII->getNamedOperand(*RFLSrc, AMDGPU::OpName::src1);
+  Register Src2 = TII->getNamedOperand(*RFLSrc, AMDGPU::OpName::src2)->getReg();
+
+  if (!Src0->isImm() || Src0->getImm() != 0 || !Src1->isImm())
+    return false;
+
+  // This pattern usually comes from an ext; sext uses -1.
+  bool IsSigned = false;
+  if (Src1->getImm() == -1)
+    IsSigned = true;
+  else if (Src1->getImm() != 1)
+    return false;
+
+  MachineInstr *CSel = MRI->getVRegDef(Src2);
+  if (!CSel || (CSel->getOpcode() != AMDGPU::S_CSELECT_B32 &&
+                CSel->getOpcode() != AMDGPU::S_CSELECT_B64))
+    return false;
+
+  MachineOperand *CSelSrc0 = TII->getNamedOperand(*CSel, AMDGPU::OpName::src0);
+  MachineOperand *CSelSrc1 = TII->getNamedOperand(*CSel, AMDGPU::OpName::src1);
+  // Note: we could also allow any non-zero value for CSelSrc0, and adapt the
+  // BFE's mask depending on where the first set bit is.
+  if (!CSelSrc0->isImm() || (CSelSrc0->getImm() & 1) == 0 ||
+      !CSelSrc1->isImm() || CSelSrc1->getImm() != 0)
+    return false;
+
+  // Replace the V_CNDMASK with S_BFE.
+  unsigned BFEOpc = (IsSigned ? AMDGPU::S_BFE_I32 : AMDGPU::S_BFE_U32);
+
+  // If the CSELECT writes to a 64 bit SGPR, only pick the low bits.
+  unsigned SubReg = 0;
+  if (CSel->getOpcode() == AMDGPU::S_CSELECT_B64)
+    SubReg = AMDGPU::sub0;
+
+  BuildMI(*MI.getParent(), MI, MI.getDebugLoc(), TII->get(BFEOpc),
+          MI.getOperand(0).getReg())
+      .addReg(Src2, /*Flags*/ 0, SubReg)
+      .addImm(0x10000);
+  MI.eraseFromParent();
+  return true;
+}
+
 // Clamp patterns are canonically selected to v_max_* instructions, so only
 // handle them.
 const MachineOperand *SIFoldOperands::isClamp(const MachineInstr &MI) const {
@@ -2087,6 +2170,11 @@ bool SIFoldOperands::runOnMachineFunction(MachineFunction &MF) {
         continue;
       }
 
+      if (tryFoldUniformReadFirstLaneCndMask(MI)) {
+        Changed = true;
+        continue;
+      }
+
       // Saw an unknown clobber of m0, so we no longer know what it is.
       if (CurrentKnownM0Val && MI.modifiesRegister(AMDGPU::M0, TRI))
         CurrentKnownM0Val = nullptr;
diff --git a/llvm/test/CodeGen/AMDGPU/fcopysign.f16.ll b/llvm/test/CodeGen/AMDGPU/fcopysign.f16.ll
index 667c561ea26f6f6..52e356a565a5b48 100644
--- a/llvm/test/CodeGen/AMDGPU/fcopysign.f16.ll
+++ b/llvm/test/CodeGen/AMDGPU/fcopysign.f16.ll
@@ -1536,8 +1536,7 @@ define amdgpu_kernel void @s_copysign_out_f16_mag_f64_sign_f16(ptr addrspace(1)
 ; SI-NEXT:    s_or_b32 s2, s5, s2
 ; SI-NEXT:    s_cmp_lg_u32 s2, 0
 ; SI-NEXT:    s_cselect_b64 s[4:5], -1, 0
-; SI-NEXT:    v_cndmask_b32_e64 v1, 0, 1, s[4:5]
-; SI-NEXT:    v_readfirstlane_b32 s2, v1
+; SI-NEXT:    s_bfe_u32 s2, s4, 0x10000
 ; SI-NEXT:    s_bfe_u32 s5, s3, 0xb0014
 ; SI-NEXT:    s_or_b32 s2, s6, s2
 ; SI-NEXT:    s_sub_i32 s6, 0x3f1, s5
@@ -1599,8 +1598,7 @@ define amdgpu_kernel void @s_copysign_out_f16_mag_f64_sign_f16(ptr addrspace(1)
 ; VI-NEXT:    s_or_b32 s0, s1, s6
 ; VI-NEXT:    s_cmp_lg_u32 s0, 0
 ; VI-NEXT:    s_cselect_b64 s[0:1], -1, 0
-; VI-NEXT:    v_cndmask_b32_e64 v2, 0, 1, s[0:1]
-; VI-NEXT:    v_readfirstlane_b32 s0, v2
+; VI-NEXT:    s_bfe_u32 s0, s0, 0x10000
 ; VI-NEXT:    s_bfe_u32 s1, s7, 0xb0014
 ; VI-NEXT:    v_mov_b32_e32 v0, s4
 ; VI-NEXT:    s_or_b32 s4, s2, s0
@@ -1661,8 +1659,7 @@ define amdgpu_kernel void @s_copysign_out_f16_mag_f64_sign_f16(ptr addrspace(1)
 ; GFX9-NEXT:    s_or_b32 s0, s1, s6
 ; GFX9-NEXT:    s_cmp_lg_u32 s0, 0
 ; GFX9-NEXT:    s_cselect_b64 s[0:1], -1, 0
-; GFX9-NEXT:    v_cndmask_b32_e64 v1, 0, 1, s[0:1]
-; GFX9-NEXT:    v_readfirstlane_b32 s0, v1
+; GFX9-NEXT:    s_bfe_u32 s0, s0, 0x10000
 ; GFX9-NEXT:    s_bfe_u32 s1, s7, 0xb0014
 ; GFX9-NEXT:    s_or_b32 s6, s2, s0
 ; GFX9-NEXT:    s_sub_i32 s2, 0x3f1, s1
@@ -1714,6 +1711,7 @@ define amdgpu_kernel void @s_copysign_out_f16_mag_f64_sign_f16(ptr addrspace(1)
 ; GFX11-NEXT:    s_clause 0x1
 ; GFX11-NEXT:    s_load_b128 s[4:7], s[0:1], 0x24
 ; GFX11-NEXT:    s_load_b32 s0, s[0:1], 0x34
+; GFX11-NEXT:    v_mov_b32_e32 v1, 0
 ; GFX11-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX11-NEXT:    s_and_b32 s1, s7, 0x1ff
 ; GFX11-NEXT:    s_lshr_b32 s2, s7, 8
@@ -1721,48 +1719,45 @@ define amdgpu_kernel void @s_copysign_out_f16_mag_f64_sign_f16(ptr addrspace(1)
 ; GFX11-NEXT:    s_and_b32 s2, s2, 0xffe
 ; GFX11-NEXT:    s_cmp_lg_u32 s1, 0
 ; GFX11-NEXT:    s_cselect_b32 s1, -1, 0
-; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
-; GFX11-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s1
-; GFX11-NEXT:    s_bfe_u32 s1, s7, 0xb0014
-; GFX11-NEXT:    s_sub_i32 s3, 0x3f1, s1
-; GFX11-NEXT:    s_addk_i32 s1, 0xfc10
-; GFX11-NEXT:    v_med3_i32 v1, s3, 0, 13
-; GFX11-NEXT:    v_readfirstlane_b32 s3, v0
-; GFX11-NEXT:    s_lshl_b32 s8, s1, 12
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_2)
-; GFX11-NEXT:    v_readfirstlane_b32 s6, v1
-; GFX11-NEXT:    s_or_b32 s2, s2, s3
-; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
-; GFX11-NEXT:    s_or_b32 s3, s2, 0x1000
-; GFX11-NEXT:    s_or_b32 s8, s2, s8
-; GFX11-NEXT:    s_lshr_b32 s6, s3, s6
-; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_1) | instid1(VALU_DEP_2)
-; GFX11-NEXT:    v_lshlrev_b32_e64 v0, v1, s6
-; GFX11-NEXT:    v_mov_b32_e32 v1, 0
-; GFX11-NEXT:    v_cmp_ne_u32_e32 vcc_lo, s3, v0
+; GFX11-NEXT:    s_bfe_u32 s3, s7, 0xb0014
+; GFX11-NEXT:    s_bfe_u32 s1, s1, 0x10000
+; GFX11-NEXT:    s_sub_i32 s6, 0x3f1, s3
+; GFX11-NEXT:    s_or_b32 s1, s2, s1
+; GFX11-NEXT:    v_med3_i32 v0, s6, 0, 13
+; GFX11-NEXT:    s_or_b32 s2, s1, 0x1000
+; GFX11-NEXT:    s_addk_i32 s3, 0xfc10
+; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX11-NEXT:    s_lshl_b32 s8, s3, 12
+; GFX11-NEXT:    v_readfirstlane_b32 s6, v0
+; GFX11-NEXT:    s_or_b32 s8, s1, s8
+; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(SALU_CYCLE_1)
+; GFX11-NEXT:    s_lshr_b32 s6, s2, s6
+; GFX11-NEXT:    v_lshlrev_b32_e64 v0, v0, s6
+; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX11-NEXT:    v_cmp_ne_u32_e32 vcc_lo, s2, v0
 ; GFX11-NEXT:    v_cndmask_b32_e64 v0, 0, 1, vcc_lo
-; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
-; GFX11-NEXT:    v_readfirstlane_b32 s3, v0
-; GFX11-NEXT:    s_or_b32 s3, s6, s3
-; GFX11-NEXT:    s_cmp_lt_i32 s1, 1
-; GFX11-NEXT:    s_cselect_b32 s3, s3, s8
-; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(SALU_CYCLE_1)
-; GFX11-NEXT:    s_and_b32 s6, s3, 7
+; GFX11-NEXT:    v_readfirstlane_b32 s2, v0
+; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_2) | instid1(SALU_CYCLE_1)
+; GFX11-NEXT:    s_or_b32 s2, s6, s2
+; GFX11-NEXT:    s_cmp_lt_i32 s3, 1
+; GFX11-NEXT:    s_cselect_b32 s2, s2, s8
+; GFX11-NEXT:    s_and_b32 s6, s2, 7
+; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1)
 ; GFX11-NEXT:    s_cmp_gt_i32 s6, 5
 ; GFX11-NEXT:    s_cselect_b32 s8, -1, 0
 ; GFX11-NEXT:    s_cmp_eq_u32 s6, 3
 ; GFX11-NEXT:    s_cselect_b32 s6, -1, 0
-; GFX11-NEXT:    s_lshr_b32 s3, s3, 2
+; GFX11-NEXT:    s_lshr_b32 s2, s2, 2
 ; GFX11-NEXT:    s_or_b32 s6, s6, s8
 ; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1)
 ; GFX11-NEXT:    s_cmp_lg_u32 s6, 0
-; GFX11-NEXT:    s_addc_u32 s3, s3, 0
-; GFX11-NEXT:    s_cmp_lt_i32 s1, 31
-; GFX11-NEXT:    s_cselect_b32 s3, s3, 0x7c00
-; GFX11-NEXT:    s_cmp_lg_u32 s2, 0
-; GFX11-NEXT:    s_cselect_b32 s2, -1, 0
-; GFX11-NEXT:    s_cmpk_eq_i32 s1, 0x40f
-; GFX11-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s2
+; GFX11-NEXT:    s_addc_u32 s2, s2, 0
+; GFX11-NEXT:    s_cmp_lt_i32 s3, 31
+; GFX11-NEXT:    s_cselect_b32 s2, s2, 0x7c00
+; GFX11-NEXT:    s_cmp_lg_u32 s1, 0
+; GFX11-NEXT:    s_cselect_b32 s1, -1, 0
+; GFX11-NEXT:    s_cmpk_eq_i32 s3, 0x40f
+; GFX11-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s1
 ; GFX11-NEXT:    s_cselect_b32 vcc_lo, -1, 0
 ; GFX11-NEXT:    s_lshr_b32 s1, s7, 16
 ; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
@@ -1770,7 +1765,7 @@ define amdgpu_kernel void @s_copysign_out_f16_mag_f64_sign_f16(ptr addrspace(1)
 ; GFX11-NEXT:    v_lshlrev_b32_e32 v0, 9, v0
 ; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
 ; GFX11-NEXT:    v_or_b32_e32 v0, 0x7c00, v0
-; GFX11-NEXT:    v_cndmask_b32_e32 v0, s3, v0, vcc_lo
+; GFX11-NEXT:    v_cndmask_b32_e32 v0, s2, v0, vcc_lo
 ; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
 ; GFX11-NEXT:    v_or_b32_e32 v0, s1, v0
 ; GFX11-NEXT:    v_bfi_b32 v0, 0x7fff, v0, s0
diff --git a/llvm/test/CodeGen/AMDGPU/fptrunc.ll b/llvm/test/CodeGen/AMDGPU/fptrunc.ll
index 97216b6c94693c4..9785229c381c556 100644
--- a/llvm/test/CodeGen/AMDGPU/fptrunc.ll
+++ b/llvm/test/CodeGen/AMDGPU/fptrunc.ll
@@ -111,12 +111,11 @@ define amdgpu_kernel void @fptrunc_f64_to_f16(ptr addrspace(1) %out, double %in)
 ; SI-NEXT:    s_or_b32 s4, s5, s6
 ; SI-NEXT:    s_cmp_lg_u32 s4, 0
 ; SI-NEXT:    s_cselect_b64 s[4:5], -1, 0
-; SI-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s[4:5]
-; SI-NEXT:    s_bfe_u32 s4, s7, 0xb0014
-; SI-NEXT:    v_readfirstlane_b32 s5, v0
-; SI-NEXT:    s_sub_i32 s6, 0x3f1, s4
-; SI-NEXT:    s_add_i32 s10, s4, 0xfffffc10
-; SI-NEXT:    s_or_b32 s11, s8, s5
+; SI-NEXT:    s_bfe_u32 s5, s7, 0xb0014
+; SI-NEXT:    s_bfe_u32 s4, s4, 0x10000
+; SI-NEXT:    s_sub_i32 s6, 0x3f1, s5
+; SI-NEXT:    s_add_i32 s10, s5, 0xfffffc10
+; SI-NEXT:    s_or_b32 s11, s8, s4
 ; SI-NEXT:    v_med3_i32 v0, s6, 0, 13
 ; SI-NEXT:    s_lshl_b32 s4, s10, 12
 ; SI-NEXT:    s_or_b32 s5, s11, 0x1000
@@ -171,8 +170,7 @@ define amdgpu_kernel void @fptrunc_f64_to_f16(ptr addrspace(1) %out, double %in)
 ; VI-SAFE-SDAG-NEXT:    s_cmp_lg_u32 s4, 0
 ; VI-SAFE-SDAG-NEXT:    s_mov_b32 s1, s5
 ; VI-SAFE-SDAG-NEXT:    s_cselect_b64 s[4:5], -1, 0
-; VI-SAFE-SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s[4:5]
-; VI-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s4, v0
+; VI-SAFE-SDAG-NEXT:    s_bfe_u32 s4, s4, 0x10000
 ; VI-SAFE-SDAG-NEXT:    s_bfe_u32 s5, s7, 0xb0014
 ; VI-SAFE-SDAG-NEXT:    s_or_b32 s6, s8, s4
 ; VI-SAFE-SDAG-NEXT:    s_sub_i32 s8, 0x3f1, s5
@@ -299,47 +297,46 @@ define amdgpu_kernel void @fptrunc_f64_to_f16(ptr addrspace(1) %out, double %in)
 ; GFX10-SAFE-SDAG-NEXT:    s_and_b32 s4, s5, 0xffe
 ; GFX10-SAFE-SDAG-NEXT:    s_cmp_lg_u32 s2, 0
 ; GFX10-SAFE-SDAG-NEXT:    s_cselect_b32 s2, -1, 0
-; GFX10-SAFE-SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s2
-; GFX10-SAFE-SDAG-NEXT:    s_bfe_u32 s2, s3, 0xb0014
-; GFX10-SAFE-SDAG-NEXT:    s_sub_i32 s5, 0x3f1, s2
-; GFX10-SAFE-SDAG-NEXT:    s_addk_i32 s2, 0xfc10
-; GFX10-SAFE-SDAG-NEXT:    v_med3_i32 v1, s5, 0, 13
-; GFX10-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s5, v0
-; GFX10-SAFE-SDAG-NEXT:    s_lshl_b32 s7, s2, 12
-; GFX10-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s6, v1
-; GFX10-SAFE-SDAG-NEXT:    s_or_b32 s4, s4, s5
-; GFX10-SAFE-SDAG-NEXT:    s_or_b32 s5, s4, 0x1000
-; GFX10-SAFE-SDAG-NEXT:    s_or_b32 s7, s4, s7
-; GFX10-SAFE-SDAG-NEXT:    s_lshr_b32 s6, s5, s6
-; GFX10-SAFE-SDAG-NEXT:    v_lshlrev_b32_e64 v0, v1, s6
-; GFX10-SAFE-SDAG-NEXT:    v_cmp_ne_u32_e32 vcc_lo, s5, v0
+; GFX10-SAFE-SDAG-NEXT:    s_bfe_u32 s5, s3, 0xb0014
+; GFX10-SAFE-SDAG-NEXT:    s_bfe_u32 s2, s2, 0x10000
+; GFX10-SAFE-SDAG-NEXT:    s_sub_i32 s6, 0x3f1, s5
+; GFX10-SAFE-SDAG-NEXT:    s_or_b32 s2, s4, s2
+; GFX10-SAFE-SDAG-NEXT:    v_med3_i32 v0, s6, 0, 13
+; GFX10-SAFE-SDAG-NEXT:    s_or_b32 s4, s2, 0x1000
+; GFX10-SAFE-SDAG-NEXT:    s_addk_i32 s5, 0xfc10
+; GFX10-SAFE-SDAG-NEXT:    s_lshl_b32 s7, s5, 12
+; GFX10-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s6, v0
+; GFX10-SAFE-SDAG-NEXT:    s_or_b32 s7, s2, s7
+; GFX10-SAFE-SDAG-NEXT:    s_lshr_b32 s6, s4, s6
+; GFX10-SAFE-SDAG-NEXT:    v_lshlrev_b32_e64 v0, v0, s6
+; GFX10-SAFE-SDAG-NEXT:    v_cmp_ne_u32_e32 vcc_lo, s4, v0
 ; GFX10-SAFE-SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, vcc_lo
-; GFX10-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s5, v0
-; GFX10-SAFE-SDAG-NEXT:    s_or_b32 s5, s6, s5
-; GFX10-SAFE-SDAG-NEXT:    s_cmp_lt_i32 s2, 1
-; GFX10-SAFE-SDAG-NEXT:    s_cselect_b32 s5, s5, s7
-; GFX10-SAFE-SDAG-NEXT:    s_and_b32 s6, s5, 7
+; GFX10-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s4, v0
+; GFX10-SAFE-SDAG-NEXT:    s_or_b32 s4, s6, s4
+; GFX10-SAFE-SDAG-NEXT:    s_cmp_lt_i32 s5, 1
+; GFX10-SAFE-SDAG-NEXT:    s_cselect_b32 s4, s4, s7
+; GFX10-SAFE-SDAG-NEXT:    s_and_b32 s6, s4, 7
 ; GFX10-SAFE-SDAG-NEXT:    s_cmp_gt_i32 s6, 5
 ; GFX10-SAFE-SDAG-NEXT:    s_cselect_b32 s7, -1, 0
 ; GFX10-SAFE-SDAG-NEXT:    s_cmp_eq_u32 s6, 3
 ; GFX10-SAFE-SDAG-NEXT:    s_cselect_b32 s6, -1, 0
-; GFX10-SAFE-SDAG-NEXT:    s_lshr_b32 s5, s5, 2
+; GFX10-SAFE-SDAG-NEXT:    s_lshr_b32 s4, s4, 2
 ; GFX10-SAFE-SDAG-NEXT:    s_or_b32 s6, s6, s7
 ; GFX10-SAFE-SDAG-NEXT:    s_cmp_lg_u32 s6, 0
-; GFX10-SAFE-SDAG-NEXT:    s_addc_u32 s5, s5, 0
-; GFX10-SAFE-SDAG-NEXT:    s_cmp_lt_i32 s2, 31
-; GFX10-SAFE-SDAG-NEXT:    s_cselect_b32 s5, s5, 0x7c00
-; GFX10-SAFE-SDAG-NEXT:    s_cmp_lg_u32 s4, 0
-; GFX10-SAFE-SDAG-NEXT:    s_cselect_b32 s4, -1, 0
-; GFX10-SAFE-SDAG-NEXT:    s_cmpk_eq_i32 s2, 0x40f
-; GFX10-SAFE-SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s4
+; GFX10-SAFE-SDAG-NEXT:    s_addc_u32 s4, s4, 0
+; GFX10-SAFE-SDAG-NEXT:    s_cmp_lt_i32 s5, 31
+; GFX10-SAFE-SDAG-NEXT:    s_cselect_b32 s4, s4, 0x7c00
+; GFX10-SAFE-SDAG-NEXT:    s_cmp_lg_u32 s2, 0
+; GFX10-SAFE-SDAG-NEXT:    s_cselect_b32 s2, -1, 0
+; GFX10-SAFE-SDAG-NEXT:    s_cmpk_eq_i32 s5, 0x40f
+; GFX10-SAFE-SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s2
 ; GFX10-SAFE-SDAG-NEXT:    s_cselect_b32 vcc_lo, -1, 0
 ; GFX10-SAFE-SDAG-NEXT:    s_lshr_b32 s2, s3, 16
 ; GFX10-SAFE-SDAG-NEXT:    s_mov_b32 s3, 0x31016000
 ; GFX10-SAFE-SDAG-NEXT:    s_and_b32 s2, s2, 0x8000
 ; GFX10-SAFE-SDAG-NEXT:    v_lshlrev_b32_e32 v0, 9, v0
 ; GFX10-SAFE-SDAG-NEXT:    v_or_b32_e32 v0, 0x7c00, v0
-; GFX10-SAFE-SDAG-NEXT:    v_cndmask_b32_e32 v0, s5, v0, vcc_lo
+; GFX10-SAFE-SDAG-NEXT:    v_cndmask_b32_e32 v0, s4, v0, vcc_lo
 ; GFX10-SAFE-SDAG-NEXT:    v_or_b32_e32 v0, s2, v0
 ; GFX10-SAFE-SDAG-NEXT:    s_mov_b32 s2, -1
 ; GFX10-SAFE-SDAG-NEXT:    buffer_store_short v0, off, s[0:3], 0
@@ -428,47 +425,45 @@ define amdgpu_kernel void @fptrunc_f64_to_f16(ptr addrspace(1) %out, double %in)
 ; GFX11-SAFE-SDAG-NEXT:    s_and_b32 s4, s5, 0xffe
 ; GFX11-SAFE-SDAG-NEXT:    s_cmp_lg_u32 s2, 0
 ; GFX11-SAFE-SDAG-NEXT:    s_cselect_b32 s2, -1, 0
-; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_1) | instid1(SALU_CYCLE_1)
-; GFX11-SAFE-SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, s2
-; GFX11-SAFE-SDAG-NEXT:    s_bfe_u32 s2, s3, 0xb0014
-; GFX11-SAFE-SDAG-NEXT:    s_sub_i32 s5, 0x3f1, s2
-; GFX11-SAFE-SDAG-NEXT:    s_addk_i32 s2, 0xfc10
-; GFX11-SAFE-SDAG-NEXT:    v_med3_i32 v1, s5, 0, 13
-; GFX11-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s5, v0
-; GFX11-SAFE-SDAG-NEXT:    s_lshl_b32 s7, s2, 12
-; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_2)
-; GFX11-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s6, v1
-; GFX11-SAFE-SDAG-NEXT:    s_or_b32 s4, s4, s5
-; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
-; GFX11-SAFE-SDAG-NEXT:    s_or_b32 s5, s4, 0x1000
-; GFX11-SAFE-SDAG-NEXT:    s_or_b32 s7, s4, s7
-; GFX11-SAFE-SDAG-NEXT:    s_lshr_b32 s6, s5, s6
+; GFX11-SAFE-SDAG-NEXT:    s_bfe_u32 s5, s3, 0xb0014
+; GFX11-SAFE-SDAG-NEXT:    s_bfe_u32 s2, s2, 0x10000
+; GFX11-SAFE-SDAG-NEXT:    s_sub_i32 s6, 0x3f1, s5
+; GFX11-SAFE-SDAG-NEXT:    s_or_b32 s2, s4, s2
+; GFX11-SAFE-SDAG-NEXT:    v_med3_i32 v0, s6, 0, 13
+; GFX11-SAFE-SDAG-NEXT:    s_or_b32 s4, s2, 0x1000
+; GFX11-SAFE-SDAG-NEXT:    s_addk_i32 s5, 0xfc10
 ; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
-; GFX11-SAFE-SDAG-NEXT:    v_lshlrev_b32_e64 v0, v1, s6
-; GFX11-SAFE-SDAG-NEXT:    v_cmp_ne_u32_e32 vcc_lo, s5, v0
+; GFX11-SAFE-SDAG-NEXT:    s_lshl_b32 s7, s5, 12
+; GFX11-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s6, v0
+; GFX11-SAFE-SDAG-NEXT:    s_or_b32 s7, s2, s7
+; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(SALU_CYCLE_1)
+; GFX11-SAFE-SDAG-NEXT:    s_lshr_b32 s6, s4, s6
+; GFX11-SAFE-SDAG-NEXT:    v_lshlrev_b32_e64 v0, v0, s6
+; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_1)
+; GFX11-SAFE-SDAG-NEXT:    v_cmp_ne_u32_e32 vcc_lo, s4, v0
 ; GFX11-SAFE-SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, vcc_lo
-; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
-; GFX11-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s5, v0
-; GFX11-SAFE-SDAG-NEXT:    s_or_b32 s5, s6, s5
-; GFX11-SAFE-SDAG-NEXT:    s_cmp_lt_i32 s2, 1
-; GFX11-SAFE-SDAG-NEXT:    s_cselect_b32 s5, s5, s7
-; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(SALU_CYCLE_1)
-; GFX11-SAFE-SDAG-NEXT:    s_and_b32 s6, s5, 7
+; GFX11-SAFE-SDAG-NEXT:    v_readfirstlane_b32 s4, v0
+; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_2) | instid1(SALU_CYCLE_1)
+; GFX11-SAFE-SDAG-NEXT:    s_or_b32 s4, s6, s4
+; GFX11-SAFE-SDAG-NEXT:    s_cmp_lt_i32 s5, 1
+; GFX11-SAFE-SDAG-NEXT:    s_cselect_b32 s4, s4, s7
+; GFX11-SAFE-SDAG-NEXT:    s_and_b32 s6, s4, 7
+; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(SALU_CYCLE_1)
 ; GFX11-SAFE-SDAG-NEXT:    s_cmp_gt_i32 s6, 5
 ; GFX11-SAFE-SDAG-NEXT:    s_cselect_b32 s7, -1, 0
 ; GFX11-SAFE-SDAG-NEXT:    s_cmp_eq_u32 s6, 3
 ; GFX11-SAFE-SDAG-NEXT:    s_cselect_b32 s6, -1, 0
-; GFX11-SAFE-SDAG-NEXT:    s_lshr_b32 s5, s5, 2
+; GFX11-SAFE-SDAG-NEXT:    s_lshr_b32 s4, s4, 2
 ; GFX11-SAFE-SDAG-NEXT:    s_or_b32 s6, s6, s7
 ; GFX11-SAFE-SDAG-NEXT:    s_delay_alu instid0(SALU_CYCLE_1)
 ; GFX11-SAFE-SDAG-NEXT:    s_cmp_lg_u32 s6, 0
-; GFX11-SAFE-SDAG-NEXT:    s_addc_u32 s5, s5, 0
-; GFX11-SAFE-SDAG-NEXT:    s_cmp_lt_i32 s2, 31
-; GFX11-SAFE-SDAG-NEXT:    s_cselect_b32 s5, s5, 0x7c00
-; GFX11-SAFE-SDAG-NEXT:    s_cmp_lg_u32 s4, 0
-; GFX11-SAFE-SDAG-NEXT:    s_cselect_b32 s4, -1, 0
-; GFX11-SAFE-SDAG-NEXT:    s_cmpk_eq_i32 s2, 0x40f
-; GFX11-SAFE-SDAG-NEXT:    v...
[truncated]

llvm/lib/Target/AMDGPU/SIFoldOperands.cpp
Comment on lines 1451 to 1455
bool IsSigned = false;
if (Src1->getImm() == -1)
IsSigned = true;
else if (Src1->getImm() != 1)
return false;
Contributor

IsSigned = Src1->getImm() == -1

Contributor Author

This also handles rejecting Src1 != -1/1.

Contributor

but you can check that after, if != 1 && !IsSigned
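
Taken together, the two suggestions would reduce the check to something like this (a sketch of the reviewer's proposal, not necessarily what the updated patch does):

// sext materializes -1; zext/anyext materialize 1. Anything else bails out.
bool IsSigned = Src1->getImm() == -1;
if (Src1->getImm() != 1 && !IsSigned)
  return false;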

github-actions bot commented Oct 25, 2023

✅ With the latest revision this PR passed the C/C++ code formatter.

Pierre-vh (Contributor, Author)

@arsenm Jay found a way to make the pattern in #69703 work, so this is not needed to fix #59869.

Should I abandon this, or is this still a worthwhile addition? There are some positive code changes, but not many tests are affected, so I'm not sure it's worth it. OTOH the code's already done, so it doesn't hurt to just land this too.
No strong opinion.

arsenm (Contributor) commented Oct 26, 2023

> Should I abandon this, or is this still a worthwhile addition? There are some positive code changes, but not many tests are affected, so I'm not sure it's worth it. OTOH the code's already done, so it doesn't hurt to just land this too. No strong opinion.

Is it improving in cross-block cases? Add a comment that it should be removable after GlobalISel?

jayfoad (Contributor) commented Oct 26, 2023

> Should I abandon this, or is this still a worthwhile addition? There are some positive code changes, but not many tests are affected, so I'm not sure it's worth it. OTOH the code's already done, so it doesn't hurt to just land this too. No strong opinion.

> Is it improving in cross-block cases? Add a comment that it should be removable after GlobalISel?

My preference is still not to add this complexity to SIFoldOperands. Are there still "positive code changes" if you rebase this on #69703?

Pierre-vh (Contributor, Author)

I made a couple of changes:

  • Now it folds into an S_CSELECT directly; I think it leads to better codegen.
  • Only allow S_CSELECT -1, 0, because V_CNDMASK applies the mask per lane and we don't know what EXEC looks like: if the first active lane's mask bit disagrees with bit 0, the fold falls apart (sketched below).
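
Relative to the posted diff, which accepted any CSelSrc0 immediate with its LSB set, the tightened guard would presumably read something like the following (a sketch based on the comment above, not the actual revision):

// Require exactly S_CSELECT ..., -1, 0 so every bit of the mask agrees:
// V_CNDMASK applies the mask per lane, and V_READFIRSTLANE returns
// whichever lane happens to be the first active one in EXEC.
if (!CSelSrc0->isImm() || CSelSrc0->getImm() != -1 ||
    !CSelSrc1->isImm() || CSelSrc1->getImm() != 0)
  return false;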

> Should I abandon this, or is this still a worthwhile addition? There are some positive code changes, but not many tests are affected, so I'm not sure it's worth it. OTOH the code's already done, so it doesn't hurt to just land this too. No strong opinion.

> Is it improving in cross-block cases? Add a comment that it should be removable after GlobalISel?

> My preference is still not to add this complexity to SIFoldOperands. Are there still "positive code changes" if you rebase this on #69703?

The affected tests don't overlap. I think with this SIFoldOperands transform we can deal with the i32 ext cases too (the other patch is for i64).

Comment on lines +1405 to +1406
// s_cselect s[2:3], -1, 0
// v_cndmask v0, 0, +-1, s[2:3]
Contributor

This pattern is a uniform select. Why would we have selected v_cndmask instead of s_cselect in the first place?

Contributor Author

I think we just have some other patterns, like the one in the ticket, that always emit a cndmask even when a cselect would do the trick. I'll have a closer look.

Contributor Author

I think it's the i32 zext pattern, similar to the i64 zext pattern we fixed.
Last time I tried changing it, though, there were a lot of test changes and not all of them were good. I can try to look again if this patch is not going to land.

Contributor

I still feel like this patch is just trying to work around poor instruction selection. Can you put up a patch for changing the i32 zext selection pattern, so we can look at that?

arsenm (Contributor) commented Nov 14, 2023

Is this still relevant or did #69703 fix all the cases?

Pierre-vh (Contributor, Author)

It fixed the cases the ticket wanted; this fixes other cases, but I think it's too much effort for little benefit. I can always come back to this later if needed.

@Pierre-vh Pierre-vh closed this Nov 14, 2023