Conversation

@linuxrocks123
Contributor

This PR contains a peephole transformation to change this pattern:

s_cselect_b64 s[0:1], X!=0, 0
v_cndmask_b32_e64 v0, Z, Y, s[0:1]
v_readfirstlane_b32 s0, v0

To:

s_cselect_b32 s0, Y, Z

@llvmbot
Member

llvmbot commented Nov 12, 2025

@llvm/pr-subscribers-backend-amdgpu

Author: Patrick Simmons (linuxrocks123)

Changes

This PR contains a peephole transformation to change this pattern:

s_cselect_b64 s[0:1], X!=0, 0
v_cndmask_b32_e64 v0, Z, Y, s[0:1]
v_readfirstlane_b32 s0, v0

To:

s_cselect_b32 s0, Y, Z


Full diff: https://github.com/llvm/llvm-project/pull/167780.diff

1 file affected:

  • (modified) llvm/lib/Target/AMDGPU/SIPeepholeSDWA.cpp (+45)
diff --git a/llvm/lib/Target/AMDGPU/SIPeepholeSDWA.cpp b/llvm/lib/Target/AMDGPU/SIPeepholeSDWA.cpp
index 86ca22cfeffd8..c427dd03a8529 100644
--- a/llvm/lib/Target/AMDGPU/SIPeepholeSDWA.cpp
+++ b/llvm/lib/Target/AMDGPU/SIPeepholeSDWA.cpp
@@ -24,6 +24,7 @@
 #include "GCNSubtarget.h"
 #include "MCTargetDesc/AMDGPUMCTargetDesc.h"
 #include "llvm/ADT/MapVector.h"
+#include "llvm/ADT/STLExtras.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include <optional>
@@ -66,6 +67,7 @@ class SIPeepholeSDWA {
   MachineInstr *createSDWAVersion(MachineInstr &MI);
   bool convertToSDWA(MachineInstr &MI, const SDWAOperandsVector &SDWAOperands);
   void legalizeScalarOperands(MachineInstr &MI, const GCNSubtarget &ST) const;
+  bool strengthReduceCSelect64(MachineFunction &MF);
 
 public:
   bool run(MachineFunction &MF);
@@ -1362,6 +1364,46 @@ void SIPeepholeSDWA::legalizeScalarOperands(MachineInstr &MI,
   }
 }
 
+bool SIPeepholeSDWA::strengthReduceCSelect64(MachineFunction &MF) {
+  bool Changed = false;
+
+  for (MachineBasicBlock &MBB : MF)
+    for (MachineInstr &MI : make_early_inc_range(MBB)) {
+      if (MI.getOpcode() != AMDGPU::S_CSELECT_B64 ||
+          !MI.getOperand(1).isImm() || !MI.getOperand(2).isImm() ||
+          (MI.getOperand(1).getImm() != 0 && MI.getOperand(2).getImm() != 0))
+        continue;
+
+      Register Reg = MI.getOperand(0).getReg();
+      MachineInstr *MustBeVCNDMASK = MRI->getOneNonDBGUser(Reg);
+      if (!MustBeVCNDMASK ||
+          MustBeVCNDMASK->getOpcode() != AMDGPU::V_CNDMASK_B32_e64 ||
+          !MustBeVCNDMASK->getOperand(1).isImm() ||
+          !MustBeVCNDMASK->getOperand(2).isImm())
+        continue;
+
+      MachineInstr *MustBeVREADFIRSTLANE =
+          MRI->getOneNonDBGUser(MustBeVCNDMASK->getOperand(0).getReg());
+      if (!MustBeVREADFIRSTLANE ||
+          MustBeVREADFIRSTLANE->getOpcode() != AMDGPU::V_READFIRSTLANE_B32)
+        continue;
+
+      unsigned CSelectZeroOpIdx = MI.getOperand(1).getImm() ? 2 : 1;
+
+      BuildMI(MBB, MI, MI.getDebugLoc(), TII->get(AMDGPU::S_CSELECT_B32),
+              MustBeVREADFIRSTLANE->getOperand(0).getReg())
+          .addImm(MustBeVCNDMASK->getOperand(CSelectZeroOpIdx + 2).getImm())
+          .addImm(
+              MustBeVCNDMASK->getOperand((CSelectZeroOpIdx == 1 ? 2 : 1) + 2)
+                  .getImm())
+          .addReg(AMDGPU::SCC, RegState::Implicit);
+
+      MustBeVREADFIRSTLANE->eraseFromParent();
+    }
+
+  return Changed;
+}
+
 bool SIPeepholeSDWALegacy::runOnMachineFunction(MachineFunction &MF) {
   if (skipFunction(MF.getFunction()))
     return false;
@@ -1436,6 +1478,9 @@ bool SIPeepholeSDWA::run(MachineFunction &MF) {
     } while (Changed);
   }
 
+  // Other target-specific SSA-form peephole optimizations
+  Ret |= strengthReduceCSelect64(MF);
+
   return Ret;
 }
 

@linuxrocks123
Contributor Author

Requesting review from @LU-JOHN.

@linuxrocks123
Contributor Author

@LU-JOHN right now Y and Z must be immediates. I think that's necessary because otherwise s_cselect_b32 would have to read from a vector register, which it can't do.

Contributor

@arsenm arsenm left a comment

Missing test, and the strategy is probably wrong. This has nothing to do with SDWA, and is hacking around DAG SALU/VALU hackery. The solution would be upstream of this pass

@@ -1362,6 +1364,46 @@ void SIPeepholeSDWA::legalizeScalarOperands(MachineInstr &MI,
}
}

bool SIPeepholeSDWA::strengthReduceCSelect64(MachineFunction &MF) {
Contributor

This has nothing to do with SDWA and doesn't belong here

Contributor Author

@arsenm our existing general peephole pass runs after register allocation, which is too late. This pass is running at about the right time. I could rename this pass or create a new pre-RA peephole pass. Which would you prefer?

@linuxrocks123
Contributor Author

Missing test, and the strategy is probably wrong. This has nothing to do with SDWA, and is hacking around DAG SALU/VALU hackery. The solution would be upstream of this pass

Existing tests cover this (but their CHECK lines need to be updated). Although the pattern originates in SelectionDAG's instruction selection, I think it would probably be easier to change it with a peephole pass. The v_readfirstlane is generated very early in SelectionDAG, and presumably needs to be there most of the time, so I didn't see an easy way to remove it at that level.

I could have missed it -- SelectionDAG's architecture is confusing to me -- so, if you see an easy way to fix this there, please let me know what function to look at. If not, this peephole transformation will work, too.

Contributor

@jayfoad jayfoad left a comment

s_cselect_b64 s[0:1], X!=0, 0
v_cndmask_b32_e64 v0, Z, Y, s[0:1]
v_readfirstlane_b32 s0, v0

To:

s_cselect_b32 s0, Y, Z

That's not safe unless you know that the bit in X corresponding to the first active lane is 1. Much easier to only do this if X==-1.
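
To make the hazard concrete, here is a minimal standalone C++ sketch (not backend code) of the failure case; the SCC value, exec mask, X, Y, and Z below are hypothetical. With SCC set and X a nonzero mask that is not all-ones, v_cndmask plus v_readfirstlane returns whatever the first active lane's bit of X selects, while the folded s_cselect_b32 would always return Y:

#include <cstdint>
#include <cstdio>

int main() {
  bool SCC = true;          // assume the preceding compare set SCC
  uint64_t X = 0b0001;      // nonzero immediate, but not all-ones
  uint64_t Exec = 0b1100;   // assume lanes 2 and 3 are the active lanes
  int Y = 10, Z = 20;

  uint64_t Mask = SCC ? X : 0;                       // s_cselect_b64 s[0:1], X, 0
  unsigned FirstActive = __builtin_ctzll(Exec);      // lane that v_readfirstlane reads
  int Original = (Mask >> FirstActive) & 1 ? Y : Z;  // v_cndmask + v_readfirstlane -> Z
  int Folded = SCC ? Y : Z;                          // proposed s_cselect_b32 -> Y

  printf("original=%d folded=%d\n", Original, Folded); // 20 vs 10: they disagree
  return 0;
}

Restricting the fold to X == -1 (with 0 in the other arm) makes every lane's select bit equal to SCC, so the first active lane's choice always matches the scalar one.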

@linuxrocks123
Contributor Author

@jayfoad thanks for catching this. Question: is it possible to determine the first active lane at compile time, to avoid being unnecessarily conservative, or is the answer "no, because that varies with the thread executing this function, so we can't know it at compile time"?

Contributor

@arsenm arsenm left a comment

Needs tests. An IR example would help figure out where this should be addressed; I think this is an issue upstream of any of the mid-pipeline peepholes.

@jayfoad
Contributor

jayfoad commented Nov 15, 2025

Question: is it possible to determine the first active lane at compile time, to avoid being unnecessarily conservative, or is the answer "no, because that varies with the thread executing this function, so we can't know it at compile time"?

In general it is not possible. (There might be some special cases where the compiler could do it, but so far I don't think the backend tries to do anything like that.)
