
[AMDGPU] RA inserted scalar instructions can be at the BB top #72140

Merged: 3 commits, Nov 16, 2023

Conversation

cdevadas
Collaborator

We adjust the insertion point at the BB top for spills/copies during RA to ensure they are placed after the exec restore instructions required for divergent control flow execution. However, this adjustment is needed only for vector operations; insertions for scalar registers can still go at the BB top.
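The idea behind the patch can be sketched with a simplified standalone model (this is not the actual LLVM API; `RegKind`, `Inst`, and the helper names below are invented for illustration). An instruction pins the insertion point only when the register being placed is null or a vector register, since scalar (SGPR) insertions do not depend on the exec mask:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified model of the relevant instruction properties: whether the
// instruction is a spill and whether it writes the exec mask.
struct Inst {
  bool IsSpill = false;
  bool WritesExec = false;
};

// The register being inserted for: absent, scalar (SGPR), or vector (VGPR).
enum class RegKind { None, Scalar, Vector };

// Sketch of the post-patch isBasicBlockPrologue logic: the instruction
// blocks insertion before it only for null-or-vector registers.
bool isBlockPrologue(const Inst &I, RegKind Reg) {
  bool IsNullOrVector = (Reg != RegKind::Scalar);
  return IsNullOrVector && (I.IsSpill || I.WritesExec);
}

// Sketch of SkipPHIsLabelsAndDebug: advance past prologue instructions
// to find the insertion index for the given register kind.
std::size_t insertionIndex(const std::vector<Inst> &Block, RegKind Reg) {
  std::size_t I = 0;
  while (I < Block.size() && isBlockPrologue(Block[I], Reg))
    ++I;
  return I;
}
```

For a block that begins with an exec-restore instruction, a vector-register copy is placed after it, while a scalar-register copy may go at the block top, which is exactly the distinction the patch threads through via the new `Register` parameter.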

@llvmbot
Collaborator

llvmbot commented Nov 13, 2023

@llvm/pr-subscribers-llvm-regalloc

Author: Christudasan Devadasan (cdevadas)

Changes

We adjust the insertion point at the BB top for spills/copies during RA to ensure they are placed after the exec restore instructions required for divergent control flow execution. However, this adjustment is needed only for vector operations; insertions for scalar registers can still go at the BB top.


Full diff: https://github.com/llvm/llvm-project/pull/72140.diff

12 Files Affected:

  • (modified) llvm/include/llvm/CodeGen/MachineBasicBlock.h (+4-2)
  • (modified) llvm/include/llvm/CodeGen/TargetInstrInfo.h (+2-1)
  • (modified) llvm/lib/CodeGen/FixupStatepointCallerSaved.cpp (+2-1)
  • (modified) llvm/lib/CodeGen/InlineSpiller.cpp (+1-1)
  • (modified) llvm/lib/CodeGen/MachineBasicBlock.cpp (+3-3)
  • (modified) llvm/lib/CodeGen/MachineSink.cpp (+1-1)
  • (modified) llvm/lib/CodeGen/RegAllocFast.cpp (+1-1)
  • (modified) llvm/lib/CodeGen/SplitKit.cpp (+4-2)
  • (modified) llvm/lib/Target/AMDGPU/SIFixSGPRCopies.cpp (+2-2)
  • (modified) llvm/lib/Target/AMDGPU/SIInstrInfo.cpp (+11-3)
  • (modified) llvm/lib/Target/AMDGPU/SIInstrInfo.h (+2-1)
  • (modified) llvm/test/CodeGen/AMDGPU/identical-subrange-spill-infloop.ll (+60-59)
diff --git a/llvm/include/llvm/CodeGen/MachineBasicBlock.h b/llvm/include/llvm/CodeGen/MachineBasicBlock.h
index 4b5336fac33ea46..2caefe80f352791 100644
--- a/llvm/include/llvm/CodeGen/MachineBasicBlock.h
+++ b/llvm/include/llvm/CodeGen/MachineBasicBlock.h
@@ -846,8 +846,10 @@ class MachineBasicBlock
 
   /// Return the first instruction in MBB after I that is not a PHI, label or
   /// debug.  This is the correct point to insert copies at the beginning of a
-  /// basic block.
-  iterator SkipPHIsLabelsAndDebug(iterator I, bool SkipPseudoOp = true);
+  /// basic block. The optional arg \p Reg would allow targets to apply some
+  /// additional checks while inserting copies/spills during RA.
+  iterator SkipPHIsLabelsAndDebug(iterator I, Register Reg = Register(),
+                                  bool SkipPseudoOp = true);
 
   /// Returns an iterator to the first terminator instruction of this basic
   /// block. If a terminator does not exist, it returns end().
diff --git a/llvm/include/llvm/CodeGen/TargetInstrInfo.h b/llvm/include/llvm/CodeGen/TargetInstrInfo.h
index 8e7499ac626a747..0487fd644d0ddf3 100644
--- a/llvm/include/llvm/CodeGen/TargetInstrInfo.h
+++ b/llvm/include/llvm/CodeGen/TargetInstrInfo.h
@@ -1989,7 +1989,8 @@ class TargetInstrInfo : public MCInstrInfo {
   /// True if the instruction is bound to the top of its basic block and no
   /// other instructions shall be inserted before it. This can be implemented
   /// to prevent register allocator to insert spills before such instructions.
-  virtual bool isBasicBlockPrologue(const MachineInstr &MI) const {
+  virtual bool isBasicBlockPrologue(const MachineInstr &MI,
+                                    Register Reg) const {
     return false;
   }
 
diff --git a/llvm/lib/CodeGen/FixupStatepointCallerSaved.cpp b/llvm/lib/CodeGen/FixupStatepointCallerSaved.cpp
index 75504ef32250c52..4d668c53f7156b8 100644
--- a/llvm/lib/CodeGen/FixupStatepointCallerSaved.cpp
+++ b/llvm/lib/CodeGen/FixupStatepointCallerSaved.cpp
@@ -461,7 +461,8 @@ class StatepointState {
 
       if (EHPad && !RC.hasReload(Reg, RegToSlotIdx[Reg], EHPad)) {
         RC.recordReload(Reg, RegToSlotIdx[Reg], EHPad);
-        auto EHPadInsertPoint = EHPad->SkipPHIsLabelsAndDebug(EHPad->begin());
+        auto EHPadInsertPoint =
+            EHPad->SkipPHIsLabelsAndDebug(EHPad->begin(), Reg);
         insertReloadBefore(Reg, EHPadInsertPoint, EHPad);
         LLVM_DEBUG(dbgs() << "...also reload at EHPad "
                           << printMBBReference(*EHPad) << "\n");
diff --git a/llvm/lib/CodeGen/InlineSpiller.cpp b/llvm/lib/CodeGen/InlineSpiller.cpp
index 71d58b2e9e18d7d..2740265f75340b5 100644
--- a/llvm/lib/CodeGen/InlineSpiller.cpp
+++ b/llvm/lib/CodeGen/InlineSpiller.cpp
@@ -469,7 +469,7 @@ bool InlineSpiller::hoistSpillInsideBB(LiveInterval &SpillLI,
   MachineBasicBlock *MBB = LIS.getMBBFromIndex(SrcVNI->def);
   MachineBasicBlock::iterator MII;
   if (SrcVNI->isPHIDef())
-    MII = MBB->SkipPHIsLabelsAndDebug(MBB->begin());
+    MII = MBB->SkipPHIsLabelsAndDebug(MBB->begin(), SrcReg);
   else {
     MachineInstr *DefMI = LIS.getInstructionFromIndex(SrcVNI->def);
     assert(DefMI && "Defining instruction disappeared");
diff --git a/llvm/lib/CodeGen/MachineBasicBlock.cpp b/llvm/lib/CodeGen/MachineBasicBlock.cpp
index d9e22685faf5f5e..0c92d5d910dc14e 100644
--- a/llvm/lib/CodeGen/MachineBasicBlock.cpp
+++ b/llvm/lib/CodeGen/MachineBasicBlock.cpp
@@ -212,7 +212,7 @@ MachineBasicBlock::SkipPHIsAndLabels(MachineBasicBlock::iterator I) {
 
   iterator E = end();
   while (I != E && (I->isPHI() || I->isPosition() ||
-                    TII->isBasicBlockPrologue(*I)))
+                    TII->isBasicBlockPrologue(*I, Register())))
     ++I;
   // FIXME: This needs to change if we wish to bundle labels
   // inside the bundle.
@@ -223,13 +223,13 @@ MachineBasicBlock::SkipPHIsAndLabels(MachineBasicBlock::iterator I) {
 
 MachineBasicBlock::iterator
 MachineBasicBlock::SkipPHIsLabelsAndDebug(MachineBasicBlock::iterator I,
-                                          bool SkipPseudoOp) {
+                                          Register Reg, bool SkipPseudoOp) {
   const TargetInstrInfo *TII = getParent()->getSubtarget().getInstrInfo();
 
   iterator E = end();
   while (I != E && (I->isPHI() || I->isPosition() || I->isDebugInstr() ||
                     (SkipPseudoOp && I->isPseudoProbe()) ||
-                    TII->isBasicBlockPrologue(*I)))
+                    TII->isBasicBlockPrologue(*I, Reg)))
     ++I;
   // FIXME: This needs to change if we wish to bundle labels / dbg_values
   // inside the bundle.
diff --git a/llvm/lib/CodeGen/MachineSink.cpp b/llvm/lib/CodeGen/MachineSink.cpp
index 9ed6b22eae972cc..1da6ccccbd540c4 100644
--- a/llvm/lib/CodeGen/MachineSink.cpp
+++ b/llvm/lib/CodeGen/MachineSink.cpp
@@ -291,7 +291,7 @@ static bool blockPrologueInterferes(const MachineBasicBlock *BB,
   for (MachineBasicBlock::const_iterator PI = BB->getFirstNonPHI(); PI != End;
        ++PI) {
     // Only check target defined prologue instructions
-    if (!TII->isBasicBlockPrologue(*PI))
+    if (!TII->isBasicBlockPrologue(*PI, Register()))
       continue;
     for (auto &MO : MI.operands()) {
       if (!MO.isReg())
diff --git a/llvm/lib/CodeGen/RegAllocFast.cpp b/llvm/lib/CodeGen/RegAllocFast.cpp
index 9f3c17e2799ceae..03f8c26ad2d0f8f 100644
--- a/llvm/lib/CodeGen/RegAllocFast.cpp
+++ b/llvm/lib/CodeGen/RegAllocFast.cpp
@@ -514,7 +514,7 @@ RegAllocFast::getMBBBeginInsertionPoint(
     }
 
     // Most reloads should be inserted after prolog instructions.
-    if (!TII->isBasicBlockPrologue(*I))
+    if (!TII->isBasicBlockPrologue(*I, Register()))
       break;
 
     // However if a prolog instruction reads a register that needs to be
diff --git a/llvm/lib/CodeGen/SplitKit.cpp b/llvm/lib/CodeGen/SplitKit.cpp
index 1664c304f643c3f..b1c862210932bc3 100644
--- a/llvm/lib/CodeGen/SplitKit.cpp
+++ b/llvm/lib/CodeGen/SplitKit.cpp
@@ -795,8 +795,10 @@ SlotIndex SplitEditor::leaveIntvAtTop(MachineBasicBlock &MBB) {
     return Start;
   }
 
-  VNInfo *VNI = defFromParent(0, ParentVNI, Start, MBB,
-                              MBB.SkipPHIsLabelsAndDebug(MBB.begin()));
+  unsigned RegIdx = 0;
+  Register Reg = LIS.getInterval(Edit->get(RegIdx)).reg();
+  VNInfo *VNI = defFromParent(RegIdx, ParentVNI, Start, MBB,
+                              MBB.SkipPHIsLabelsAndDebug(MBB.begin(), Reg));
   RegAssign.insert(Start, VNI->def, OpenIdx);
   LLVM_DEBUG(dump());
   return VNI->def;
diff --git a/llvm/lib/Target/AMDGPU/SIFixSGPRCopies.cpp b/llvm/lib/Target/AMDGPU/SIFixSGPRCopies.cpp
index 86980ee851bb1fc..1cb0e96e0e22a2a 100644
--- a/llvm/lib/Target/AMDGPU/SIFixSGPRCopies.cpp
+++ b/llvm/lib/Target/AMDGPU/SIFixSGPRCopies.cpp
@@ -424,7 +424,7 @@ static bool isReachable(const MachineInstr *From,
 static MachineBasicBlock::iterator
 getFirstNonPrologue(MachineBasicBlock *MBB, const TargetInstrInfo *TII) {
   MachineBasicBlock::iterator I = MBB->getFirstNonPHI();
-  while (I != MBB->end() && TII->isBasicBlockPrologue(*I))
+  while (I != MBB->end() && TII->isBasicBlockPrologue(*I, Register()))
     ++I;
 
   return I;
@@ -578,7 +578,7 @@ static bool hoistAndMergeSGPRInits(unsigned Reg,
       MachineBasicBlock::reverse_iterator B(BoundaryMI);
       // Check if B should actually be a boundary. If not set the previous
       // instruction as the boundary instead.
-      if (!TII->isBasicBlockPrologue(*B))
+      if (!TII->isBasicBlockPrologue(*B, Register()))
         B++;
 
       auto R = std::next(MI->getReverseIterator());
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
index 027b695c3bb1a74..09682a387b45a86 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.cpp
@@ -8474,16 +8474,24 @@ unsigned SIInstrInfo::getLiveRangeSplitOpcode(Register SrcReg,
   return AMDGPU::COPY;
 }
 
-bool SIInstrInfo::isBasicBlockPrologue(const MachineInstr &MI) const {
+bool SIInstrInfo::isBasicBlockPrologue(const MachineInstr &MI,
+                                       Register Reg) const {
   // We need to handle instructions which may be inserted during register
   // allocation to handle the prolog. The initial prolog instruction may have
   // been separated from the start of the block by spills and copies inserted
-  // needed by the prolog.
+  // needed by the prolog. However, the insertions for scalar registers can
+  // always be placed at the BB top as they are independent of the exec mask
+  // value.
   uint16_t Opc = MI.getOpcode();
+  const MachineFunction *MF = MI.getParent()->getParent();
+  const MachineRegisterInfo &MRI = MF->getRegInfo();
+  bool IsNullOrVectorRegister =
+      !Reg || !RI.isSGPRClass(RI.getRegClassForReg(MRI, Reg));
 
   // FIXME: Copies inserted in the block prolog for live-range split should also
   // be included.
-  return (isSpillOpcode(Opc) || (!MI.isTerminator() && Opc != AMDGPU::COPY &&
+  return IsNullOrVectorRegister &&
+         (isSpillOpcode(Opc) || (!MI.isTerminator() && Opc != AMDGPU::COPY &&
                                  MI.modifiesRegister(AMDGPU::EXEC, &RI)));
 }
 
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.h b/llvm/lib/Target/AMDGPU/SIInstrInfo.h
index 29f549fc29a3ce6..6834a661196ed75 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.h
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.h
@@ -1179,7 +1179,8 @@ class SIInstrInfo final : public AMDGPUGenInstrInfo {
   unsigned getLiveRangeSplitOpcode(Register Reg,
                                    const MachineFunction &MF) const override;
 
-  bool isBasicBlockPrologue(const MachineInstr &MI) const override;
+  bool isBasicBlockPrologue(const MachineInstr &MI,
+                            Register Reg) const override;
 
   MachineInstr *createPHIDestinationCopy(MachineBasicBlock &MBB,
                                          MachineBasicBlock::iterator InsPt,
diff --git a/llvm/test/CodeGen/AMDGPU/identical-subrange-spill-infloop.ll b/llvm/test/CodeGen/AMDGPU/identical-subrange-spill-infloop.ll
index 8098304d134229d..ffbf00765adbe22 100644
--- a/llvm/test/CodeGen/AMDGPU/identical-subrange-spill-infloop.ll
+++ b/llvm/test/CodeGen/AMDGPU/identical-subrange-spill-infloop.ll
@@ -168,7 +168,6 @@ define void @main(i1 %arg) #0 {
 ; CHECK-NEXT:    s_mov_b64 vcc, vcc
 ; CHECK-NEXT:    s_cbranch_vccnz .LBB0_2
 ; CHECK-NEXT:  .LBB0_3: ; %Flow14
-; CHECK-NEXT:    s_or_saveexec_b64 s[20:21], s[26:27]
 ; CHECK-NEXT:    s_waitcnt lgkmcnt(0)
 ; CHECK-NEXT:    v_readlane_b32 s12, v5, 32
 ; CHECK-NEXT:    v_readlane_b32 s13, v5, 33
@@ -178,39 +177,39 @@ define void @main(i1 %arg) #0 {
 ; CHECK-NEXT:    v_readlane_b32 s17, v5, 37
 ; CHECK-NEXT:    v_readlane_b32 s18, v5, 38
 ; CHECK-NEXT:    v_readlane_b32 s19, v5, 39
-; CHECK-NEXT:    v_writelane_b32 v5, s4, 56
-; CHECK-NEXT:    v_writelane_b32 v5, s5, 57
-; CHECK-NEXT:    v_writelane_b32 v5, s6, 58
-; CHECK-NEXT:    v_writelane_b32 v5, s7, 59
-; CHECK-NEXT:    v_writelane_b32 v5, s8, 60
-; CHECK-NEXT:    v_writelane_b32 v5, s9, 61
-; CHECK-NEXT:    v_writelane_b32 v5, s10, 62
-; CHECK-NEXT:    v_writelane_b32 v5, s11, 63
-; CHECK-NEXT:    v_writelane_b32 v5, s52, 40
-; CHECK-NEXT:    v_writelane_b32 v5, s53, 41
-; CHECK-NEXT:    v_writelane_b32 v5, s54, 42
-; CHECK-NEXT:    v_writelane_b32 v5, s55, 43
-; CHECK-NEXT:    v_writelane_b32 v5, s56, 44
-; CHECK-NEXT:    v_writelane_b32 v5, s57, 45
-; CHECK-NEXT:    v_writelane_b32 v5, s58, 46
-; CHECK-NEXT:    v_writelane_b32 v5, s59, 47
-; CHECK-NEXT:    v_writelane_b32 v4, s12, 0
-; CHECK-NEXT:    v_writelane_b32 v5, s60, 48
-; CHECK-NEXT:    v_writelane_b32 v4, s13, 1
-; CHECK-NEXT:    v_writelane_b32 v5, s61, 49
-; CHECK-NEXT:    v_writelane_b32 v4, s14, 2
-; CHECK-NEXT:    v_writelane_b32 v5, s62, 50
-; CHECK-NEXT:    v_writelane_b32 v4, s15, 3
-; CHECK-NEXT:    v_writelane_b32 v5, s63, 51
-; CHECK-NEXT:    v_writelane_b32 v4, s16, 4
-; CHECK-NEXT:    v_writelane_b32 v5, s64, 52
-; CHECK-NEXT:    v_writelane_b32 v4, s17, 5
-; CHECK-NEXT:    v_writelane_b32 v5, s65, 53
-; CHECK-NEXT:    v_writelane_b32 v4, s18, 6
-; CHECK-NEXT:    v_writelane_b32 v5, s66, 54
-; CHECK-NEXT:    v_writelane_b32 v4, s19, 7
-; CHECK-NEXT:    v_writelane_b32 v5, s67, 55
-; CHECK-NEXT:    s_xor_b64 exec, exec, s[20:21]
+; CHECK-NEXT:    v_writelane_b32 v5, s4, 40
+; CHECK-NEXT:    v_writelane_b32 v5, s5, 41
+; CHECK-NEXT:    v_writelane_b32 v5, s6, 42
+; CHECK-NEXT:    v_writelane_b32 v5, s7, 43
+; CHECK-NEXT:    v_writelane_b32 v5, s8, 44
+; CHECK-NEXT:    v_writelane_b32 v5, s9, 45
+; CHECK-NEXT:    v_writelane_b32 v5, s10, 46
+; CHECK-NEXT:    v_writelane_b32 v5, s11, 47
+; CHECK-NEXT:    v_writelane_b32 v5, s12, 48
+; CHECK-NEXT:    v_writelane_b32 v5, s13, 49
+; CHECK-NEXT:    v_writelane_b32 v5, s14, 50
+; CHECK-NEXT:    v_writelane_b32 v5, s15, 51
+; CHECK-NEXT:    v_writelane_b32 v5, s16, 52
+; CHECK-NEXT:    v_writelane_b32 v5, s17, 53
+; CHECK-NEXT:    v_writelane_b32 v5, s18, 54
+; CHECK-NEXT:    v_writelane_b32 v5, s19, 55
+; CHECK-NEXT:    v_writelane_b32 v5, s52, 56
+; CHECK-NEXT:    v_writelane_b32 v4, s60, 0
+; CHECK-NEXT:    v_writelane_b32 v5, s53, 57
+; CHECK-NEXT:    v_writelane_b32 v4, s61, 1
+; CHECK-NEXT:    v_writelane_b32 v5, s54, 58
+; CHECK-NEXT:    v_writelane_b32 v4, s62, 2
+; CHECK-NEXT:    v_writelane_b32 v5, s55, 59
+; CHECK-NEXT:    v_writelane_b32 v4, s63, 3
+; CHECK-NEXT:    v_writelane_b32 v5, s56, 60
+; CHECK-NEXT:    v_writelane_b32 v4, s64, 4
+; CHECK-NEXT:    v_writelane_b32 v5, s57, 61
+; CHECK-NEXT:    v_writelane_b32 v4, s65, 5
+; CHECK-NEXT:    v_writelane_b32 v5, s58, 62
+; CHECK-NEXT:    v_writelane_b32 v4, s66, 6
+; CHECK-NEXT:    v_writelane_b32 v5, s59, 63
+; CHECK-NEXT:    v_writelane_b32 v4, s67, 7
+; CHECK-NEXT:    s_andn2_saveexec_b64 s[20:21], s[26:27]
 ; CHECK-NEXT:    s_cbranch_execz .LBB0_10
 ; CHECK-NEXT:  ; %bb.4: ; %bb32
 ; CHECK-NEXT:    s_and_saveexec_b64 s[8:9], s[24:25]
@@ -265,39 +264,35 @@ define void @main(i1 %arg) #0 {
 ; CHECK-NEXT:    s_waitcnt vmcnt(1)
 ; CHECK-NEXT:    buffer_store_dwordx4 v[2:5], off, s[8:11], 0
 ; CHECK-NEXT:  .LBB0_6: ; %Flow12
-; CHECK-NEXT:    s_andn2_saveexec_b64 s[4:5], s[22:23]
+; CHECK-NEXT:    s_or_saveexec_b64 s[4:5], s[22:23]
+; CHECK-NEXT:    v_readlane_b32 s52, v5, 40
+; CHECK-NEXT:    v_readlane_b32 s53, v5, 41
+; CHECK-NEXT:    v_readlane_b32 s54, v5, 42
+; CHECK-NEXT:    v_readlane_b32 s55, v5, 43
+; CHECK-NEXT:    v_readlane_b32 s56, v5, 44
+; CHECK-NEXT:    v_readlane_b32 s57, v5, 45
+; CHECK-NEXT:    v_readlane_b32 s58, v5, 46
+; CHECK-NEXT:    v_readlane_b32 s59, v5, 47
+; CHECK-NEXT:    v_readlane_b32 s60, v5, 48
+; CHECK-NEXT:    v_readlane_b32 s61, v5, 49
+; CHECK-NEXT:    v_readlane_b32 s62, v5, 50
+; CHECK-NEXT:    v_readlane_b32 s63, v5, 51
+; CHECK-NEXT:    v_readlane_b32 s64, v5, 52
+; CHECK-NEXT:    v_readlane_b32 s65, v5, 53
+; CHECK-NEXT:    v_readlane_b32 s66, v5, 54
+; CHECK-NEXT:    v_readlane_b32 s67, v5, 55
+; CHECK-NEXT:    s_xor_b64 exec, exec, s[4:5]
 ; CHECK-NEXT:    s_cbranch_execz .LBB0_9
 ; CHECK-NEXT:  ; %bb.7: ; %bb33.preheader
 ; CHECK-NEXT:    s_mov_b32 s8, 0
 ; CHECK-NEXT:    s_mov_b32 s6, s8
-; CHECK-NEXT:    v_readlane_b32 s36, v5, 40
 ; CHECK-NEXT:    s_mov_b32 s7, s8
 ; CHECK-NEXT:    v_mov_b32_e32 v2, s6
-; CHECK-NEXT:    v_readlane_b32 s37, v5, 41
+; CHECK-NEXT:    v_readlane_b32 s36, v5, 56
 ; CHECK-NEXT:    s_mov_b32 s9, s8
 ; CHECK-NEXT:    s_mov_b32 s10, s8
 ; CHECK-NEXT:    s_mov_b32 s11, s8
 ; CHECK-NEXT:    v_mov_b32_e32 v3, s7
-; CHECK-NEXT:    v_readlane_b32 s38, v5, 42
-; CHECK-NEXT:    v_readlane_b32 s39, v5, 43
-; CHECK-NEXT:    v_readlane_b32 s40, v5, 44
-; CHECK-NEXT:    v_readlane_b32 s41, v5, 45
-; CHECK-NEXT:    v_readlane_b32 s42, v5, 46
-; CHECK-NEXT:    v_readlane_b32 s43, v5, 47
-; CHECK-NEXT:    v_readlane_b32 s44, v5, 48
-; CHECK-NEXT:    v_readlane_b32 s45, v5, 49
-; CHECK-NEXT:    v_readlane_b32 s46, v5, 50
-; CHECK-NEXT:    v_readlane_b32 s47, v5, 51
-; CHECK-NEXT:    v_readlane_b32 s48, v5, 52
-; CHECK-NEXT:    v_readlane_b32 s49, v5, 53
-; CHECK-NEXT:    v_readlane_b32 s50, v5, 54
-; CHECK-NEXT:    v_readlane_b32 s51, v5, 55
-; CHECK-NEXT:    s_mov_b64 s[12:13], s[36:37]
-; CHECK-NEXT:    s_mov_b64 s[14:15], s[38:39]
-; CHECK-NEXT:    s_mov_b64 s[16:17], s[40:41]
-; CHECK-NEXT:    s_mov_b64 s[18:19], s[42:43]
-; CHECK-NEXT:    image_sample_lz v6, v[2:3], s[36:43], s[8:11] dmask:0x1
-; CHECK-NEXT:    v_readlane_b32 s36, v5, 56
 ; CHECK-NEXT:    v_readlane_b32 s37, v5, 57
 ; CHECK-NEXT:    v_readlane_b32 s38, v5, 58
 ; CHECK-NEXT:    v_readlane_b32 s39, v5, 59
@@ -305,19 +300,25 @@ define void @main(i1 %arg) #0 {
 ; CHECK-NEXT:    v_readlane_b32 s41, v5, 61
 ; CHECK-NEXT:    v_readlane_b32 s42, v5, 62
 ; CHECK-NEXT:    v_readlane_b32 s43, v5, 63
+; CHECK-NEXT:    s_nop 4
+; CHECK-NEXT:    image_sample_lz v6, v[2:3], s[36:43], s[8:11] dmask:0x1
+; CHECK-NEXT:    image_sample_lz v7, v[2:3], s[52:59], s[8:11] dmask:0x1
 ; CHECK-NEXT:    ; kill: killed $vgpr2_vgpr3
+; CHECK-NEXT:    s_mov_b64 s[12:13], s[36:37]
 ; CHECK-NEXT:    s_and_b64 vcc, exec, 0
 ; CHECK-NEXT:    v_readlane_b32 s44, v4, 0
 ; CHECK-NEXT:    v_readlane_b32 s45, v4, 1
 ; CHECK-NEXT:    v_readlane_b32 s46, v4, 2
 ; CHECK-NEXT:    v_readlane_b32 s47, v4, 3
-; CHECK-NEXT:    image_sample_lz v7, v[2:3], s[36:43], s[8:11] dmask:0x1
 ; CHECK-NEXT:    v_readlane_b32 s48, v4, 4
 ; CHECK-NEXT:    v_readlane_b32 s49, v4, 5
 ; CHECK-NEXT:    v_readlane_b32 s50, v4, 6
 ; CHECK-NEXT:    v_readlane_b32 s51, v4, 7
+; CHECK-NEXT:    s_mov_b64 s[14:15], s[38:39]
+; CHECK-NEXT:    s_mov_b64 s[16:17], s[40:41]
+; CHECK-NEXT:    s_mov_b64 s[18:19], s[42:43]
 ; CHECK-NEXT:    ; kill: killed $sgpr12_sgpr13_sgpr14_sgpr15_sgpr16_sgpr17_sgpr18_sgpr19
-; CHECK-NEXT:    ; kill: killed $sgpr36_sgpr37_sgpr38_sgpr39_sgpr40_sgpr41_sgpr42_sgpr43
+; CHECK-NEXT:    ; kill: killed $sgpr52_sgpr53_sgpr54_sgpr55_sgpr56_sgpr57_sgpr58_sgpr59
 ; CHECK-NEXT:    ; kill: killed $sgpr8_sgpr9_sgpr10 killed $sgpr11
 ; CHECK-NEXT:    s_waitcnt vmcnt(0)
 ; CHECK-NEXT:    v_sub_f32_e32 v2, v7, v6

@llvmbot
Collaborator

llvmbot commented Nov 13, 2023

@llvm/pr-subscribers-backend-amdgpu

(Same change summary and diff as above.)
+; CHECK-NEXT:    v_writelane_b32 v5, s9, 45
+; CHECK-NEXT:    v_writelane_b32 v5, s10, 46
+; CHECK-NEXT:    v_writelane_b32 v5, s11, 47
+; CHECK-NEXT:    v_writelane_b32 v5, s12, 48
+; CHECK-NEXT:    v_writelane_b32 v5, s13, 49
+; CHECK-NEXT:    v_writelane_b32 v5, s14, 50
+; CHECK-NEXT:    v_writelane_b32 v5, s15, 51
+; CHECK-NEXT:    v_writelane_b32 v5, s16, 52
+; CHECK-NEXT:    v_writelane_b32 v5, s17, 53
+; CHECK-NEXT:    v_writelane_b32 v5, s18, 54
+; CHECK-NEXT:    v_writelane_b32 v5, s19, 55
+; CHECK-NEXT:    v_writelane_b32 v5, s52, 56
+; CHECK-NEXT:    v_writelane_b32 v4, s60, 0
+; CHECK-NEXT:    v_writelane_b32 v5, s53, 57
+; CHECK-NEXT:    v_writelane_b32 v4, s61, 1
+; CHECK-NEXT:    v_writelane_b32 v5, s54, 58
+; CHECK-NEXT:    v_writelane_b32 v4, s62, 2
+; CHECK-NEXT:    v_writelane_b32 v5, s55, 59
+; CHECK-NEXT:    v_writelane_b32 v4, s63, 3
+; CHECK-NEXT:    v_writelane_b32 v5, s56, 60
+; CHECK-NEXT:    v_writelane_b32 v4, s64, 4
+; CHECK-NEXT:    v_writelane_b32 v5, s57, 61
+; CHECK-NEXT:    v_writelane_b32 v4, s65, 5
+; CHECK-NEXT:    v_writelane_b32 v5, s58, 62
+; CHECK-NEXT:    v_writelane_b32 v4, s66, 6
+; CHECK-NEXT:    v_writelane_b32 v5, s59, 63
+; CHECK-NEXT:    v_writelane_b32 v4, s67, 7
+; CHECK-NEXT:    s_andn2_saveexec_b64 s[20:21], s[26:27]
 ; CHECK-NEXT:    s_cbranch_execz .LBB0_10
 ; CHECK-NEXT:  ; %bb.4: ; %bb32
 ; CHECK-NEXT:    s_and_saveexec_b64 s[8:9], s[24:25]
@@ -265,39 +264,35 @@ define void @main(i1 %arg) #0 {
 ; CHECK-NEXT:    s_waitcnt vmcnt(1)
 ; CHECK-NEXT:    buffer_store_dwordx4 v[2:5], off, s[8:11], 0
 ; CHECK-NEXT:  .LBB0_6: ; %Flow12
-; CHECK-NEXT:    s_andn2_saveexec_b64 s[4:5], s[22:23]
+; CHECK-NEXT:    s_or_saveexec_b64 s[4:5], s[22:23]
+; CHECK-NEXT:    v_readlane_b32 s52, v5, 40
+; CHECK-NEXT:    v_readlane_b32 s53, v5, 41
+; CHECK-NEXT:    v_readlane_b32 s54, v5, 42
+; CHECK-NEXT:    v_readlane_b32 s55, v5, 43
+; CHECK-NEXT:    v_readlane_b32 s56, v5, 44
+; CHECK-NEXT:    v_readlane_b32 s57, v5, 45
+; CHECK-NEXT:    v_readlane_b32 s58, v5, 46
+; CHECK-NEXT:    v_readlane_b32 s59, v5, 47
+; CHECK-NEXT:    v_readlane_b32 s60, v5, 48
+; CHECK-NEXT:    v_readlane_b32 s61, v5, 49
+; CHECK-NEXT:    v_readlane_b32 s62, v5, 50
+; CHECK-NEXT:    v_readlane_b32 s63, v5, 51
+; CHECK-NEXT:    v_readlane_b32 s64, v5, 52
+; CHECK-NEXT:    v_readlane_b32 s65, v5, 53
+; CHECK-NEXT:    v_readlane_b32 s66, v5, 54
+; CHECK-NEXT:    v_readlane_b32 s67, v5, 55
+; CHECK-NEXT:    s_xor_b64 exec, exec, s[4:5]
 ; CHECK-NEXT:    s_cbranch_execz .LBB0_9
 ; CHECK-NEXT:  ; %bb.7: ; %bb33.preheader
 ; CHECK-NEXT:    s_mov_b32 s8, 0
 ; CHECK-NEXT:    s_mov_b32 s6, s8
-; CHECK-NEXT:    v_readlane_b32 s36, v5, 40
 ; CHECK-NEXT:    s_mov_b32 s7, s8
 ; CHECK-NEXT:    v_mov_b32_e32 v2, s6
-; CHECK-NEXT:    v_readlane_b32 s37, v5, 41
+; CHECK-NEXT:    v_readlane_b32 s36, v5, 56
 ; CHECK-NEXT:    s_mov_b32 s9, s8
 ; CHECK-NEXT:    s_mov_b32 s10, s8
 ; CHECK-NEXT:    s_mov_b32 s11, s8
 ; CHECK-NEXT:    v_mov_b32_e32 v3, s7
-; CHECK-NEXT:    v_readlane_b32 s38, v5, 42
-; CHECK-NEXT:    v_readlane_b32 s39, v5, 43
-; CHECK-NEXT:    v_readlane_b32 s40, v5, 44
-; CHECK-NEXT:    v_readlane_b32 s41, v5, 45
-; CHECK-NEXT:    v_readlane_b32 s42, v5, 46
-; CHECK-NEXT:    v_readlane_b32 s43, v5, 47
-; CHECK-NEXT:    v_readlane_b32 s44, v5, 48
-; CHECK-NEXT:    v_readlane_b32 s45, v5, 49
-; CHECK-NEXT:    v_readlane_b32 s46, v5, 50
-; CHECK-NEXT:    v_readlane_b32 s47, v5, 51
-; CHECK-NEXT:    v_readlane_b32 s48, v5, 52
-; CHECK-NEXT:    v_readlane_b32 s49, v5, 53
-; CHECK-NEXT:    v_readlane_b32 s50, v5, 54
-; CHECK-NEXT:    v_readlane_b32 s51, v5, 55
-; CHECK-NEXT:    s_mov_b64 s[12:13], s[36:37]
-; CHECK-NEXT:    s_mov_b64 s[14:15], s[38:39]
-; CHECK-NEXT:    s_mov_b64 s[16:17], s[40:41]
-; CHECK-NEXT:    s_mov_b64 s[18:19], s[42:43]
-; CHECK-NEXT:    image_sample_lz v6, v[2:3], s[36:43], s[8:11] dmask:0x1
-; CHECK-NEXT:    v_readlane_b32 s36, v5, 56
 ; CHECK-NEXT:    v_readlane_b32 s37, v5, 57
 ; CHECK-NEXT:    v_readlane_b32 s38, v5, 58
 ; CHECK-NEXT:    v_readlane_b32 s39, v5, 59
@@ -305,19 +300,25 @@ define void @main(i1 %arg) #0 {
 ; CHECK-NEXT:    v_readlane_b32 s41, v5, 61
 ; CHECK-NEXT:    v_readlane_b32 s42, v5, 62
 ; CHECK-NEXT:    v_readlane_b32 s43, v5, 63
+; CHECK-NEXT:    s_nop 4
+; CHECK-NEXT:    image_sample_lz v6, v[2:3], s[36:43], s[8:11] dmask:0x1
+; CHECK-NEXT:    image_sample_lz v7, v[2:3], s[52:59], s[8:11] dmask:0x1
 ; CHECK-NEXT:    ; kill: killed $vgpr2_vgpr3
+; CHECK-NEXT:    s_mov_b64 s[12:13], s[36:37]
 ; CHECK-NEXT:    s_and_b64 vcc, exec, 0
 ; CHECK-NEXT:    v_readlane_b32 s44, v4, 0
 ; CHECK-NEXT:    v_readlane_b32 s45, v4, 1
 ; CHECK-NEXT:    v_readlane_b32 s46, v4, 2
 ; CHECK-NEXT:    v_readlane_b32 s47, v4, 3
-; CHECK-NEXT:    image_sample_lz v7, v[2:3], s[36:43], s[8:11] dmask:0x1
 ; CHECK-NEXT:    v_readlane_b32 s48, v4, 4
 ; CHECK-NEXT:    v_readlane_b32 s49, v4, 5
 ; CHECK-NEXT:    v_readlane_b32 s50, v4, 6
 ; CHECK-NEXT:    v_readlane_b32 s51, v4, 7
+; CHECK-NEXT:    s_mov_b64 s[14:15], s[38:39]
+; CHECK-NEXT:    s_mov_b64 s[16:17], s[40:41]
+; CHECK-NEXT:    s_mov_b64 s[18:19], s[42:43]
 ; CHECK-NEXT:    ; kill: killed $sgpr12_sgpr13_sgpr14_sgpr15_sgpr16_sgpr17_sgpr18_sgpr19
-; CHECK-NEXT:    ; kill: killed $sgpr36_sgpr37_sgpr38_sgpr39_sgpr40_sgpr41_sgpr42_sgpr43
+; CHECK-NEXT:    ; kill: killed $sgpr52_sgpr53_sgpr54_sgpr55_sgpr56_sgpr57_sgpr58_sgpr59
 ; CHECK-NEXT:    ; kill: killed $sgpr8_sgpr9_sgpr10 killed $sgpr11
 ; CHECK-NEXT:    s_waitcnt vmcnt(0)
 ; CHECK-NEXT:    v_sub_f32_e32 v2, v7, v6

@rampitec (Collaborator) left a comment:
You also need a specific small test for the situation.

@@ -846,8 +846,10 @@ class MachineBasicBlock

/// Return the first instruction in MBB after I that is not a PHI, label or
/// debug. This is the correct point to insert copies at the beginning of a
-/// basic block.
+/// basic block. The optional arg \p Reg would allow targets to apply some
 iterator SkipPHIsLabelsAndDebug(iterator I, bool SkipPseudoOp = true);
Collaborator:
The description of Reg does not seem to describe anything. What is this register?

Collaborator:
+1 on the description not being descriptive enough.

@@ -1989,7 +1989,8 @@ class TargetInstrInfo : public MCInstrInfo {
/// True if the instruction is bound to the top of its basic block and no
/// other instructions shall be inserted before it. This can be implemented
/// to prevent register allocator to insert spills before such instructions.
Collaborator:
Reg needs to be described.

-  // needed by the prolog.
+  // needed by the prolog. However, the insertions for scalar registers can
+  // always be placed at the BB top as they are independent of the exec mask
+  // value.
   uint16_t Opc = MI.getOpcode();
Collaborator:
You can avoid executing this for empty Reg.

Collaborator:
Still the whole block can be avoided if Reg is unset.

llvm/include/llvm/CodeGen/TargetInstrInfo.h (resolved)
cdevadas added a commit that referenced this pull request Nov 14, 2023
We adjust the insertion point at the BB top for spills/copies
during RA to ensure they are placed after the exec restore
instructions required for the divergent control flow execution.
This is, however, required only for the vector operations. The
insertions for scalar registers can still go at the BB top.
@cdevadas (Collaborator, Author):
Lit test precheckin + rebase + comments addressed.

-/// basic block.
+/// basic block. \p Reg is an optional argument passed during register
+/// allocator to have additional target specific checks for its spill/copy
 iterator SkipPHIsLabelsAndDebug(iterator I, bool SkipPseudoOp = true);
Collaborator:
allocator -> allocation. I also believe it shall be spill/restore/copy.

Contributor:
This is still a roundabout way of saying what the register value is. Say something that expresses this is the register being defined for a spill/split

Contributor:
I'm thinking this should be a differently named function, specifically for the spill insert point

-  // needed by the prolog.
+  // needed by the prolog. However, the insertions for scalar registers can
+  // always be placed at the BB top as they are independent of the exec mask
+  // value.
   uint16_t Opc = MI.getOpcode();
Collaborator:
Still the whole block can be avoided if Reg is unset.

@cdevadas (Collaborator, Author):
Review comments addressed.

@rampitec (Collaborator) left a comment:
LGTM with a small nit.

@@ -846,8 +846,10 @@ class MachineBasicBlock

 /// Return the first instruction in MBB after I that is not a PHI, label or
 /// debug. This is the correct point to insert copies at the beginning of a
-/// basic block.
+/// basic block. \p Reg is the register being defined for a spill/split during
 iterator SkipPHIsLabelsAndDebug(iterator I, bool SkipPseudoOp = true);
Collaborator:
Actually spill does not define a register, it kills it. Restore defines register.

@rampitec (Collaborator) left a comment:

LGTM

@cdevadas cdevadas merged commit ce7fd49 into llvm:main Nov 16, 2023
3 checks passed
zahiraam pushed a commit to zahiraam/llvm-project that referenced this pull request Nov 20, 2023
zahiraam pushed a commit to zahiraam/llvm-project that referenced this pull request Nov 20, 2023
…2140)

We adjust the insertion point at the BB top for spills/copies during RA
to ensure they are placed after the exec restore instructions required
for the divergent control flow execution. This is, however, required
only for the vector operations. The insertions for scalar registers can
still go to the BB top.
rocm-ci pushed a commit to ROCm/llvm-project that referenced this pull request Dec 15, 2023
Change-Id: I8f5e2bc14ab9ae77d0c3a4571549eaff02bd9cf9
rocm-ci pushed a commit to ROCm/llvm-project that referenced this pull request Dec 15, 2023
…2140)

We adjust the insertion point at the BB top for spills/copies during RA
to ensure they are placed after the exec restore instructions required
for the divergent control flow execution. This is, however, required
only for the vector operations. The insertions for scalar registers can
still go to the BB top.

Change-Id: I0ee60b84c53c73d65d8bc9b6fdfc0bcb1e86c4fe
5 participants