[AArch64] [BranchRelaxation] Optimize for hot code size in AArch64 branch relaxation

On AArch64, it is safe to let the linker handle relaxation of
unconditional branches; in most cases, the destination is within range,
and the linker doesn't need to do anything. If the linker does insert
fixup code, it clobbers x16, the intra-procedure-call scratch register, so x16 must
be available across the branch before linking. If x16 isn't available,
but some other register is, we can relax the branch either by spilling
x16 OR using the free register for a manually-inserted indirect branch.

This patch builds on D145211. While that patch is for correctness, this
one is for performance of the common case. As noted in
https://reviews.llvm.org/D145211#4537173, we can trust the linker to
relax cross-section unconditional branches across which x16 is
available.

Programs that use machine function splitting care most about the
performance of hot code at the expense of the performance of cold code,
so we prioritize minimizing hot code size.

Here's a breakdown of the cases:

   Hot -> Cold [x16 is free across the branch]
     Do nothing; let the linker relax the branch.

   Cold -> Hot [x16 is free across the branch]
     Do nothing; let the linker relax the branch.

   Hot -> Cold [x16 used across the branch, but there is a free register]
     Spill x16; let the linker relax the branch.

     Spilling requires fewer instructions than manually inserting an
     indirect branch.

   Cold -> Hot [x16 used across the branch, but there is a free register]
     Manually insert an indirect branch.

     Spilling would require adding a restore block in the hot section.

   Hot -> Cold [No free regs]
     Spill x16; let the linker relax the branch.

   Cold -> Hot [No free regs]
     Spill x16 and put the restore block at the end of the hot
     function; let the linker relax the branch.
     Ex:
       [Hot section]
       func.hot:
         ... hot code...
       func.restore:
         ... restore x16 ...
         B func.hot

       [Cold section]
        func.cold:
         ... spill x16 ...
         B func.restore

     Putting the restore block at the end of the function instead of
     just before the destination increases the cost of executing the
     restore, but it avoids putting cold code in the middle of hot code.
     Since the restore is very rarely taken, this is a worthwhile
     tradeoff.
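
The case breakdown above amounts to a small decision table. Here is a minimal
sketch of that logic; the enum and function names are hypothetical, not the
actual LLVM API:

```cpp
// Illustrative sketch of the relaxation-strategy choice described above.
// These names do not exist in LLVM; they only encode the four cases.
enum class Strategy {
  LinkerRelax,         // plain B; the linker inserts a thunk if needed
  SpillX16,            // spill x16 around the branch, then let the linker relax
  IndirectBranch,      // materialize the target in a free register and BR
  SpillX16RestoreAtEnd // spill x16; restore block goes at the hot function's end
};

Strategy chooseStrategy(bool SrcIsCold, bool DestIsCold, bool X16Free,
                        bool OtherRegFree) {
  // If x16 is free across the branch, the linker's fixup may clobber it safely.
  if (X16Free)
    return Strategy::LinkerRelax;
  bool ColdToHot = SrcIsCold && !DestIsCold;
  // Cold -> Hot with a free register: an indirect branch keeps the restore
  // code out of the hot section.
  if (ColdToHot && OtherRegFree)
    return Strategy::IndirectBranch;
  // Hot -> Cold: spilling x16 takes fewer instructions than an indirect branch.
  if (!ColdToHot)
    return Strategy::SpillX16;
  // Cold -> Hot with no free registers: spill, and restore at the hot
  // function's end so cold code doesn't land in the middle of hot code.
  return Strategy::SpillX16RestoreAtEnd;
}
```

Under this table, only the Cold -> Hot cases ever add instructions to the hot
section, and even then only a restore block at the function's end.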

Differential Revision: https://reviews.llvm.org/D156767
dhoekwater committed Sep 6, 2023
1 parent a4b82f7 commit 866ae69
Showing 4 changed files with 108 additions and 55 deletions.
37 changes: 34 additions & 3 deletions llvm/lib/CodeGen/BranchRelaxation.cpp
@@ -83,6 +83,8 @@ class BranchRelaxation : public MachineFunctionPass {
// The basic block after which trampolines are inserted. This is the last
// basic block that isn't in the cold section.
MachineBasicBlock *TrampolineInsertionPoint = nullptr;
SmallDenseSet<std::pair<MachineBasicBlock *, MachineBasicBlock *>>
RelaxedUnconditionals;
std::unique_ptr<RegScavenger> RS;
LivePhysRegs LiveRegs;

@@ -148,7 +150,8 @@ void BranchRelaxation::verify() {
if (MI.getOpcode() == TargetOpcode::FAULTING_OP)
continue;
MachineBasicBlock *DestBB = TII->getBranchDestBlock(MI);
assert(isBlockInRange(MI, *DestBB));
assert(isBlockInRange(MI, *DestBB) ||
RelaxedUnconditionals.contains({&MBB, DestBB}));
}
}
#endif
@@ -170,7 +173,9 @@ LLVM_DUMP_METHOD void BranchRelaxation::dumpBBs() {
void BranchRelaxation::scanFunction() {
BlockInfo.clear();
BlockInfo.resize(MF->getNumBlockIDs());

TrampolineInsertionPoint = nullptr;
RelaxedUnconditionals.clear();

// First thing, compute the size of all basic blocks, and see if the function
// has any inline assembly in it. If so, we have to be conservative about
@@ -562,6 +567,8 @@ bool BranchRelaxation::fixupUnconditionalBranch(MachineInstr &MI) {
BranchBB->sortUniqueLiveIns();
BranchBB->addSuccessor(DestBB);
MBB->replaceSuccessor(DestBB, BranchBB);
if (TrampolineInsertionPoint == MBB)
TrampolineInsertionPoint = BranchBB;
}

DebugLoc DL = MI.getDebugLoc();
@@ -585,8 +592,28 @@ bool BranchRelaxation::fixupUnconditionalBranch(MachineInstr &MI) {
BlockInfo[BranchBB->getNumber()].Size = computeBlockSize(*BranchBB);
adjustBlockOffsets(*MBB);

// If RestoreBB is required, try to place just before DestBB.
// If RestoreBB is required, place it appropriately.
if (!RestoreBB->empty()) {
// If the jump is Cold -> Hot, don't place the restore block (which is
// cold) in the middle of the function. Place it at the end.
if (MBB->getSectionID() == MBBSectionID::ColdSectionID &&
DestBB->getSectionID() != MBBSectionID::ColdSectionID) {
MachineBasicBlock *NewBB = createNewBlockAfter(*TrampolineInsertionPoint);
TII->insertUnconditionalBranch(*NewBB, DestBB, DebugLoc());
BlockInfo[NewBB->getNumber()].Size = computeBlockSize(*NewBB);

// New trampolines should be inserted after NewBB.
TrampolineInsertionPoint = NewBB;

// Retarget the unconditional branch to the trampoline block.
BranchBB->replaceSuccessor(DestBB, NewBB);
NewBB->addSuccessor(DestBB);

DestBB = NewBB;
}

// In all other cases, try to place just before DestBB.

// TODO: For multiple far branches to the same destination, there are
// chances that some restore blocks could be shared if they clobber the
// same registers and share the same restore sequence. So far, those
@@ -616,9 +643,11 @@ bool BranchRelaxation::fixupUnconditionalBranch(MachineInstr &MI) {
RestoreBB->setSectionID(DestBB->getSectionID());
RestoreBB->setIsBeginSection(DestBB->isBeginSection());
DestBB->setIsBeginSection(false);
RelaxedUnconditionals.insert({BranchBB, RestoreBB});
} else {
// Remove restore block if it's not required.
MF->erase(RestoreBB);
RelaxedUnconditionals.insert({BranchBB, DestBB});
}

return true;
@@ -644,7 +673,8 @@ bool BranchRelaxation::relaxBranchInstructions() {
// Unconditional branch destination might be unanalyzable, assume these
// are OK.
if (MachineBasicBlock *DestBB = TII->getBranchDestBlock(*Last)) {
if (!isBlockInRange(*Last, *DestBB) && !TII->isTailCall(*Last)) {
if (!isBlockInRange(*Last, *DestBB) && !TII->isTailCall(*Last) &&
!RelaxedUnconditionals.contains({&MBB, DestBB})) {
fixupUnconditionalBranch(*Last);
++NumUnconditionalRelaxed;
Changed = true;
@@ -724,6 +754,7 @@ bool BranchRelaxation::runOnMachineFunction(MachineFunction &mf) {
LLVM_DEBUG(dbgs() << " Basic blocks after relaxation\n\n"; dumpBBs());

BlockInfo.clear();
RelaxedUnconditionals.clear();

return MadeChange;
}
28 changes: 19 additions & 9 deletions llvm/lib/Target/AArch64/AArch64InstrInfo.cpp
@@ -283,30 +283,40 @@ void AArch64InstrInfo::insertIndirectBranch(MachineBasicBlock &MBB,
};

RS->enterBasicBlockEnd(MBB);
Register Reg = RS->FindUnusedReg(&AArch64::GPR64RegClass);

// If there's a free register, manually insert the indirect branch using it.
if (Reg != AArch64::NoRegister) {
buildIndirectBranch(Reg, NewDestBB);
// If X16 is unused, we can rely on the linker to insert a range extension
// thunk if NewDestBB is out of range of a single B instruction.
constexpr Register Reg = AArch64::X16;
if (!RS->isRegUsed(Reg)) {
insertUnconditionalBranch(MBB, &NewDestBB, DL);
RS->setRegUsed(Reg);
return;
}

// Otherwise, spill and use X16. This briefly moves the stack pointer, making
// it incompatible with red zones.
// If there's a free register and it's worth inflating the code size,
// manually insert the indirect branch.
Register Scavenged = RS->FindUnusedReg(&AArch64::GPR64RegClass);
if (Scavenged != AArch64::NoRegister &&
MBB.getSectionID() == MBBSectionID::ColdSectionID) {
buildIndirectBranch(Scavenged, NewDestBB);
RS->setRegUsed(Scavenged);
return;
}

// Note: Spilling X16 briefly moves the stack pointer, making it incompatible
// with red zones.
AArch64FunctionInfo *AFI = MBB.getParent()->getInfo<AArch64FunctionInfo>();
if (!AFI || AFI->hasRedZone().value_or(true))
report_fatal_error(
"Unable to insert indirect branch inside function that has red zone");

Reg = AArch64::X16;
// Otherwise, spill X16 and defer range extension to the linker.
BuildMI(MBB, MBB.end(), DL, get(AArch64::STRXpre))
.addReg(AArch64::SP, RegState::Define)
.addReg(Reg)
.addReg(AArch64::SP)
.addImm(-16);

buildIndirectBranch(Reg, RestoreBB);
BuildMI(MBB, MBB.end(), DL, get(AArch64::B)).addMBB(&RestoreBB);

BuildMI(RestoreBB, RestoreBB.end(), DL, get(AArch64::LDRXpost))
.addReg(AArch64::SP, RegState::Define)
27 changes: 14 additions & 13 deletions llvm/test/CodeGen/AArch64/branch-relax-b.ll
@@ -6,9 +6,7 @@ define void @relax_b_nospill(i1 zeroext %0) {
; CHECK-NEXT: tbnz w0,
; CHECK-SAME: LBB0_1
; CHECK-NEXT: // %bb.3: // %entry
; CHECK-NEXT: adrp [[SCAVENGED_REGISTER:x[0-9]+]], .LBB0_2
; CHECK-NEXT: add [[SCAVENGED_REGISTER]], [[SCAVENGED_REGISTER]], :lo12:.LBB0_2
; CHECK-NEXT: br [[SCAVENGED_REGISTER]]
; CHECK-NEXT: b .LBB0_2
; CHECK-NEXT: .LBB0_1: // %iftrue
; CHECK-NEXT: //APP
; CHECK-NEXT: .zero 2048
@@ -44,9 +42,7 @@ define void @relax_b_spill() {
; CHECK-NEXT: // %bb.4: // %entry
; CHECK-NEXT: str [[SPILL_REGISTER:x[0-9]+]], [sp,
; CHECK-SAME: -16]!
; CHECK-NEXT: adrp [[SPILL_REGISTER]], .LBB1_5
; CHECK-NEXT: add [[SPILL_REGISTER]], [[SPILL_REGISTER]], :lo12:.LBB1_5
; CHECK-NEXT: br [[SPILL_REGISTER]]
; CHECK-NEXT: b .LBB1_5
; CHECK-NEXT: .LBB1_1: // %iftrue
; CHECK-NEXT: //APP
; CHECK-NEXT: .zero 2048
@@ -137,23 +133,28 @@ iffalse:

define void @relax_b_x16_taken() {
; CHECK-LABEL: relax_b_x16_taken: // @relax_b_x16_taken
; COM: Pre-commit to record the behavior of relaxing an unconditional
; COM: branch across which x16 is taken.
; COM: Since the source of the out-of-range branch is hot and x16 is
; COM: taken, it makes sense to spill x16 and let the linker insert
; COM: fixup code for this branch rather than inflating the hot code
; COM: size by eagerly relaxing the unconditional branch.
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: //APP
; CHECK-NEXT: mov x16, #1
; CHECK-NEXT: //NO_APP
; CHECK-NEXT: cbnz x16, .LBB2_1
; CHECK-NEXT: // %bb.3: // %entry
; CHECK-NEXT: adrp [[SCAVENGED_REGISTER2:x[0-9]+]], .LBB2_2
; CHECK-NEXT: add [[SCAVENGED_REGISTER2]], [[SCAVENGED_REGISTER2]], :lo12:.LBB2_2
; CHECK-NEXT: br [[SCAVENGED_REGISTER2]]
; CHECK-NEXT: str [[SPILL_REGISTER]], [sp,
; CHECK-SAME: -16]!
; CHECK-NEXT: b .LBB2_4
; CHECK-NEXT: .LBB2_1: // %iftrue
; CHECK-NEXT: //APP
; CHECK-NEXT: .zero 2048
; CHECK-NEXT: //NO_APP
; CHECK-NEXT: ret
; CHECK-NEXT: .LBB2_2: // %iffalse
; CHECK-NEXT: .LBB2_4: // %iffalse
; CHECK-NEXT: ldr [[SPILL_REGISTER]], [sp],
; CHECK-SAME: 16
; CHECK-NEXT: // %bb.2: // %iffalse
; CHECK-NEXT: //APP
; CHECK-NEXT: // reg use x16
; CHECK-NEXT: //NO_APP
@@ -174,4 +175,4 @@ iffalse:
}

declare i32 @bar()
declare i32 @baz()
declare i32 @baz()
71 changes: 41 additions & 30 deletions llvm/test/CodeGen/AArch64/branch-relax-cross-section.mir
@@ -384,12 +384,19 @@ body: |
; INDIRECT: TBNZW
; INDIRECT-SAME: %bb.2
; INDIRECT: [[TRAMP2]]
; INDIRECT-NEXT: successors: %bb.3
; INDIRECT-NEXT: successors: %bb.6
; INDIRECT: bb.2.end:
; INDIRECT: TCRETURNdi
; INDIRECT: [[TRAMP1]].entry:
; INDIRECT: successors: %bb.3
; INDIRECT-NOT: bbsections Cold
; INDIRECT-NEXT: successors: %[[TRAMP1_SPILL:bb.[0-9]+]]
; INDIRECT: [[TRAMP1_SPILL]].entry:
; INDIRECT-NEXT: successors: %[[TRAMP1_RESTORE:bb.[0-9]+]]
; INDIRECT: early-clobber $sp = STRXpre $[[SPILL_REGISTER:x[0-9]+]], $sp, -16
; INDIRECT-NEXT: B %[[TRAMP1_RESTORE:bb.[0-9]+]]
; INDIRECT: [[TRAMP1_RESTORE]].cold (bbsections Cold):
; INDIRECT-NEXT: successors: %bb.3
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: early-clobber $sp, $[[SPILL_REGISTER]] = LDRXpost $sp, 16
; INDIRECT: bb.3.cold (bbsections Cold):
; INDIRECT: TCRETURNdi
@@ -433,26 +440,30 @@ machineFunctionInfo:
hasRedZone: false
body: |
; INDIRECT-LABEL: name: x16_used_cold_to_hot
; COM: Pre-commit to record the behavior of relaxing a "cold-to-hot"
; COM: unconditional branch across which x16 is taken but there is
; COM: still a free register.
; COM: Check that unconditional branches from the cold section to
; COM: the hot section manually insert indirect branches if x16
; COM: isn't available but there is still a free register.
; INDIRECT: bb.0.entry:
; INDIRECT-NEXT: successors: %bb.1
; INDIRECT-SAME: , %bb.3
; INDIRECT: TBZW killed renamable $w8, 0, %bb.1
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: bb.3.entry:
; INDIRECT-NEXT: successors: %bb.2
; INDIRECT-NEXT: successors: %bb.4
; INDIRECT-NEXT: liveins: $x16
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: $[[SCAVENGED_REGISTER3:x[0-9]+]] = ADRP target-flags(aarch64-page) <mcsymbol x16_used_cold_to_hot.cold>
; INDIRECT-NEXT: $[[SCAVENGED_REGISTER3]] = ADDXri $[[SCAVENGED_REGISTER3]], target-flags(aarch64-pageoff, aarch64-nc) <mcsymbol x16_used_cold_to_hot.cold>, 0
; INDIRECT-NEXT: BR $[[SCAVENGED_REGISTER3]]
; INDIRECT-NEXT: early-clobber $sp = STRXpre $[[SPILL_REGISTER]], $sp, -16
; INDIRECT-NEXT: B %bb.4
; INDIRECT: bb.1.hot:
; INDIRECT-NEXT: liveins: $x16
; INDIRECT: killed $x16
; INDIRECT: RET undef $lr
; INDIRECT: bb.2.cold (bbsections Cold):
; INDIRECT: bb.4.cold (bbsections Cold):
; INDIRECT-NEXT: successors: %bb.2
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: early-clobber $sp, $[[SPILL_REGISTER]] = LDRXpost $sp, 16
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: bb.2.cold (bbsections Cold):
; INDIRECT-NEXT: successors: %bb.5
; INDIRECT-NEXT: liveins: $x16
; INDIRECT-NEXT: {{ $}}
@@ -462,9 +473,9 @@ body: |
; INDIRECT-NEXT: successors: %bb.1
; INDIRECT-NEXT: liveins: $x16
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: $[[SCAVENGED_REGISTER4:x[0-9]+]] = ADRP target-flags(aarch64-page) <mcsymbol .LBB5_1>
; INDIRECT-NEXT: $[[SCAVENGED_REGISTER4]] = ADDXri $[[SCAVENGED_REGISTER4]], target-flags(aarch64-pageoff, aarch64-nc) <mcsymbol .LBB5_1>, 0
; INDIRECT-NEXT: BR $[[SCAVENGED_REGISTER4]]
; INDIRECT-NEXT: $[[SCAVENGED_REGISTER:x[0-9]+]] = ADRP target-flags(aarch64-page) <mcsymbol .LBB5_1>
; INDIRECT-NEXT: $[[SCAVENGED_REGISTER]] = ADDXri $[[SCAVENGED_REGISTER]], target-flags(aarch64-pageoff, aarch64-nc) <mcsymbol .LBB5_1>, 0
; INDIRECT-NEXT: BR $[[SCAVENGED_REGISTER]]
bb.0.entry:
successors: %bb.1, %bb.2
@@ -532,8 +543,10 @@ machineFunctionInfo:
hasRedZone: false
body: |
; INDIRECT-LABEL: name: all_used_cold_to_hot
; COM: Pre-commit to record the behavior of relaxing a "cold-to-hot"
; COM: unconditional branch across which there are no free registers.
; COM: Check that unconditional branches from the cold section to
; COM: the hot section spill x16 and defer indirect branch
; COM: insertion to the linker if there are no free general-purpose
; COM: registers.
; INDIRECT: bb.0.entry:
; INDIRECT-NEXT: successors: %bb.3
; INDIRECT-NEXT: liveins: $fp, $x27, $x28, $x25, $x26, $x23, $x24, $x21, $x22, $x19, $x20
@@ -545,17 +558,7 @@ body: |
; INDIRECT-SAME: $x9, $x10, $x11, $x12, $x13, $x14, $x15, $x17, $x18, $x19,
; INDIRECT-SAME: $x20, $x21, $x22, $x23, $x24, $x25, $x26, $x27, $x28
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: $[[SCAVENGED_REGISTER5:x[0-9]+]] = ADRP target-flags(aarch64-page) <mcsymbol all_used_cold_to_hot.cold>
; INDIRECT-NEXT: $[[SCAVENGED_REGISTER5]] = ADDXri $[[SCAVENGED_REGISTER5]], target-flags(aarch64-pageoff, aarch64-nc) <mcsymbol all_used_cold_to_hot.cold>, 0
; INDIRECT-NEXT: BR $[[SCAVENGED_REGISTER5]]
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: bb.6.exit:
; INDIRECT-NEXT: successors: %bb.1
; INDIRECT-NEXT: liveins: $x0, $x1, $x2, $x3, $x4, $x5, $x6, $x7, $x8, $x9,
; INDIRECT-SAME: $x10, $x11, $x12, $x13, $x14, $x15, $x17, $x18, $x19, $x20,
; INDIRECT-SAME: $x21, $x22, $x23, $x24, $x25, $x26, $x27, $x28
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: early-clobber $sp, $[[SPILL_REGISTER:x[0-9]+]] = LDRXpost $sp, 16
; INDIRECT-NEXT: B %bb.2
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: bb.1.exit:
; INDIRECT-NEXT: liveins: $x0, $x1, $x2, $x3, $x4, $x5, $x6, $x7, $x8, $x9,
@@ -565,6 +568,16 @@
; INDIRECT-COUNT-30: INLINEASM &"# reg use $0", 1 /* sideeffect attdialect */, 9 /* reguse */, killed
; INDIRECT: RET undef $lr
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: bb.6.exit:
; INDIRECT-NEXT: successors: %bb.7(0x80000000)
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: early-clobber $sp, $[[SPILL_REGISTER]] = LDRXpost $sp, 16
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: bb.7.exit:
; INDIRECT-NEXT: successors: %bb.1(0x80000000)
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: B %bb.1
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: bb.2.cold (bbsections Cold):
; INDIRECT-NEXT: successors: %bb.5
; INDIRECT-NEXT: liveins: $x0, $x1, $x2, $x3, $x4, $x5, $x6, $x7, $x8, $x9,
@@ -580,9 +593,7 @@ body: |
; INDIRECT-SAME: $x19, $x20, $x21, $x22, $x23, $x24, $x25, $x26, $x27, $x28
; INDIRECT-NEXT: {{ $}}
; INDIRECT-NEXT: early-clobber $sp = STRXpre $[[SPILL_REGISTER]], $sp, -16
; INDIRECT-NEXT: $[[SPILL_REGISTER]] = ADRP target-flags(aarch64-page) <mcsymbol .LBB6_6>
; INDIRECT-NEXT: $[[SPILL_REGISTER]] = ADDXri $[[SPILL_REGISTER]], target-flags(aarch64-pageoff, aarch64-nc) <mcsymbol .LBB6_6>, 0
; INDIRECT-NEXT: BR $[[SPILL_REGISTER]]
; INDIRECT-NEXT: B %bb.6
bb.0.entry:
successors: %bb.2
