[VPlan] Sink predicated stores with complementary masks. #168771
Conversation
This commit implements hoisting of predicated loads that are executed on both paths with complementary predicates (P and NOT P). When such loads access the same address, they can be hoisted to the loop entry as a single unpredicated load, eliminating branching overhead.

Key features:
- Uses SCEV to group loads by address, handling different GEP instructions that compute the same address
- Checks for complementary masks (P and NOT P)
- Clones address computations when needed to maintain SSA form
- Hoists as unpredicated VPReplicateRecipe (no widening yet)
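For orientation, here is a minimal source-level sketch (not part of the patch) of the kind of loop this targets: both branches read the same address under complementary conditions, so after predication the vectorizer sees two predicated loads of `src[i]` with masks P and NOT P. The function name and constants are illustrative, loosely following the `src`/`dst`/`cond` naming used in the tests below.

```cpp
// Illustrative sketch only: both arms of the branch read src[i], one under
// the condition (mask P) and one under its negation (mask NOT P). The
// transform described above replaces the two predicated loads in the
// vectorized loop with a single unconditional load of src[i].
void hoist_example(int *dst, const int *src, const int *cond, int n) {
  for (int i = 0; i < n; ++i) {
    int v;
    if (cond[i] <= 11)
      v = src[i] + 10; // load of src[i] executed when P is true
    else
      v = src[i] - 5;  // load of src[i] executed when P is false
    dst[i] = v;
  }
}
```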
@llvm/pr-subscribers-vectorizers @llvm/pr-subscribers-llvm-transforms

Author: Florian Hahn (fhahn)

Changes

Extend the logic to hoist predicated loads (#168373) to sink predicated stores with complementary masks in a similar fashion.

The patch refactors some of the existing logic for legality checks to be shared between hoisting and sinking, and adds a new sinking transform on top.

With respect to the legality checks, for sinking stores the code also checks for loads that may alias, not only stores.

Patch is 83.70 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/168771.diff

5 Files Affected:
diff --git a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
index 267be2e243fc3..96294727514e2 100644
--- a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
+++ b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
@@ -8317,6 +8317,8 @@ void LoopVectorizationPlanner::buildVPlansWithVPRecipes(ElementCount MinVF,
if (auto Plan = tryToBuildVPlanWithVPRecipes(
std::unique_ptr<VPlan>(VPlan0->duplicate()), SubRange, &LVer)) {
// Now optimize the initial VPlan.
+ VPlanTransforms::hoistPredicatedLoads(*Plan, *PSE.getSE(), OrigLoop);
+ VPlanTransforms::sinkPredicatedStores(*Plan, *PSE.getSE(), OrigLoop);
VPlanTransforms::runPass(VPlanTransforms::truncateToMinimalBitwidths,
*Plan, CM.getMinimalBitwidths());
VPlanTransforms::runPass(VPlanTransforms::optimize, *Plan);
diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
index 25557f1d5d651..a8bedb623c4a2 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
@@ -42,6 +42,8 @@
#include "llvm/Support/TypeSize.h"
#include "llvm/Transforms/Utils/ScalarEvolutionExpander.h"
+#define DEBUG_TYPE "loop-vectorize"
+
using namespace llvm;
using namespace VPlanPatternMatch;
@@ -3974,6 +3976,296 @@ void VPlanTransforms::hoistInvariantLoads(VPlan &Plan) {
}
}
+// Collect common metadata from a group of replicate recipes by intersecting
+// metadata from all recipes in the group.
+static VPIRMetadata getCommonMetadata(ArrayRef<VPReplicateRecipe *> Recipes) {
+ VPIRMetadata CommonMetadata = *Recipes.front();
+ for (VPReplicateRecipe *Recipe : drop_begin(Recipes))
+ CommonMetadata.intersect(*Recipe);
+ return CommonMetadata;
+}
+
+// Helper to check if we can prove no aliasing using scoped noalias metadata.
+static bool canProveNoAlias(const AAMDNodes &AA1, const AAMDNodes &AA2) {
+ return AA1.Scope && AA2.NoAlias &&
+ !ScopedNoAliasAAResult::mayAliasInScopes(AA1.Scope, AA2.NoAlias);
+}
+
+// Check if a memory operation doesn't alias with memory operations in blocks
+// between FirstBB and LastBB using scoped noalias metadata.
+// For load hoisting, we only check writes in one direction.
+// For store sinking, we check both reads and writes bidirectionally.
+static bool canHoistOrSinkWithNoAliasCheck(
+ const MemoryLocation &MemLoc, VPBasicBlock *FirstBB, VPBasicBlock *LastBB,
+ bool CheckReads,
+ const SmallPtrSetImpl<VPRecipeBase *> *ExcludeRecipes = nullptr) {
+ if (!MemLoc.AATags.Scope)
+ return false;
+
+ const AAMDNodes &MemAA = MemLoc.AATags;
+
+ for (VPBlockBase *Block = FirstBB; Block;
+ Block = Block->getSingleSuccessor()) {
+ assert(Block->getNumSuccessors() <= 1 &&
+ "Expected at most one successor in block chain");
+ auto *VPBB = cast<VPBasicBlock>(Block);
+ for (VPRecipeBase &R : *VPBB) {
+ if (ExcludeRecipes && ExcludeRecipes->contains(&R))
+ continue;
+
+ // Skip recipes that don't need checking.
+ if (!R.mayWriteToMemory() && !(CheckReads && R.mayReadFromMemory()))
+ continue;
+
+ auto Loc = vputils::getMemoryLocation(R);
+ if (!Loc)
+ // Conservatively assume aliasing for memory operations without
+ // location. We already filtered by
+ // mayWriteToMemory()/mayReadFromMemory() above.
+ return false;
+
+ // Check for aliasing using scoped noalias metadata.
+ // For store sinking with CheckReads, we can prove no aliasing
+ // bidirectionally (either direction suffices).
+ if (CheckReads) {
+ if (canProveNoAlias(Loc->AATags, MemAA) ||
+ canProveNoAlias(MemAA, Loc->AATags))
+ continue;
+ }
+
+ // Check if the memory operations may alias in the standard direction.
+ if (ScopedNoAliasAAResult::mayAliasInScopes(MemAA.Scope,
+ Loc->AATags.NoAlias))
+ return false;
+ }
+
+ if (Block == LastBB)
+ break;
+ }
+ return true;
+}
+
+template <unsigned Opcode>
+static SmallVector<SmallVector<VPReplicateRecipe *, 4>>
+collectComplementaryPredicatedMemOps(VPlan &Plan, ScalarEvolution &SE,
+ const Loop *L) {
+ static_assert(Opcode == Instruction::Load || Opcode == Instruction::Store,
+ "Only Load and Store opcodes supported");
+ constexpr bool IsLoad = (Opcode == Instruction::Load);
+ VPRegionBlock *LoopRegion = Plan.getVectorLoopRegion();
+ VPTypeAnalysis TypeInfo(Plan);
+
+ // Group predicated operations by their address SCEV.
+ MapVector<const SCEV *, SmallVector<VPReplicateRecipe *>> RecipesByAddress;
+ for (VPBlockBase *Block : vp_depth_first_shallow(LoopRegion->getEntry())) {
+ auto *VPBB = cast<VPBasicBlock>(Block);
+ for (VPRecipeBase &R : *VPBB) {
+ auto *RepR = dyn_cast<VPReplicateRecipe>(&R);
+ if (!RepR || RepR->getOpcode() != Opcode || !RepR->isPredicated())
+ continue;
+
+ // For loads, operand 0 is address; for stores, operand 1 is address.
+ VPValue *Addr = RepR->getOperand(IsLoad ? 0 : 1);
+ const SCEV *AddrSCEV = vputils::getSCEVExprForVPValue(Addr, SE, L);
+ if (!isa<SCEVCouldNotCompute>(AddrSCEV))
+ RecipesByAddress[AddrSCEV].push_back(RepR);
+ }
+ }
+
+ // For each address, collect operations with the same or complementary masks.
+ SmallVector<SmallVector<VPReplicateRecipe *, 4>> AllGroups;
+ for (auto &[Addr, Recipes] : RecipesByAddress) {
+ if (Recipes.size() < 2)
+ continue;
+
+ // Collect groups with the same or complementary masks.
+ for (VPReplicateRecipe *&RecipeI : Recipes) {
+ if (!RecipeI)
+ continue;
+
+ VPValue *MaskI = RecipeI->getMask();
+ Type *TypeI =
+ TypeInfo.inferScalarType(IsLoad ? RecipeI : RecipeI->getOperand(0));
+ SmallVector<VPReplicateRecipe *, 4> Group;
+ Group.push_back(RecipeI);
+ RecipeI = nullptr;
+
+ // Find all operations with the same or complementary masks.
+ bool HasComplementaryMask = false;
+ for (VPReplicateRecipe *&RecipeJ : Recipes) {
+ if (!RecipeJ)
+ continue;
+
+ VPValue *MaskJ = RecipeJ->getMask();
+ Type *TypeJ =
+ TypeInfo.inferScalarType(IsLoad ? RecipeJ : RecipeJ->getOperand(0));
+ if (TypeI == TypeJ) {
+ // Check if any operation in the group has a complementary mask with
+ // another, that is M1 == NOT(M2) or M2 == NOT(M1).
+ HasComplementaryMask |= match(MaskI, m_Not(m_Specific(MaskJ))) ||
+ match(MaskJ, m_Not(m_Specific(MaskI)));
+ Group.push_back(RecipeJ);
+ RecipeJ = nullptr;
+ }
+ }
+
+ if (HasComplementaryMask) {
+ assert(Group.size() >= 2 && "must have at least 2 entries");
+ AllGroups.push_back(std::move(Group));
+ }
+ }
+ }
+
+ return AllGroups;
+}
+
+void VPlanTransforms::hoistPredicatedLoads(VPlan &Plan, ScalarEvolution &SE,
+ const Loop *L) {
+ auto Groups =
+ collectComplementaryPredicatedMemOps<Instruction::Load>(Plan, SE, L);
+ if (Groups.empty())
+ return;
+
+ VPDominatorTree VPDT(Plan);
+
+ // Process each group of loads.
+ for (auto &Group : Groups) {
+ // Sort loads by dominance order, with earliest (most dominating) first.
+ sort(Group, [&VPDT](VPReplicateRecipe *A, VPReplicateRecipe *B) {
+ return VPDT.properlyDominates(A, B);
+ });
+
+ // Try to use the earliest (most dominating) load to replace all others.
+ VPReplicateRecipe *EarliestLoad = Group[0];
+ VPBasicBlock *FirstBB = EarliestLoad->getParent();
+ VPBasicBlock *LastBB = Group.back()->getParent();
+
+ // Check that the load doesn't alias with stores between first and last.
+ auto LoadLoc = vputils::getMemoryLocation(*EarliestLoad);
+ if (!LoadLoc ||
+ !canHoistOrSinkWithNoAliasCheck(*LoadLoc, FirstBB, LastBB,
+ /*CheckReads=*/false))
+ continue;
+
+ // Collect common metadata from all loads in the group.
+ VPIRMetadata CommonMetadata = getCommonMetadata(Group);
+
+ // Create an unpredicated version of the earliest load with common
+ // metadata.
+ auto *UnpredicatedLoad = new VPReplicateRecipe(
+ EarliestLoad->getUnderlyingInstr(), {EarliestLoad->getOperand(0)},
+ /*IsSingleScalar=*/false, /*Mask=*/nullptr, *EarliestLoad,
+ CommonMetadata);
+
+ UnpredicatedLoad->insertBefore(EarliestLoad);
+
+ // Replace all loads in the group with the unpredicated load.
+ for (VPReplicateRecipe *Load : Group) {
+ Load->replaceAllUsesWith(UnpredicatedLoad);
+ Load->eraseFromParent();
+ }
+ }
+}
+
+static bool canSinkStoreWithNoAliasCheck(
+ VPReplicateRecipe *Store, ArrayRef<VPReplicateRecipe *> StoresToSink,
+ const SmallPtrSetImpl<VPRecipeBase *> *AlreadySunkStores = nullptr) {
+ auto StoreLoc = vputils::getMemoryLocation(*Store);
+ if (!StoreLoc)
+ return false;
+
+ SmallPtrSet<VPRecipeBase *, 4> StoresToSinkSet(StoresToSink.begin(),
+ StoresToSink.end());
+ if (AlreadySunkStores)
+ StoresToSinkSet.insert(AlreadySunkStores->begin(),
+ AlreadySunkStores->end());
+
+ VPBasicBlock *FirstBB = StoresToSink.front()->getParent();
+ VPBasicBlock *LastBB = StoresToSink.back()->getParent();
+
+ if (StoreLoc->AATags.Scope)
+ return canHoistOrSinkWithNoAliasCheck(*StoreLoc, FirstBB, LastBB,
+ /*CheckReads=*/true,
+ &StoresToSinkSet);
+
+ // Without alias scope metadata, we conservatively require no memory
+ // operations between the stores being sunk.
+ for (VPBlockBase *Block = FirstBB; Block;
+ Block = Block->getSingleSuccessor()) {
+ auto *VPBB = cast<VPBasicBlock>(Block);
+ for (VPRecipeBase &R : *VPBB) {
+ if (StoresToSinkSet.contains(&R))
+ continue;
+
+ if (R.mayReadFromMemory() || R.mayWriteToMemory())
+ return false;
+ }
+
+ if (Block == LastBB)
+ break;
+ }
+
+ return true;
+}
+
+void VPlanTransforms::sinkPredicatedStores(VPlan &Plan, ScalarEvolution &SE,
+ const Loop *L) {
+ auto Groups =
+ collectComplementaryPredicatedMemOps<Instruction::Store>(Plan, SE, L);
+
+ if (Groups.empty())
+ return;
+
+ VPDominatorTree VPDT(Plan);
+
+ // Track stores from all groups that have been successfully sunk to exclude
+ // them from alias checks for subsequent groups.
+ SmallPtrSet<VPRecipeBase *, 16> AlreadySunkStores;
+
+ for (auto &Group : Groups) {
+ sort(Group, [&VPDT](VPReplicateRecipe *A, VPReplicateRecipe *B) {
+ return VPDT.properlyDominates(A, B);
+ });
+
+ if (!canSinkStoreWithNoAliasCheck(Group[0], Group, &AlreadySunkStores))
+ continue;
+
+ // Use the last (most dominated) store's location for the unconditional
+ // store.
+ VPReplicateRecipe *LastStore = Group.back();
+ VPBasicBlock *InsertBB = LastStore->getParent();
+
+ // Collect common alias metadata from all stores in the group.
+ VPIRMetadata CommonMetadata = getCommonMetadata(Group);
+
+ // Build select chain for stored values.
+ VPValue *SelectedValue = Group[0]->getOperand(0);
+ VPBuilder Builder(InsertBB, LastStore->getIterator());
+
+ for (unsigned I = 1; I < Group.size(); ++I) {
+ VPValue *Mask = Group[I]->getMask();
+ VPValue *Value = Group[I]->getOperand(0);
+ SelectedValue = Builder.createSelect(Mask, Value, SelectedValue,
+ Group[I]->getDebugLoc());
+ }
+
+ // Create unconditional store with selected value and common metadata.
+ VPValue *AddrVPValue = Group[0]->getOperand(1);
+ SmallVector<VPValue *> Operands = {SelectedValue, AddrVPValue};
+ auto *SI = cast<StoreInst>(Group[0]->getUnderlyingInstr());
+ auto *UnpredicatedStore =
+ new VPReplicateRecipe(SI, Operands, /*IsSingleScalar=*/false,
+ /*Mask=*/nullptr, *LastStore, CommonMetadata);
+ UnpredicatedStore->insertBefore(*InsertBB, LastStore->getIterator());
+
+ // Track and remove all predicated stores from the group.
+ for (VPReplicateRecipe *Store : Group) {
+ AlreadySunkStores.insert(Store);
+ Store->eraseFromParent();
+ }
+ }
+}
+
void VPlanTransforms::materializeConstantVectorTripCount(
VPlan &Plan, ElementCount BestVF, unsigned BestUF,
PredicatedScalarEvolution &PSE) {
diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.h b/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
index 708ea4185e1cb..f5f415f379915 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
+++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
@@ -314,6 +314,19 @@ struct VPlanTransforms {
/// plan using noalias metadata.
static void hoistInvariantLoads(VPlan &Plan);
+ /// Hoist predicated loads from the same address to the loop entry block, if
+ /// they are guaranteed to execute on both paths (i.e., in replicate regions
+ /// with complementary masks P and NOT P).
+ static void hoistPredicatedLoads(VPlan &Plan, ScalarEvolution &SE,
+ const Loop *L);
+
+ /// Sink predicated stores to the same address with complementary predicates
+ /// (P and NOT P) to an unconditional store with select recipes for the
+ /// stored values. This eliminates branching overhead when all paths
+ /// unconditionally store to the same location.
+ static void sinkPredicatedStores(VPlan &Plan, ScalarEvolution &SE,
+ const Loop *L);
+
// Materialize vector trip counts for constants early if it can simply be
// computed as (Original TC / VF * UF) * VF * UF.
static void
diff --git a/llvm/test/Transforms/LoopVectorize/hoist-predicated-loads-with-predicated-stores.ll b/llvm/test/Transforms/LoopVectorize/hoist-predicated-loads-with-predicated-stores.ll
index ac767c68e0b25..c065fe96ab26c 100644
--- a/llvm/test/Transforms/LoopVectorize/hoist-predicated-loads-with-predicated-stores.ll
+++ b/llvm/test/Transforms/LoopVectorize/hoist-predicated-loads-with-predicated-stores.ll
@@ -21,83 +21,27 @@ define void @test_stores_noalias_via_rt_checks_after_loads(ptr %dst, ptr %src, p
; CHECK: [[VECTOR_PH]]:
; CHECK-NEXT: br label %[[VECTOR_BODY:.*]]
; CHECK: [[VECTOR_BODY]]:
-; CHECK-NEXT: [[INDEX:%.*]] = phi i32 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[PRED_STORE_CONTINUE17:.*]] ]
+; CHECK-NEXT: [[INDEX:%.*]] = phi i32 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[VECTOR_BODY]] ]
; CHECK-NEXT: [[TMP4:%.*]] = add i32 [[INDEX]], 0
; CHECK-NEXT: [[TMP5:%.*]] = add i32 [[INDEX]], 1
; CHECK-NEXT: [[TMP6:%.*]] = getelementptr inbounds i32, ptr [[COND]], i32 [[TMP4]]
; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <2 x i32>, ptr [[TMP6]], align 4, !alias.scope [[META0:![0-9]+]]
; CHECK-NEXT: [[TMP7:%.*]] = icmp ule <2 x i32> [[WIDE_LOAD]], splat (i32 11)
-; CHECK-NEXT: [[TMP8:%.*]] = xor <2 x i1> [[TMP7]], splat (i1 true)
-; CHECK-NEXT: [[TMP9:%.*]] = extractelement <2 x i1> [[TMP8]], i32 0
-; CHECK-NEXT: br i1 [[TMP9]], label %[[PRED_LOAD_IF:.*]], label %[[PRED_LOAD_CONTINUE:.*]]
-; CHECK: [[PRED_LOAD_IF]]:
; CHECK-NEXT: [[TMP10:%.*]] = getelementptr inbounds i32, ptr [[SRC]], i32 [[TMP4]]
-; CHECK-NEXT: [[TMP11:%.*]] = load i32, ptr [[TMP10]], align 4, !alias.scope [[META3:![0-9]+]]
-; CHECK-NEXT: [[TMP12:%.*]] = insertelement <2 x i32> poison, i32 [[TMP11]], i32 0
-; CHECK-NEXT: br label %[[PRED_LOAD_CONTINUE]]
-; CHECK: [[PRED_LOAD_CONTINUE]]:
-; CHECK-NEXT: [[TMP13:%.*]] = phi <2 x i32> [ poison, %[[VECTOR_BODY]] ], [ [[TMP12]], %[[PRED_LOAD_IF]] ]
-; CHECK-NEXT: [[TMP14:%.*]] = extractelement <2 x i1> [[TMP8]], i32 1
-; CHECK-NEXT: br i1 [[TMP14]], label %[[PRED_LOAD_IF6:.*]], label %[[PRED_LOAD_CONTINUE7:.*]]
-; CHECK: [[PRED_LOAD_IF6]]:
; CHECK-NEXT: [[TMP15:%.*]] = getelementptr inbounds i32, ptr [[SRC]], i32 [[TMP5]]
+; CHECK-NEXT: [[TMP9:%.*]] = load i32, ptr [[TMP10]], align 4, !alias.scope [[META3:![0-9]+]]
; CHECK-NEXT: [[TMP16:%.*]] = load i32, ptr [[TMP15]], align 4, !alias.scope [[META3]]
+; CHECK-NEXT: [[TMP13:%.*]] = insertelement <2 x i32> poison, i32 [[TMP9]], i32 0
; CHECK-NEXT: [[TMP17:%.*]] = insertelement <2 x i32> [[TMP13]], i32 [[TMP16]], i32 1
-; CHECK-NEXT: br label %[[PRED_LOAD_CONTINUE7]]
-; CHECK: [[PRED_LOAD_CONTINUE7]]:
-; CHECK-NEXT: [[TMP18:%.*]] = phi <2 x i32> [ [[TMP13]], %[[PRED_LOAD_CONTINUE]] ], [ [[TMP17]], %[[PRED_LOAD_IF6]] ]
-; CHECK-NEXT: [[TMP19:%.*]] = sub <2 x i32> [[TMP18]], splat (i32 5)
-; CHECK-NEXT: [[TMP20:%.*]] = extractelement <2 x i1> [[TMP8]], i32 0
-; CHECK-NEXT: br i1 [[TMP20]], label %[[PRED_STORE_IF:.*]], label %[[PRED_STORE_CONTINUE:.*]]
-; CHECK: [[PRED_STORE_IF]]:
+; CHECK-NEXT: [[TMP19:%.*]] = sub <2 x i32> [[TMP17]], splat (i32 5)
; CHECK-NEXT: [[TMP21:%.*]] = getelementptr inbounds i32, ptr [[DST]], i32 [[TMP4]]
-; CHECK-NEXT: [[TMP22:%.*]] = extractelement <2 x i32> [[TMP19]], i32 0
-; CHECK-NEXT: store i32 [[TMP22]], ptr [[TMP21]], align 4, !alias.scope [[META5:![0-9]+]], !noalias [[META7:![0-9]+]]
-; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE]]
-; CHECK: [[PRED_STORE_CONTINUE]]:
-; CHECK-NEXT: [[TMP23:%.*]] = extractelement <2 x i1> [[TMP8]], i32 1
-; CHECK-NEXT: br i1 [[TMP23]], label %[[PRED_STORE_IF8:.*]], label %[[PRED_STORE_CONTINUE9:.*]]
-; CHECK: [[PRED_STORE_IF8]]:
; CHECK-NEXT: [[TMP24:%.*]] = getelementptr inbounds i32, ptr [[DST]], i32 [[TMP5]]
-; CHECK-NEXT: [[TMP25:%.*]] = extractelement <2 x i32> [[TMP19]], i32 1
-; CHECK-NEXT: store i32 [[TMP25]], ptr [[TMP24]], align 4, !alias.scope [[META5]], !noalias [[META7]]
-; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE9]]
-; CHECK: [[PRED_STORE_CONTINUE9]]:
-; CHECK-NEXT: [[TMP26:%.*]] = extractelement <2 x i1> [[TMP7]], i32 0
-; CHECK-NEXT: br i1 [[TMP26]], label %[[PRED_LOAD_IF10:.*]], label %[[PRED_LOAD_CONTINUE11:.*]]
-; CHECK: [[PRED_LOAD_IF10]]:
-; CHECK-NEXT: [[TMP27:%.*]] = getelementptr inbounds i32, ptr [[SRC]], i32 [[TMP4]]
-; CHECK-NEXT: [[TMP28:%.*]] = load i32, ptr [[TMP27]], align 4, !alias.scope [[META3]]
-; CHECK-NEXT: [[TMP29:%.*]] = insertelement <2 x i32> poison, i32 [[TMP28]], i32 0
-; CHECK-NEXT: br label %[[PRED_LOAD_CONTINUE11]]
-; CHECK: [[PRED_LOAD_CONTINUE11]]:
-; CHECK-NEXT: [[TMP30:%.*]] = phi <2 x i32> [ poison, %[[PRED_STORE_CONTINUE9]] ], [ [[TMP29]], %[[PRED_LOAD_IF10]] ]
-; CHECK-NEXT: [[TMP31:%.*]] = extractelement <2 x i1> [[TMP7]], i32 1
-; CHECK-NEXT: br i1 [[TMP31]], label %[[PRED_LOAD_IF12:.*]], label %[[PRED_LOAD_CONTINUE13:.*]]
-; CHECK: [[PRED_LOAD_IF12]]:
-; CHECK-NEXT: [[TMP32:%.*]] = getelementptr inbounds i32, ptr [[SRC]], i32 [[TMP5]]
-; CHECK-NEXT: [[TMP33:%.*]] = load i32, ptr [[TMP32]], align 4, !alias.scope [[META3]]
-; CHECK-NEXT: [[TMP34:%.*]] = insertelement <2 x i32> [[TMP30]], i32 [[TMP33]], i32 1
-; CHECK-NEXT: br label %[[PRED_LOAD_CONTINUE13]]
-; CHECK: [[PRED_LOAD_CONTINUE13]]:
-; CHECK-NEXT: [[TMP35:%.*]] = phi <2 x i32> [ [[TMP30]], %[[PRED_LOAD_CONTINUE11]] ], [ [[TMP34]], %[[PRED_LOAD_IF12]] ]
-; CHECK-NEXT: [[TMP36:%.*]] = add <2 x i32> [[TMP35]], splat (i32 10)
-; CHECK-NEXT: [[TMP37:%.*]] = extractelement <2 x i1> [[TMP7]], i32 0
-; CHECK-NEXT: br i1 [[TMP37]], label %[[PRED_STORE_IF14:.*]], label %[[PRED_STORE_CONTINUE15:.*]]
-; CHECK: [[PRED_STORE_IF14]]:
-; CHECK-NEXT: [[TMP38:%.*]] = getelementptr inbounds i32, ptr [[DST]], i32 [[TMP4]]
-; CHECK-NEXT: [[TMP39:%.*]] = extractelement <2 x i32> [[TMP36]], i32 0
-; CHECK-NEXT: store i32 [[TMP39]], ptr [[TMP38]], align 4, !alias.scope [[META5]], !noalias [[META7]]
-; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE15]]
-; CHECK: [[PRED_STORE_CONTINUE15]]:
-; CHECK-NEXT: [[TMP40:%.*]] = extractelement <2 x i1> [[TMP7]], i32 1
-; CHECK-NEXT: br i1 [[TMP40]], label %[[PRED_STORE_IF16:.*]], label %[[PRED_STORE_CONTINUE17]]
-; CHECK: [[PRED_STORE_IF16]]:
-; CHECK-NEXT: [[TMP41:%.*]] = getelementptr inbounds i32, ptr [[DS...
[truncated]
✅ With the latest revision this PR passed the C/C++ code formatter.
Extend the logic to hoist predicated loads (llvm#168373) to sink predicated stores with complementary masks in a similar fashion. The patch refactors some of the existing logic for legality checks to be shared between hoisting and sinking, and adds a new sinking transform on top. With respect to the legality checks, for sinking stores the code also checks for loads that may alias, not only stores.
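As a rough illustration (not taken from the patch), the effect of store sinking on the kind of loop sketched earlier can be thought of at source level as replacing the two conditional stores with a select of the stored values followed by one unconditional store. Names and constants below are illustrative only.

```cpp
// Illustrative sketch only: before the transform, each arm stores to dst[i]
// under complementary masks; after sinking, the stored value is chosen with
// a select and written by a single unpredicated store.
void sink_example(int *dst, const int *src, const int *cond, int n) {
  for (int i = 0; i < n; ++i) {
    int v = src[i];
    // Before: if (cond[i] <= 11) dst[i] = v + 10; else dst[i] = v - 5;
    int stored = (cond[i] <= 11) ? v + 10 : v - 5; // select on mask P
    dst[i] = stored;                               // unconditional store
  }
}
```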