[NewGVN][1/3] Load coercion between load and store #68659

Open · wants to merge 1 commit into base: main

Conversation

@kmitropoulou (Contributor) commented Oct 10, 2023

Load coercion consists of two phases:

  1. Collection of the load instructions that can be optimized with load coercion.
    We collect pairs of a candidate load and its depending instructions. The candidate
    load is the load that will be eliminated by the value that we extract from
    the depending instruction. The candidate load can be eliminated because its
    memory location overlaps with the memory location of the depending instruction.
    For example, in the following snippet, the candidate load is %V2 and the
    depending instruction is the store.
```
Before load coercion               After load coercion
BB1:                               BB1:
 store i32 100, ptr %P              store i32 100, ptr %P
 %V1 = ...                   =>     %V1 = ...
 %V2 = load i32, ptr %P             %V3 = add i32 %V1, 100
 %V3 = add i32 %V1, %V2
```
  2. Code generation for load coercion: This phase updates the IR by eliminating
    the candidate load and by updating its uses.

This patch implements load coercion between a candidate load and a depending store
instruction. The follow-up patches implement load coercion support for loads whose
defining accesses are live-on-entry definitions and MemoryPhi definitions.
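To make the value-extraction step of phase 2 concrete, here is a minimal, self-contained C++ sketch (not the patch's code): once the analysis proves the load's bytes lie inside the clobbering store, the loaded value is carved out of the stored operand at a byte offset instead of being re-read from memory. The helper name and the little-endian layout are assumptions for illustration only.

```
#include <cstdint>
#include <cstring>
#include <iostream>

// Hypothetical helper: given the 32-bit value of a clobbering store and the
// byte offset of a narrower load inside it, compute the forwarded value
// without touching memory (little-endian layout assumed).
uint8_t extractLoadedByte(uint32_t StoredValue, unsigned ByteOffset) {
  uint8_t Bytes[4];
  std::memcpy(Bytes, &StoredValue, sizeof(Bytes));
  return Bytes[ByteOffset];
}

int main() {
  // Conceptually: store i32 0x11223344 to %P, then load i8 from %P plus 1.
  // On a little-endian target the load is forwarded as 0x33.
  std::cout << std::hex << unsigned(extractLoadedByte(0x11223344u, 1)) << "\n";
}
```

In the patch itself this role is played by analyzeLoadFromClobberingStore (which computes the offset) and getValueForLoad (which emits the extraction sequence), as the diff below shows.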

@llvmbot (Collaborator) commented Oct 10, 2023

@llvm/pr-subscribers-llvm-transforms

Author: Konstantina Mitropoulou (kmitropoulou)

Changes

Patch is 35.88 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/68659.diff

3 Files Affected:

  • (modified) llvm/lib/Transforms/Scalar/NewGVN.cpp (+439-39)
  • (added) llvm/test/Transforms/NewGVN/load_coercion_between_store_and_load.ll (+341)
  • (modified) llvm/test/Transforms/NewGVN/pr14166-xfail.ll (-1)
diff --git a/llvm/lib/Transforms/Scalar/NewGVN.cpp b/llvm/lib/Transforms/Scalar/NewGVN.cpp
index 19ac9526b5f88b6..f4c84bd747bb417 100644
--- a/llvm/lib/Transforms/Scalar/NewGVN.cpp
+++ b/llvm/lib/Transforms/Scalar/NewGVN.cpp
@@ -76,6 +76,7 @@
 #include "llvm/Analysis/InstructionSimplify.h"
 #include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/Analysis/MemorySSA.h"
+#include "llvm/Analysis/MemorySSAUpdater.h"
 #include "llvm/Analysis/TargetLibraryInfo.h"
 #include "llvm/Analysis/ValueTracking.h"
 #include "llvm/IR/Argument.h"
@@ -154,6 +155,10 @@ static cl::opt<bool> EnableStoreRefinement("enable-store-refinement",
 static cl::opt<bool> EnablePhiOfOps("enable-phi-of-ops", cl::init(true),
                                     cl::Hidden);
 
+// Enables load coercion for non-constant values.
+static cl::opt<bool> EnableLoadCoercion("enable-load-coercion", cl::init(true),
+                                        cl::Hidden);
+
 //===----------------------------------------------------------------------===//
 //                                GVN Pass
 //===----------------------------------------------------------------------===//
@@ -653,6 +658,16 @@ class NewGVN {
   // Deletion info.
   SmallPtrSet<Instruction *, 8> InstructionsToErase;
 
+  // Map candidate load to their depending instructions.
+  mutable std::map<Value *, DenseSet<std::pair<Instruction *, BasicBlock *>>>
+      LoadCoercion;
+
+  // Keep newly generated loads.
+  SmallVector<Instruction *, 2> NewLoadsInLoadCoercion;
+
+  // Keep newly generated instructions.
+  SmallVector<Instruction *, 2> NewlyGeneratedInsns;
+
 public:
   NewGVN(Function &F, DominatorTree *DT, AssumptionCache *AC,
          TargetLibraryInfo *TLI, AliasAnalysis *AA, MemorySSA *MSSA,
@@ -776,9 +791,9 @@ class NewGVN {
   ExprResult checkExprResults(Expression *, Instruction *, Value *) const;
   ExprResult performSymbolicEvaluation(Instruction *,
                                        SmallPtrSetImpl<Value *> &) const;
-  const Expression *performSymbolicLoadCoercion(Type *, Value *, LoadInst *,
-                                                Instruction *,
-                                                MemoryAccess *) const;
+  const Expression *createLoadExpAndUpdateMemUses(LoadInst *, Value *,
+                                                  MemoryAccess *,
+                                                  MemoryAccess *) const;
   const Expression *performSymbolicLoadEvaluation(Instruction *) const;
   const Expression *performSymbolicStoreEvaluation(Instruction *) const;
   ExprResult performSymbolicCallEvaluation(Instruction *) const;
@@ -853,6 +868,7 @@ class NewGVN {
   // Utilities.
   void cleanupTables();
   std::pair<unsigned, unsigned> assignDFSNumbers(BasicBlock *, unsigned);
+  unsigned updateDFSNumbers(unsigned);
   void updateProcessedCount(const Value *V);
   void verifyMemoryCongruency() const;
   void verifyIterationSettled(Function &F);
@@ -893,6 +909,37 @@ class NewGVN {
   // Debug counter info.  When verifying, we have to reset the value numbering
   // debug counter to the same state it started in to get the same results.
   int64_t StartingVNCounter = 0;
+
+  // The following functions are used in load coercion:
+  // Try to add the load along with the depending instruction(s) in
+  // LoadCoercion map.
+  bool tryAddLoadDepInsnIntoLoadCoercionMap(LoadInst *, Instruction *,
+                                            BasicBlock *) const;
+  // Collect the load instructions that can be optimized with load coercion.
+  // The filtering of the load instructions is based on the type of their memory
+  // access.
+  bool performSymbolicLoadCoercionForNonConstantMemoryDef(LoadInst *,
+                                                          StoreInst *,
+                                                          MemoryAccess *) const;
+  const Expression *performSymbolicLoadCoercionForConstantMemoryDef(
+      Type *, Value *, LoadInst *, Instruction *, MemoryAccess *) const;
+  // Code generation for load coercion. Replaces the load with the right
+  // instruction or the right sequence of instructions.
+  bool implementLoadCoercion();
+  // Update MemorySSA with the load instructions that are emitted during load
+  // coercion.
+  void updateMemorySSA(Instruction *, Instruction *);
+  // Extract the value that will replace the load from the depending
+  // instruction.
+  Value *getExtractedValue(LoadInst *, Instruction *);
+  // If load coercion is successful, the uses of the optimized load might need
+  // to be added to new congruence classes in order to optimize the code
+  // further. For this reason, we run value numbering for all the uses of the
+  // optimized load. If load coercion has failed, then we need to add the load
+  // (and its uses) to the right congruence class.
+  void updateUsesAfterLoadCoercionImpl(LoadInst *,
+                                       SmallVectorImpl<Instruction *> &);
+  void updateUsesAfterLoadCoercion(LoadInst *, Value *);
 };
 
 } // end anonymous namespace
@@ -1439,12 +1486,122 @@ const Expression *NewGVN::performSymbolicStoreEvaluation(Instruction *I) const {
   return createStoreExpression(SI, StoreAccess);
 }
 
+// A load can have one or more dependencies as the following examples show:
+//
+// Example 1:
+//  BB1:
+//   ...
+//   store i32 %V1, ptr %P
+//   ...
+//   %V2 = load i32, ptr %P
+//   ...
+//
+// Example 2:
+//  BB1:                       BB2:
+//   store i32 %V1, ptr %P     %V2 = load i32, ptr %P
+//   br label %BB3              br label %BB3
+//                      \      /
+//                     BB3:
+//                      %V3 = load i32, ptr %P
+//
+// In the first example, the load (%V2) has only one dependency. In the second
+// example, the load (%V3) has two dependencies. Therefore, we add the load
+// along with its two dependencies in LoadCoercion map. However, this is not
+// always the case as it is shown below:
+//
+// Example 3:
+//                   BB1:
+//                    %V1 = load <4 x i32>, ptr %P
+//                    br i1 %cond, label %BB2, label %BB3
+//                   /                          \
+//   BB2:                                      BB3:
+//    %V2 = load <2 x i32>, ptr %P              %V3 = load i32, ptr %P
+//    br label %BB4                             br label %BB4
+//		     \                         /
+//                  BB4:
+//                   %V4 = load i32, ptr %P
+//
+// The %V4 load can be optimized by any of the loads (%V1, %V2, %V3). The loads
+// %V2 and %V3 can also be optimized by %V1. For this reason, we need to do an
+// extra check before we add the load in the map. Hence, we check if the load is
+// already in the map and if the existing depending instruction dominates the
+// current depending instruction. If so, then we do not add the new depending
+// instruction in LoadCoercion map. If the current depending instruction
+// dominates the existing depending instruction, then we remove the existing
+// depending instruction from LoadCoercion map and we add the current depending
+// instruction. In Example 3, the %V4 load has only one dependency (%V1) and we
+// add only this one in LoadCoercion map.
+bool NewGVN::tryAddLoadDepInsnIntoLoadCoercionMap(
+    LoadInst *LI, Instruction *CurrentDepI, BasicBlock *CurrentDepIBB) const {
+  // Can't forward from non-atomic to atomic without violating memory model.
+  if (LI->isAtomic() > CurrentDepI->isAtomic())
+    return false;
+
+  if (auto *DepLI = dyn_cast<LoadInst>(CurrentDepI))
+    if (LI->getAlign() < DepLI->getAlign())
+      return false;
+
+  if (auto *DepSI = dyn_cast<StoreInst>(CurrentDepI))
+    if (LI->getAlign() < DepSI->getAlign())
+      return false;
+
+  // Check if LI already exists in LoadCoercion map.
+  auto It = LoadCoercion.find(LI);
+  if (It != LoadCoercion.end()) {
+    auto &ExistingDepInsns = It->second;
+    // Iterate over all the existing depending instructions of LI.
+    for (auto &P : llvm::make_early_inc_range(ExistingDepInsns)) {
+      Instruction *ExistingDepI = P.first;
+      if (MSSAWalker->getClobberingMemoryAccess(getMemoryAccess(CurrentDepI)) ==
+              MSSAWalker->getClobberingMemoryAccess(
+                  getMemoryAccess(ExistingDepI)) &&
+          isa<LoadInst>(ExistingDepI) && isa<LoadInst>(CurrentDepI)) {
+        // If the existing depending instruction dominates the current depending
+        // instruction, then we should not add the current depending instruction
+        // in LoadCoercion map (Example 3).
+        if (DT->dominates(ExistingDepI, CurrentDepI))
+          return true;
+
+        // If the current depending instruction dominates the existing one, then
+        // we remove the existing depending instruction from the LoadCoercion
+        // map. Next, we add the current depending instruction in LoadCoercion
+        // map.
+        if (DT->dominates(CurrentDepI, ExistingDepI))
+          ExistingDepInsns.erase(P);
+      }
+    }
+  }
+  // Add the load and the corresponding depending instruction in LoadCoercion
+  // map.
+  LoadCoercion[LI].insert(std::make_pair(CurrentDepI, CurrentDepIBB));
+  return true;
+}
+
+// Find load coercion opportunities between load (LI) and store instructions
+// (DepSI).
+bool NewGVN::performSymbolicLoadCoercionForNonConstantMemoryDef(
+    LoadInst *LI, StoreInst *DepSI, MemoryAccess *DefiningAccess) const {
+  Type *LoadType = LI->getType();
+  bool IsLoadCoercionCandidate = false;
+  if (LI->isAtomic() > DepSI->isAtomic() ||
+      LoadType == DepSI->getValueOperand()->getType())
+    return false;
+
+  int Offset = analyzeLoadFromClobberingStore(
+      LoadType, lookupOperandLeader(LI->getPointerOperand()), DepSI, DL);
+  if (Offset >= 0) {
+    IsLoadCoercionCandidate |=
+        tryAddLoadDepInsnIntoLoadCoercionMap(LI, DepSI, DepSI->getParent());
+  }
+
+  return IsLoadCoercionCandidate;
+}
+
 // See if we can extract the value of a loaded pointer from a load, a store, or
 // a memory instruction.
-const Expression *
-NewGVN::performSymbolicLoadCoercion(Type *LoadType, Value *LoadPtr,
-                                    LoadInst *LI, Instruction *DepInst,
-                                    MemoryAccess *DefiningAccess) const {
+const Expression *NewGVN::performSymbolicLoadCoercionForConstantMemoryDef(
+    Type *LoadType, Value *LoadPtr, LoadInst *LI, Instruction *DepInst,
+    MemoryAccess *DefiningAccess) const {
   assert((!LI || LI->isSimple()) && "Not a simple load");
   if (auto *DepSI = dyn_cast<StoreInst>(DepInst)) {
     // Can't forward from non-atomic to atomic without violating memory model.
@@ -1464,21 +1621,6 @@ NewGVN::performSymbolicLoadCoercion(Type *LoadType, Value *LoadPtr,
         }
       }
     }
-  } else if (auto *DepLI = dyn_cast<LoadInst>(DepInst)) {
-    // Can't forward from non-atomic to atomic without violating memory model.
-    if (LI->isAtomic() > DepLI->isAtomic())
-      return nullptr;
-    int Offset = analyzeLoadFromClobberingLoad(LoadType, LoadPtr, DepLI, DL);
-    if (Offset >= 0) {
-      // We can coerce a constant load into a load.
-      if (auto *C = dyn_cast<Constant>(lookupOperandLeader(DepLI)))
-        if (auto *PossibleConstant =
-                getConstantValueForLoad(C, Offset, LoadType, DL)) {
-          LLVM_DEBUG(dbgs() << "Coercing load from load " << *LI
-                            << " to constant " << *PossibleConstant << "\n");
-          return createConstantExpression(PossibleConstant);
-        }
-    }
   } else if (auto *DepMI = dyn_cast<MemIntrinsic>(DepInst)) {
     int Offset = analyzeLoadFromClobberingMemInst(LoadType, LoadPtr, DepMI, DL);
     if (Offset >= 0) {
@@ -1510,11 +1652,24 @@ NewGVN::performSymbolicLoadCoercion(Type *LoadType, Value *LoadPtr,
       return createConstantExpression(UndefValue::get(LoadType));
   } else if (auto *InitVal =
                  getInitialValueOfAllocation(DepInst, TLI, LoadType))
-      return createConstantExpression(InitVal);
+    return createConstantExpression(InitVal);
 
   return nullptr;
 }
 
+const Expression *
+NewGVN::createLoadExpAndUpdateMemUses(LoadInst *LI, Value *LoadAddressLeader,
+                                      MemoryAccess *OriginalAccess,
+                                      MemoryAccess *DefiningAccess) const {
+  const auto *LE = createLoadExpression(LI->getType(), LoadAddressLeader, LI,
+                                        DefiningAccess);
+  // If our MemoryLeader is not our defining access, add a use to the
+  // MemoryLeader, so that we get reprocessed when it changes.
+  if (LE->getMemoryLeader() != DefiningAccess)
+    addMemoryUsers(LE->getMemoryLeader(), OriginalAccess);
+  return LE;
+}
+
 const Expression *NewGVN::performSymbolicLoadEvaluation(Instruction *I) const {
   auto *LI = cast<LoadInst>(I);
 
@@ -1531,6 +1686,22 @@ const Expression *NewGVN::performSymbolicLoadEvaluation(Instruction *I) const {
   MemoryAccess *DefiningAccess =
       MSSAWalker->getClobberingMemoryAccess(OriginalAccess);
 
+  // Do not apply load coercion to load instructions that are candidates of
+  // phi-of-ops optimization.
+  if (TempToBlock.count(LI))
+    return createLoadExpAndUpdateMemUses(LI, LoadAddressLeader, OriginalAccess,
+                                         DefiningAccess);
+
+  // Do not apply load coercion to load instructions that are generated during
+  // load coercion.
+  auto It = llvm::find(NewLoadsInLoadCoercion, LI);
+  if (It != NewLoadsInLoadCoercion.end())
+    return createLoadExpAndUpdateMemUses(LI, LoadAddressLeader, OriginalAccess,
+                                         DefiningAccess);
+
+  // Check if we can apply load coercion.
+  bool IsLoadCoercionCandidate = false;
+
   if (!MSSA->isLiveOnEntryDef(DefiningAccess)) {
     if (auto *MD = dyn_cast<MemoryDef>(DefiningAccess)) {
       Instruction *DefiningInst = MD->getMemoryInst();
@@ -1542,19 +1713,34 @@ const Expression *NewGVN::performSymbolicLoadEvaluation(Instruction *I) const {
       // certain memory operations that cause the memory to have a fixed value
       // (IE things like calloc).
       if (const auto *CoercionResult =
-              performSymbolicLoadCoercion(LI->getType(), LoadAddressLeader, LI,
-                                          DefiningInst, DefiningAccess))
+              performSymbolicLoadCoercionForConstantMemoryDef(
+                  LI->getType(), LoadAddressLeader, LI, DefiningInst,
+                  DefiningAccess))
         return CoercionResult;
+
+      if (EnableLoadCoercion) {
+        if (auto *DepSI = dyn_cast<StoreInst>(DefiningInst)) {
+          if (!isa<Constant>(lookupOperandLeader(DepSI->getValueOperand()))) {
+            IsLoadCoercionCandidate =
+                performSymbolicLoadCoercionForNonConstantMemoryDef(
+                    LI, DepSI, DefiningAccess);
+          }
+        }
+      }
     }
   }
 
-  const auto *LE = createLoadExpression(LI->getType(), LoadAddressLeader, LI,
-                                        DefiningAccess);
-  // If our MemoryLeader is not our defining access, add a use to the
-  // MemoryLeader, so that we get reprocessed when it changes.
-  if (LE->getMemoryLeader() != DefiningAccess)
-    addMemoryUsers(LE->getMemoryLeader(), OriginalAccess);
-  return LE;
+  // If LI is a candidate for load coercion, then we do not create a load
+  // expression and we remove it from PHINodeUses which keeps the candidates of
+  // phi-of-ops optimization.
+  if (EnableLoadCoercion && IsLoadCoercionCandidate) {
+    if (PHINodeUses.count(LI))
+      const_cast<NewGVN *>(this)->PHINodeUses.erase(LI);
+    return nullptr;
+  }
+  // Otherwise, we create a load expression.
+  return createLoadExpAndUpdateMemUses(LI, LoadAddressLeader, OriginalAccess,
+                                       DefiningAccess);
 }
 
 NewGVN::ExprResult
@@ -3021,6 +3207,17 @@ std::pair<unsigned, unsigned> NewGVN::assignDFSNumbers(BasicBlock *B,
   return std::make_pair(Start, End);
 }
 
+unsigned NewGVN::updateDFSNumbers(unsigned ICount) {
+  // Now a standard depth first ordering of the domtree is equivalent to RPO.
+  for (auto DTN : depth_first(DT->getRootNode())) {
+    BasicBlock *B = DTN->getBlock();
+    const auto &BlockRange = assignDFSNumbers(B, ICount);
+    BlockInstRange.insert({B, BlockRange});
+    ICount += BlockRange.second - BlockRange.first;
+  }
+  return ICount;
+}
+
 void NewGVN::updateProcessedCount(const Value *V) {
 #ifndef NDEBUG
   if (ProcessedCount.count(V) == 0) {
@@ -3458,13 +3655,7 @@ bool NewGVN::runGVN() {
       });
   }
 
-  // Now a standard depth first ordering of the domtree is equivalent to RPO.
-  for (auto *DTN : depth_first(DT->getRootNode())) {
-    BasicBlock *B = DTN->getBlock();
-    const auto &BlockRange = assignDFSNumbers(B, ICount);
-    BlockInstRange.insert({B, BlockRange});
-    ICount += BlockRange.second - BlockRange.first;
-  }
+  ICount = updateDFSNumbers(ICount);
   initializeCongruenceClasses(F);
 
   TouchedInstructions.resize(ICount);
@@ -3485,6 +3676,15 @@ bool NewGVN::runGVN() {
   verifyIterationSettled(F);
   verifyStoreExpressions();
 
+  if (EnableLoadCoercion && implementLoadCoercion()) {
+    // Update the newly generated instructions with the correct DFS numbers.
+    // TODO: Update DFS numbers faster.
+    InstrDFS.clear();
+    DFSToInstr.clear();
+    RevisitOnReachabilityChange.clear();
+    ICount = updateDFSNumbers(0);
+  }
+
   Changed |= eliminateInstructions(F);
 
   // Delete all instructions marked for deletion.
@@ -3821,6 +4021,206 @@ Value *NewGVN::findPHIOfOpsLeader(const Expression *E,
   return nullptr;
 }
 
+// Update MemorySSA for the newly emitted load instruction.
+void NewGVN::updateMemorySSA(Instruction *LoadToOptimize,
+                             Instruction *NewLoad) {
+  MemorySSAUpdater MemSSAUpdater(MSSA);
+  MemoryAccess *DefiningAccess = MSSA->getLiveOnEntryDef();
+  MemoryAccess *NewAccess = MemSSAUpdater.createMemoryAccessInBB(
+      NewLoad, DefiningAccess, NewLoad->getParent(),
+      MemorySSA::BeforeTerminator);
+  if (auto *NewDef = dyn_cast<MemoryDef>(NewAccess))
+    MemSSAUpdater.insertDef(NewDef, /*RenameUses=*/true);
+  else
+    MemSSAUpdater.insertUse(cast<MemoryUse>(NewAccess),
+                            /*RenameUses=*/true);
+}
+
+// Extract the correct value from the depending instruction.
+Value *NewGVN::getExtractedValue(LoadInst *LI, Instruction *DepI) {
+
+  Type *LoadTy = LI->getType();
+  Value *NewValue = nullptr;
+  Instruction *InsertPtr = nullptr;
+  // Emit the instructions that extract the coerced value from the depending
+  // instruction.
+  if (auto *Store = dyn_cast<StoreInst>(DepI)) {
+    int Offset = analyzeLoadFromClobberingStore(LoadTy, LI->getPointerOperand(),
+                                                Store, DL);
+    InsertPtr = Store->getNextNode();
+    NewValue = getValueForLoad(Store->getValueOperand(), Offset, LoadTy,
+                               InsertPtr, DL);
+  } else if (LoadInst *Load = dyn_cast<LoadInst>(DepI)) {
+    int Offset = analyzeLoadFromClobberingLoad(LoadTy, LI->getPointerOperand(),
+                                               Load, DL);
+    InsertPtr = Load->getNextNode();
+    NewValue = getValueForLoad(Load, Offset, LoadTy, InsertPtr, DL);
+  }
+
+  // Get the newly generated instructions and add them to NewLoadsInLoadCoercion
+  // and NewlyGeneratedInsns.
+  if (!isa<Constant>(NewValue) && !isa<Argument>(NewValue))
+    for (Instruction *CurInsn = DepI->getNextNode(); CurInsn != InsertPtr;
+         CurInsn = CurInsn->getNextNode()) {
+      if (LoadInst *NewLI = dyn_cast<LoadInst>(CurInsn)) {
+        updateMemorySSA(LI, NewLI);
+        NewLoadsInLoadCoercion.push_back(LI);
+      }
+      NewlyGeneratedInsns.push_back(CurInsn);
+    }
+
+  return NewValue;
+}
+
+void NewGVN::updateUsesAfterLoadCoercionImpl(
+    LoadInst *LI, SmallVectorImpl<Instruction *> &LIUses) {
+  // Run value numbering for the users of the candidate load instruction.
+  while (!LIUses.empty()) {
+    Instruction *I = LIUses.front();
+    assert(I != LI &&
+           "Vanity check that we do not process the optimized load.\n");
+    LIUses.erase(&*(LIUses.begin()));
+    if (InstructionsToErase.count(I))
+      continue;
+    CongruenceClass *OrigClass = ValueToClass.lookup(I);
+    valueNumberInstruction(I);
+    updateProcessedCount(I);
+    CongruenceClass *NewClass = ValueToClass.lookup(...
[truncated]

Comment on lines 662 to 664
mutable std::map<Value *, DenseSet<std::pair<Instruction *, BasicBlock *>>>
LoadCoercion;

Contributor
I don't quite understand the logic of keeping a set of dependent instructions. AFAICT the size of this set will always be at most 1.

Contributor Author
In this patch, the load will only have one dependency. But this is not the case for the follow-up patches, e.g.:

```
 Before load coercion
BB1:                           BB2:                           BB3:
 1 = MemoryDef(liveOnEntry)     2 = MemoryDef(liveOnEntry)     3 = MemoryDef(liveOnEntry)
 store i32 100, ptr %P          store i32 500, ptr %P          store i32 1000, ptr %P
 br label %BB4                  br label %BB4                  br label %BB4
         \                            |                              /
                               BB4:
                                 4 = MemoryPhi({BB1,1},{BB2,2},{BB3,3})
                                %V = load i32, ptr %P

After load coercion
BB1:                           BB2:                           BB3:
 1 = MemoryDef(liveOnEntry)     2 = MemoryDef(liveOnEntry)     3 = MemoryDef(liveOnEntry)
 store i32 100, ptr %P          store i32 500, ptr %P          store i32 1000, ptr %P
 br label %BB4                  br label %BB4                  br label %BB4
         \                            |                              /
                               BB4:
                                %V = phi i32 [ 100, %BB1 ], [ 500, %BB2 ], [ 1000, %BB3 ]
```

Here, the load (%V) has three dependencies. Hence, the set will have the following pairs: (store i32 100, ptr %P, %BB1), (store i32 500, ptr %P, %BB2) and (store i32 1000, ptr %P, %BB3).

The intention is to commit all the patches together.
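For illustration only, a standalone sketch of the map shape being described, with strings standing in for Instruction* and BasicBlock*: in the MemoryPhi example above, the entry for %V carries one (store, block) pair per predecessor.

```
#include <map>
#include <set>
#include <string>
#include <utility>

// Toy stand-in for the patch's LoadCoercion map: candidate load ->
// set of (depending instruction, defining block) pairs.
using DepSet = std::set<std::pair<std::string, std::string>>;

int main() {
  std::map<std::string, DepSet> LoadCoercion;
  // One pair per incoming block of the MemoryPhi, as in the example above.
  LoadCoercion["%V"] = {{"store i32 100, ptr %P", "BB1"},
                        {"store i32 500, ptr %P", "BB2"},
                        {"store i32 1000, ptr %P", "BB3"}};
  return LoadCoercion["%V"].size() == 3 ? 0 : 1;
}
```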

Comment on lines 1551 to 1572
    auto &ExistingDepInsns = It->second;
    // Iterate over all the existing depending instructions of LI.
    for (auto &P : llvm::make_early_inc_range(ExistingDepInsns)) {
      Instruction *ExistingDepI = P.first;
      if (MSSAWalker->getClobberingMemoryAccess(getMemoryAccess(CurrentDepI)) ==
              MSSAWalker->getClobberingMemoryAccess(
                  getMemoryAccess(ExistingDepI)) &&
          isa<LoadInst>(ExistingDepI) && isa<LoadInst>(CurrentDepI)) {
        // If the existing depending instruction dominates the current depending
        // instruction, then we should not add the current depending instruction
        // in LoadCoercion map (Example 3).
        if (DT->dominates(ExistingDepI, CurrentDepI))
          return true;

        // If the current depending instruction dominates the existing one, then
        // we remove the existing depending instruction from the LoadCoercion
        // map. Next, we add the current depending instruction in LoadCoercion
        // map.
        if (DT->dominates(CurrentDepI, ExistingDepI))
          ExistingDepInsns.erase(P);
      }
    }
Contributor
The tests provided here don't seem to cover this.

Contributor Author
This is needed for the second and third patches. I removed it from this patch in order to avoid misunderstandings.

@@ -1,4 +1,3 @@
; XFAIL: *
; RUN: opt -disable-basic-aa -passes=newgvn -S < %s | FileCheck %s
Contributor
remove -xfail from file name

Contributor Author
Done :)

// already in the map and if the existing depending instruction dominates the
// current depending instruction. If so, then we do not add the new depending
// instruction in LoadCoercion map. If the current depending instruction
// dominates the existing depending instruction, then we remove the existing
Contributor
I assume much of this comment applies to later patches in the patch series? It's confusing since it doesn't describe the code in this patch

Contributor
also this comment is somewhat hard to understand. something like

If the existing instruction dominates the current instruction, keep the existing instruction.
Else if the current instruction dominates the existing instruction, replace the existing instruction with the current instruction.
Else add the current instruction alongside the existing instruction (e.g. for Example 2 above).

Contributor Author
I updated the comment and added it to the second patch, #68666.

@@ -653,6 +658,16 @@ class NewGVN {
// Deletion info.
SmallPtrSet<Instruction *, 8> InstructionsToErase;

// Map candidate load to their depending instructions.
mutable std::map<Value *, DenseSet<std::pair<Instruction *, BasicBlock *>>>
Contributor
Value * -> LoadInst *

Contributor Author
Done

@@ -653,6 +658,16 @@ class NewGVN {
// Deletion info.
SmallPtrSet<Instruction *, 8> InstructionsToErase;

// Map candidate load to their depending instructions.
mutable std::map<Value *, DenseSet<std::pair<Instruction *, BasicBlock *>>>
Contributor
is mutable necessary?

Contributor Author
It is needed because the map is written in a const function which is called by another const function. This seems to be the cleanest way to deal with this situation.
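A tiny illustration of that trade-off (a sketch, not code from the patch): marking the member mutable is what lets a const evaluation path record candidates as a side effect.

```
#include <map>
#include <set>

struct Analysis {
  // Written from const member functions, so it must be mutable,
  // mirroring the role of the LoadCoercion member in the patch.
  mutable std::map<int, std::set<int>> Candidates;

  // Analogous to the const symbolic-evaluation path: without 'mutable'
  // on Candidates, this insertion would not compile.
  void recordCandidate(int Load, int Dep) const {
    Candidates[Load].insert(Dep);
  }
};

int main() {
  const Analysis A;
  A.recordCandidate(1, 2); // legal: only the mutable member is modified
}
```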

// Iterate over the load instructions of LoadCoercion map and replace them with
// the right sequence of instructions.
bool NewGVN::implementLoadCoercion() {
bool AnythingReplaced = false;
Contributor
nit: Changed or MadeChange are more idiomatic

Contributor Author
Done :)

@@ -3021,6 +3184,17 @@ std::pair<unsigned, unsigned> NewGVN::assignDFSNumbers(BasicBlock *B,
return std::make_pair(Start, End);
}

unsigned NewGVN::updateDFSNumbers(unsigned ICount) {
Contributor
I'd make the param unsigned &ICount so that callers can't forget to reassign the return value to ICount, then for the call site where you pass in 0, set ICount to 0 instead and pass that in

Contributor Author
Good catch! I updated the code :)
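A minimal sketch of the suggested signature (names illustrative, not the patch's code): taking the counter by reference means a call site cannot silently drop the updated value, and the post-coercion renumbering simply resets the counter before the call.

```
#include <cassert>

// Stand-in for updateDFSNumbers with the reviewer's suggested signature:
// the running instruction count is updated in place.
void updateDFSNumbers(unsigned &ICount) {
  ICount += 42; // placeholder for numbering every instruction once
}

int main() {
  unsigned ICount = 0;
  updateDFSNumbers(ICount); // initial numbering
  ICount = 0;               // reset before renumbering after load coercion
  updateDFSNumbers(ICount);
  assert(ICount == 42);
  return 0;
}
```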
