[DA] Cache delinearization results. NFCI. #164379
Conversation
An instruction can appear in multiple source-destination dependency pairs. When it does, delinearization is requested and recomputed for the same instruction again and again. Instead, cache the delinearization result and query the cache before computing it. I made this observation while going through debug logs for DA, and wanted to check whether you like this idea before I measure whether it has a compile-time benefit, which is of course the reason to do this.
@llvm/pr-subscribers-llvm-analysis

Author: Sjoerd Meijer (sjoerdmeijer)

Changes: I was just looking at this example:
and noticed that we delinearize first this:
then this:
and then this:
With this change, we will cache the src and dst subscripts in the first call:
In the second call, cache the dst:
and the third call has a cache hit for both the dst and src:

Full diff: https://github.com/llvm/llvm-project/pull/164379.diff

2 Files Affected:
 diff --git a/llvm/include/llvm/Analysis/DependenceAnalysis.h b/llvm/include/llvm/Analysis/DependenceAnalysis.h
index 18a8f8aabb44a..04fa9ad0774bd 100644
--- a/llvm/include/llvm/Analysis/DependenceAnalysis.h
+++ b/llvm/include/llvm/Analysis/DependenceAnalysis.h
@@ -420,6 +420,12 @@ class DependenceInfo {
   Function *F;
   SmallVector<const SCEVPredicate *, 4> Assumptions;
 
+  /// Cache for delinearized subscripts to avoid recomputation.
+  /// Maps (Instruction, Loop, AccessFn) -> Subscripts
+  DenseMap<std::tuple<Instruction *, Loop *, const SCEV *>,
+           SmallVector<const SCEV *, 4>>
+      DelinearizationCache;
+
   /// Subscript - This private struct represents a pair of subscripts from
   /// a pair of potentially multi-dimensional array references. We use a
   /// vector of them to guide subscript partitioning.
diff --git a/llvm/lib/Analysis/DependenceAnalysis.cpp b/llvm/lib/Analysis/DependenceAnalysis.cpp
index 805b6820e1e1c..7e413c65a71a6 100644
--- a/llvm/lib/Analysis/DependenceAnalysis.cpp
+++ b/llvm/lib/Analysis/DependenceAnalysis.cpp
@@ -3463,11 +3463,37 @@ bool DependenceInfo::tryDelinearize(Instruction *Src, Instruction *Dst,
 
   SmallVector<const SCEV *, 4> SrcSubscripts, DstSubscripts;
 
-  if (!tryDelinearizeFixedSize(Src, Dst, SrcAccessFn, DstAccessFn,
-                               SrcSubscripts, DstSubscripts) &&
-      !tryDelinearizeParametricSize(Src, Dst, SrcAccessFn, DstAccessFn,
-                                    SrcSubscripts, DstSubscripts))
-    return false;
+  // Check cache for both Src and Dst subscripts
+  auto SrcCacheKey = std::make_tuple(Src, SrcLoop, SrcAccessFn);
+  auto DstCacheKey = std::make_tuple(Dst, DstLoop, DstAccessFn);
+  auto SrcCacheIt = DelinearizationCache.find(SrcCacheKey);
+  auto DstCacheIt = DelinearizationCache.find(DstCacheKey);
+  bool SrcCached = (SrcCacheIt != DelinearizationCache.end());
+  bool DstCached = (DstCacheIt != DelinearizationCache.end());
+
+  if (SrcCached && DstCached) {
+    // Both are cached - use cached values and skip delinearization
+    SrcSubscripts = SrcCacheIt->second;
+    DstSubscripts = DstCacheIt->second;
+    LLVM_DEBUG(dbgs() << "  Delinearization cache hit for both Src and Dst\n");
+  } else {
+    // At least one is not cached - need to compute both
+    if (!tryDelinearizeFixedSize(Src, Dst, SrcAccessFn, DstAccessFn,
+                                 SrcSubscripts, DstSubscripts) &&
+        !tryDelinearizeParametricSize(Src, Dst, SrcAccessFn, DstAccessFn,
+                                      SrcSubscripts, DstSubscripts))
+      return false;
+
+    // Cache the results
+    if (!SrcCached) {
+      DelinearizationCache[SrcCacheKey] = SrcSubscripts;
+      LLVM_DEBUG(dbgs() << "  Cached Src subscripts\n");
+    }
+    if (!DstCached) {
+      DelinearizationCache[DstCacheKey] = DstSubscripts;
+      LLVM_DEBUG(dbgs() << "  Cached Dst subscripts\n");
+    }
+  }
 
   assert(isLoopInvariant(SrcBase, SrcLoop) &&
          isLoopInvariant(DstBase, DstLoop) &&
Looking forward to the performance measurements. Consider implementing it as part of delinearization itself (e.g., as a kind of analysis), so it can be used outside of DA.
/// Cache for delinearized subscripts to avoid recomputation.
/// Maps (Instruction, Loop, AccessFn) -> Subscripts
DenseMap<std::tuple<Instruction *, Loop *, const SCEV *>,
         SmallVector<const SCEV *, 4>>
    DelinearizationCache;
Suggested change:
-         SmallVector<const SCEV *, 4>>
+         SmallVector<const SCEV *, 1>>
Consider a smaller SmallVector, maybe even SmallVector<const SCEV *, 0>. DenseMap only uses a fraction of its entries; big item and key objects waste a lot of space.
// Check cache for both Src and Dst subscripts
auto SrcCacheKey = std::make_tuple(Src, SrcLoop, SrcAccessFn);
auto DstCacheKey = std::make_tuple(Dst, DstLoop, DstAccessFn);
auto SrcCacheIt = DelinearizationCache.find(SrcCacheKey);
auto DstCacheIt = DelinearizationCache.find(DstCacheKey);
bool SrcCached = (SrcCacheIt != DelinearizationCache.end());
bool DstCached = (DstCacheIt != DelinearizationCache.end());

if (SrcCached && DstCached) {
  // Both are cached - use cached values and skip delinearization
  SrcSubscripts = SrcCacheIt->second;
  DstSubscripts = DstCacheIt->second;
  LLVM_DEBUG(dbgs() << "  Delinearization cache hit for both Src and Dst\n");
} else {
  // At least one is not cached - need to compute both
  if (!tryDelinearizeFixedSize(Src, Dst, SrcAccessFn, DstAccessFn,
                               SrcSubscripts, DstSubscripts) &&
      !tryDelinearizeParametricSize(Src, Dst, SrcAccessFn, DstAccessFn,
                                    SrcSubscripts, DstSubscripts))
    return false;

  // Cache the results
  if (!SrcCached) {
    DelinearizationCache[SrcCacheKey] = SrcSubscripts;
    LLVM_DEBUG(dbgs() << "  Cached Src subscripts\n");
  }
  if (!DstCached) {
    DelinearizationCache[DstCacheKey] = DstSubscripts;
    LLVM_DEBUG(dbgs() << "  Cached Dst subscripts\n");
  }
}
Consider refactoring the lookup into a function.
The Src and Dst computations need to be disentangled or looked up together. tryDelinearizeFixedSize implements a consistency check, e.g. that both have the same dimensions. This isn't guaranteed if they are looked up independently.
Caching itself looks reasonable to me, but I have a high-level concern: DA and Delinearization are currently tightly coupled. Before making further improvements, I think it would be better to first move some of the related logic from DA to Delinearization.
/// Cache for delinearized subscripts to avoid recomputation.
/// Maps (Instruction, Loop, AccessFn) -> Subscripts
DenseMap<std::tuple<Instruction *, Loop *, const SCEV *>,
         SmallVector<const SCEV *, 4>>
    DelinearizationCache;
Does the key need to be a tuple? Wouldn't Instruction * or const SCEV * be sufficient?
auto SrcCacheIt = DelinearizationCache.find(SrcCacheKey);
auto DstCacheIt = DelinearizationCache.find(DstCacheKey);
Consider using try_emplace to avoid repeated hash lookup.
Thanks both for the feedback. Ok, I will first get some numbers, to see if there's at least potential. Maybe we can then discuss where this belongs.