Conversation

@sjoerdmeijer
Collaborator

An instruction can appear in multiple source-destination dependency pairs. When that happens, delinearization is requested and recomputed for the same instruction again and again. Instead, cache the delinearization and query the cache before computing it. I made this observation while going through debug logs for DA, and wanted to check whether you like this idea before I measure whether it has a compile-time benefit, which is of course the reason to do this.

I was just looking at this example:

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %subscript.0 = mul i64 %mk, %i
  %subscript.1 = add i64 %subscript.0, %kk.inc
  %idx.0 = getelementptr i8, ptr %a, i64 %subscript.0 ; a[-k * i]
  %idx.1 = getelementptr i8, ptr %a, i64 %subscript.1 ; a[-k * i + (2 * k + 1)]
  store i8 42, ptr %idx.0
  store i8 42, ptr %idx.1
  %i.next = add i64 %i, 1
  %cond.exit = icmp eq i64 %i.next, 3
  br i1 %cond.exit, label %exit, label %loop

and noticed that we delinearize first this:

Src:  store i8 42, ptr %idx.0, align 1 --> Dst:  store i8 42, ptr %idx.0, align 1
  da analyze -     SrcSCEV = {%a,+,(-1 * %k)}<%loop>
    DstSCEV = {%a,+,(-1 * %k)}<%loop>

GEP to delinearize:   %idx.0 = getelementptr i8, ptr %a, i64 %subscript.0

then this:

Src:  store i8 42, ptr %idx.0, align 1 --> Dst:  store i8 42, ptr %idx.1, align 1
  da analyze -     SrcSCEV = {%a,+,(-1 * %k)}<%loop>
    DstSCEV = {(1 + (2 * %k) + %a),+,(-1 * %k)}<%loop>

GEP to delinearize:   %idx.0 = getelementptr i8, ptr %a, i64 %subscript.0

and then this:

Src:  store i8 42, ptr %idx.1, align 1 --> Dst:  store i8 42, ptr %idx.1, align 1
  da analyze -     SrcSCEV = {(1 + (2 * %k) + %a),+,(-1 * %k)}<%loop>
    DstSCEV = {(1 + (2 * %k) + %a),+,(-1 * %k)}<%loop>

GEP to delinearize:   %idx.1 = getelementptr i8, ptr %a, i64 %subscript.1

With this change, we will cache the src and dst subscripts in the first call:

Src:  store i8 42, ptr %idx.0, align 1 --> Dst:  store i8 42, ptr %idx.0, align 1
  da analyze -     SrcSCEV = {%a,+,(-1 * %k)}<%loop>
    DstSCEV = {%a,+,(-1 * %k)}<%loop>
  Cached Src subscripts
  Cached Dst subscripts

In the second call, the src is a cache hit, and we cache the dst:

Src:  store i8 42, ptr %idx.0, align 1 --> Dst:  store i8 42, ptr %idx.1, align 1
  da analyze -     SrcSCEV = {%a,+,(-1 * %k)}<%loop>
    DstSCEV = {(1 + (2 * %k) + %a),+,(-1 * %k)}<%loop>
  Cached Dst subscripts

and the third call has a cache hit for both the dst and src:

Src:  store i8 42, ptr %idx.1, align 1 --> Dst:  store i8 42, ptr %idx.1, align 1
  da analyze -     SrcSCEV = {(1 + (2 * %k) + %a),+,(-1 * %k)}<%loop>
    DstSCEV = {(1 + (2 * %k) + %a),+,(-1 * %k)}<%loop>
  Delinearization cache hit for both Src and Dst

@llvmbot llvmbot added the llvm:analysis Includes value tracking, cost tables and constant folding label Oct 21, 2025
@sjoerdmeijer sjoerdmeijer requested a review from sebpop October 21, 2025 09:14
@llvmbot
Member

llvmbot commented Oct 21, 2025

@llvm/pr-subscribers-llvm-analysis

Author: Sjoerd Meijer (sjoerdmeijer)

Changes

(Description and debug-log example as above.)
Full diff: https://github.com/llvm/llvm-project/pull/164379.diff

2 Files Affected:

  • (modified) llvm/include/llvm/Analysis/DependenceAnalysis.h (+6)
  • (modified) llvm/lib/Analysis/DependenceAnalysis.cpp (+31-5)
diff --git a/llvm/include/llvm/Analysis/DependenceAnalysis.h b/llvm/include/llvm/Analysis/DependenceAnalysis.h
index 18a8f8aabb44a..04fa9ad0774bd 100644
--- a/llvm/include/llvm/Analysis/DependenceAnalysis.h
+++ b/llvm/include/llvm/Analysis/DependenceAnalysis.h
@@ -420,6 +420,12 @@ class DependenceInfo {
   Function *F;
   SmallVector<const SCEVPredicate *, 4> Assumptions;
 
+  /// Cache for delinearized subscripts to avoid recomputation.
+  /// Maps (Instruction, Loop, AccessFn) -> Subscripts
+  DenseMap<std::tuple<Instruction *, Loop *, const SCEV *>,
+           SmallVector<const SCEV *, 4>>
+      DelinearizationCache;
+
   /// Subscript - This private struct represents a pair of subscripts from
   /// a pair of potentially multi-dimensional array references. We use a
   /// vector of them to guide subscript partitioning.
diff --git a/llvm/lib/Analysis/DependenceAnalysis.cpp b/llvm/lib/Analysis/DependenceAnalysis.cpp
index 805b6820e1e1c..7e413c65a71a6 100644
--- a/llvm/lib/Analysis/DependenceAnalysis.cpp
+++ b/llvm/lib/Analysis/DependenceAnalysis.cpp
@@ -3463,11 +3463,37 @@ bool DependenceInfo::tryDelinearize(Instruction *Src, Instruction *Dst,
 
   SmallVector<const SCEV *, 4> SrcSubscripts, DstSubscripts;
 
-  if (!tryDelinearizeFixedSize(Src, Dst, SrcAccessFn, DstAccessFn,
-                               SrcSubscripts, DstSubscripts) &&
-      !tryDelinearizeParametricSize(Src, Dst, SrcAccessFn, DstAccessFn,
-                                    SrcSubscripts, DstSubscripts))
-    return false;
+  // Check cache for both Src and Dst subscripts
+  auto SrcCacheKey = std::make_tuple(Src, SrcLoop, SrcAccessFn);
+  auto DstCacheKey = std::make_tuple(Dst, DstLoop, DstAccessFn);
+  auto SrcCacheIt = DelinearizationCache.find(SrcCacheKey);
+  auto DstCacheIt = DelinearizationCache.find(DstCacheKey);
+  bool SrcCached = (SrcCacheIt != DelinearizationCache.end());
+  bool DstCached = (DstCacheIt != DelinearizationCache.end());
+
+  if (SrcCached && DstCached) {
+    // Both are cached - use cached values and skip delinearization
+    SrcSubscripts = SrcCacheIt->second;
+    DstSubscripts = DstCacheIt->second;
+    LLVM_DEBUG(dbgs() << "  Delinearization cache hit for both Src and Dst\n");
+  } else {
+    // At least one is not cached - need to compute both
+    if (!tryDelinearizeFixedSize(Src, Dst, SrcAccessFn, DstAccessFn,
+                                 SrcSubscripts, DstSubscripts) &&
+        !tryDelinearizeParametricSize(Src, Dst, SrcAccessFn, DstAccessFn,
+                                      SrcSubscripts, DstSubscripts))
+      return false;
+
+    // Cache the results
+    if (!SrcCached) {
+      DelinearizationCache[SrcCacheKey] = SrcSubscripts;
+      LLVM_DEBUG(dbgs() << "  Cached Src subscripts\n");
+    }
+    if (!DstCached) {
+      DelinearizationCache[DstCacheKey] = DstSubscripts;
+      LLVM_DEBUG(dbgs() << "  Cached Dst subscripts\n");
+    }
+  }
 
   assert(isLoopInvariant(SrcBase, SrcLoop) &&
          isLoopInvariant(DstBase, DstLoop) &&

Member

@Meinersbur Meinersbur left a comment


Looking forward to the performance measurements. Consider implementing it as part of delinearization itself (e.g. as a kind of analysis), so that it can be used outside of DA.

/// Cache for delinearized subscripts to avoid recomputation.
/// Maps (Instruction, Loop, AccessFn) -> Subscripts
DenseMap<std::tuple<Instruction *, Loop *, const SCEV *>,
SmallVector<const SCEV *, 4>>

Suggested change
SmallVector<const SCEV *, 4>>
SmallVector<const SCEV *, 1>>

Consider a smaller SmallVector, maybe even SmallVector<const SCEV *, 0>. DenseMap only uses a fraction of its entries; big item and key objects waste a lot of space.

Comment on lines +3466 to +3496
// Check cache for both Src and Dst subscripts
auto SrcCacheKey = std::make_tuple(Src, SrcLoop, SrcAccessFn);
auto DstCacheKey = std::make_tuple(Dst, DstLoop, DstAccessFn);
auto SrcCacheIt = DelinearizationCache.find(SrcCacheKey);
auto DstCacheIt = DelinearizationCache.find(DstCacheKey);
bool SrcCached = (SrcCacheIt != DelinearizationCache.end());
bool DstCached = (DstCacheIt != DelinearizationCache.end());

if (SrcCached && DstCached) {
// Both are cached - use cached values and skip delinearization
SrcSubscripts = SrcCacheIt->second;
DstSubscripts = DstCacheIt->second;
LLVM_DEBUG(dbgs() << " Delinearization cache hit for both Src and Dst\n");
} else {
// At least one is not cached - need to compute both
if (!tryDelinearizeFixedSize(Src, Dst, SrcAccessFn, DstAccessFn,
SrcSubscripts, DstSubscripts) &&
!tryDelinearizeParametricSize(Src, Dst, SrcAccessFn, DstAccessFn,
SrcSubscripts, DstSubscripts))
return false;

// Cache the results
if (!SrcCached) {
DelinearizationCache[SrcCacheKey] = SrcSubscripts;
LLVM_DEBUG(dbgs() << " Cached Src subscripts\n");
}
if (!DstCached) {
DelinearizationCache[DstCacheKey] = DstSubscripts;
LLVM_DEBUG(dbgs() << " Cached Dst subscripts\n");
}
}

Consider refactoring into a function for lookup.

Src and Dst computation need to be disentangled or looked up together. tryDelinearizeFixedSize implements a consistency check, e.g. that both have the same number of dimensions. This isn't guaranteed if they are looked up independently.

Contributor

@kasuga-fj kasuga-fj left a comment


Caching itself looks reasonable to me, but I have a high-level concern: DA and Delinearization are currently tightly coupled. Before making further improvements, I think it would be better to first move some of the related logic from DA to Delinearization.


/// Cache for delinearized subscripts to avoid recomputation.
/// Maps (Instruction, Loop, AccessFn) -> Subscripts
DenseMap<std::tuple<Instruction *, Loop *, const SCEV *>,

Does the key need to be a tuple? Wouldn't Instruction * or const SCEV * be sufficient?

Comment on lines +3469 to +3470
auto SrcCacheIt = DelinearizationCache.find(SrcCacheKey);
auto DstCacheIt = DelinearizationCache.find(DstCacheKey);

Consider using try_emplace to avoid repeated hash lookup.

@sjoerdmeijer
Collaborator Author

Thanks both for the feedback.

but I have a high-level concern: DA and Delinearization are currently tightly coupled. Before making further improvements, I think it would be better to first move some of the related logic from DA to Delinearization.

Ok, I will first get some numbers, to see if there's at least potential. Maybe we can then discuss where this belongs.
