[AA] Improve precision for monotonic atomic load/store operations #158169
Conversation
@llvm/pr-subscribers-llvm-transforms @llvm/pr-subscribers-llvm-analysis

Author: Jin Huang (jinhuang1102)

Changes

Refines getModRefInfo for Monotonic and Unordered atomic operations to perform a full alias evaluation instead of conservatively returning ModRef. This allows AA to return more precise results (e.g., NoModRef), unblocking optimizations in passes like DSE/PRE when the pointers are proven not to alias.

LIT testing:
- Removed the XFAIL from @test9 in DSE/atomic-todo.ll, where the improved AA now allows DSE to correctly remove a dead store.
- Added @test9a to the same file to ensure DSE is still blocked when the pointers may alias.
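For context, here is a minimal sketch (not code from this PR; the helper name and variable names are hypothetical) of how a client pass could consume the more precise answer. With the old behavior a monotonic load always reported ModRef, so the query below could never return true across such a load; with this change it can, provided AA proves the pointers NoAlias.

#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Hypothetical helper: returns true if InterveningLoad provably neither reads
// nor writes the location stored to by EarlierStore, i.e. the load is not a
// barrier between EarlierStore and a later store that overwrites it.
static bool loadIsNoBarrier(AAResults &AA, StoreInst *EarlierStore,
                            LoadInst *InterveningLoad) {
  ModRefInfo MRI =
      AA.getModRefInfo(InterveningLoad, MemoryLocation::get(EarlierStore));
  return !isModOrRefSet(MRI);
}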
Full diff: https://github.com/llvm/llvm-project/pull/158169.diff 2 Files Affected:
diff --git a/llvm/lib/Analysis/AliasAnalysis.cpp b/llvm/lib/Analysis/AliasAnalysis.cpp
index 3ec009ca4adde..cb9577aac405a 100644
--- a/llvm/lib/Analysis/AliasAnalysis.cpp
+++ b/llvm/lib/Analysis/AliasAnalysis.cpp
@@ -421,9 +421,18 @@ ModRefInfo AAResults::getModRefInfo(const LoadInst *L,
const MemoryLocation &Loc,
AAQueryInfo &AAQI) {
// Be conservative in the face of atomic.
- if (isStrongerThan(L->getOrdering(), AtomicOrdering::Unordered))
+ if (isStrongerThan(L->getOrdering(), AtomicOrdering::Monotonic))
return ModRefInfo::ModRef;
+ // For Monotonic and unordered atomic loads, if the locations are not NoAlias,
+ // we must be conservative and return ModRef to prevent unsafe reordering of
+ // accesses to the same memory.
+ if (L->isAtomic()){
+ if (Loc.Ptr &&
+ alias(MemoryLocation::get(L), Loc, AAQI, L) != AliasResult::NoAlias)
+ return ModRefInfo::ModRef;
+ }
+
// If the load address doesn't alias the given address, it doesn't read
// or write the specified memory.
if (Loc.Ptr) {
@@ -439,7 +448,7 @@ ModRefInfo AAResults::getModRefInfo(const StoreInst *S,
const MemoryLocation &Loc,
AAQueryInfo &AAQI) {
// Be conservative in the face of atomic.
- if (isStrongerThan(S->getOrdering(), AtomicOrdering::Unordered))
+ if (isStrongerThan(S->getOrdering(), AtomicOrdering::Monotonic))
return ModRefInfo::ModRef;
if (Loc.Ptr) {
@@ -458,7 +467,7 @@ ModRefInfo AAResults::getModRefInfo(const StoreInst *S,
}
// Otherwise, a store just writes.
- return ModRefInfo::Mod;
+ return ModRefInfo::ModRef;
}
ModRefInfo AAResults::getModRefInfo(const FenceInst *S,
diff --git a/llvm/test/Transforms/DeadStoreElimination/atomic-todo.ll b/llvm/test/Transforms/DeadStoreElimination/atomic-todo.ll
index 1c160442f8579..85a4a7fdd79e3 100644
--- a/llvm/test/Transforms/DeadStoreElimination/atomic-todo.ll
+++ b/llvm/test/Transforms/DeadStoreElimination/atomic-todo.ll
@@ -1,4 +1,3 @@
-; XFAIL: *
; RUN: opt -passes=dse -S < %s | FileCheck %s
target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64"
@@ -11,7 +10,7 @@ target triple = "x86_64-apple-macosx10.7.0"
@x = common global i32 0, align 4
@y = common global i32 0, align 4
-; DSE across monotonic load (allowed as long as the eliminated store is unordered)
+; DSE across monotonic load (allowed if the monotonic load's address is NoAlias)
define i32 @test9() {
; CHECK-LABEL: test9
; CHECK-NOT: store i32 0
@@ -21,3 +20,17 @@ define i32 @test9() {
store i32 1, ptr @x
ret i32 %x
}
+
+; DSE across monotonic load (blocked if the atomic load's address isn't NoAlias)
+define i32 @test9a(ptr %ptr) {
+; CHECK-LABEL: @test9a(
+; CHECK-NEXT: store i32 0, ptr @x, align 4
+; CHECK-NEXT: [[X:%.*]] = load atomic i32, ptr [[PTR:%.*]] monotonic, align 4
+; CHECK-NEXT: store i32 1, ptr @x, align 4
+; CHECK-NEXT: ret i32 [[X]]
+;
+ store i32 0, ptr @x
+ %x = load atomic i32, ptr %ptr monotonic, align 4
+ store i32 1, ptr @x
+ ret i32 %x
+}
✅ With the latest revision this PR passed the C/C++ code formatter.
llvm/lib/Analysis/AliasAnalysis.cpp
Outdated
if (Loc.Ptr &&
    alias(MemoryLocation::get(L), Loc, AAQI, L) != AliasResult::NoAlias)
  return ModRefInfo::ModRef;
}
This would be too much for unordered atomics.
This also duplicates the code block below. I'd drop this block and then insert a check before the existing return that does:
if (isStrongerThanUnordered(L->getOrdering()))
return ModRefInfo::ModRef;
I see. You mean that we should treat an unordered load like a non-atomic load and add the following code:

if (isStrongerThanUnordered(L->getOrdering()))
  return ModRefInfo::ModRef;

after the AliasResult AR check:

if (Loc.Ptr) {
  AliasResult AR = alias(MemoryLocation::get(L), Loc, AAQI, L);
  if (AR == AliasResult::NoAlias)
    return ModRefInfo::NoModRef;
}

Am I right?
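For readers following the thread, the load handler would then read roughly as follows once the suggestion is applied. This is a sketch of the intended shape (the surrounding lines are paraphrased from the existing function), not necessarily the exact committed code:

ModRefInfo AAResults::getModRefInfo(const LoadInst *L,
                                    const MemoryLocation &Loc,
                                    AAQueryInfo &AAQI) {
  // Orderings stronger than monotonic remain fully conservative.
  if (isStrongerThan(L->getOrdering(), AtomicOrdering::Monotonic))
    return ModRefInfo::ModRef;

  // If the load address doesn't alias the given address, it doesn't read
  // or write the specified memory.
  if (Loc.Ptr) {
    AliasResult AR = alias(MemoryLocation::get(L), Loc, AAQI, L);
    if (AR == AliasResult::NoAlias)
      return ModRefInfo::NoModRef;
  }

  // A may-aliasing load that is stronger than unordered (i.e. monotonic at
  // this point) still has to be treated conservatively.
  if (isStrongerThanUnordered(L->getOrdering()))
    return ModRefInfo::ModRef;

  // Otherwise, a load just reads.
  return ModRefInfo::Ref;
}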
llvm/lib/Analysis/AliasAnalysis.cpp
Outdated
@@ -458,7 +467,7 @@ ModRefInfo AAResults::getModRefInfo(const StoreInst *S,
}

// Otherwise, a store just writes.
Similar to the load case, this should only return ModRef if the ordering is stronger than unordered.
Do you think we need to add a similar if condition here?
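If the store path mirrors the load-side restructuring, the inserted guard would presumably sit just before the final return, along these lines (a sketch under that assumption, not the committed diff):

  // A may-aliasing store that is stronger than unordered (i.e. monotonic at
  // this point) must keep reporting ModRef.
  if (isStrongerThanUnordered(S->getOrdering()))
    return ModRefInfo::ModRef;

  // Otherwise, a store just writes.
  return ModRefInfo::Mod;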
llvm/lib/Analysis/AliasAnalysis.cpp
Outdated
@@ -431,6 +431,14 @@ ModRefInfo AAResults::getModRefInfo(const LoadInst *L,
    if (AR == AliasResult::NoAlias)
      return ModRefInfo::NoModRef;
  }

  // At this point, the load's ordering is at most `Monotonic` (i.e., Monotonic,
  // Unordered, or non-atomic), and it aliases with `Loc`. The condition
Change the comment to an assertion instead. The comment is not helpful.
Replaced the comment with:

assert(!isStrongerThanMonotonic(L->getOrdering()) &&
       "Stronger atomic orderings should have been handled above!");
lgtm, but please wait for nikic@'s approval.