[OpenMP][MLIR] Hoist static allocas emitted by private init regions to the allocation IP of the construct
#171597
base: main
Conversation
@llvm/pr-subscribers-mlir-openmp @llvm/pr-subscribers-mlir

Author: Kareem Ergawy (ergawy)

Changes

Having more than 1 descriptor (allocatable or array) on the same `private` clause triggers a runtime crash on GPUs at the moment.

For SPMD kernels, the issue happens because the initialization logic includes:
* Allocating a number of temporary structs (these are emitted by flang when `fir` is lowered to `mlir.llvm`).
* A conditional branch that determines whether we will allocate storage for the descriptor and initialize array bounds from the original descriptor, or whether we will initialize the private descriptor to null.

Because of these 2 things, temp allocations needed for descriptors beyond the 1st one are preceded by branching, which causes the observed runtime crash.

This PR solves this issue by hoisting these static `alloca` instructions to the suitable alloca IP of the parent construct.

Full diff: https://github.com/llvm/llvm-project/pull/171597.diff

2 Files Affected:
- (modified) mlir/lib/Target/LLVMIR/Dialect/OpenMP/OpenMPToLLVMIRTranslation.cpp
- (added) mlir/test/Target/LLVMIR/openmp-private-allloca-hoisting.mlir
diff --git a/mlir/lib/Target/LLVMIR/Dialect/OpenMP/OpenMPToLLVMIRTranslation.cpp b/mlir/lib/Target/LLVMIR/Dialect/OpenMP/OpenMPToLLVMIRTranslation.cpp
index 03d67a52853f6..350763fac0d22 100644
--- a/mlir/lib/Target/LLVMIR/Dialect/OpenMP/OpenMPToLLVMIRTranslation.cpp
+++ b/mlir/lib/Target/LLVMIR/Dialect/OpenMP/OpenMPToLLVMIRTranslation.cpp
@@ -1596,10 +1596,64 @@ static llvm::Expected<llvm::Value *> initPrivateVar(
return phis[0];
}
+/// Beginning with \p startBlock, this function visits all reachable successor
+/// blocks. For each such block, static alloca instructions (i.e. non-array
+/// allocas) are collected. Then, these collected alloca instructions are moved
+/// to the \p allocaIP insertion point.
+///
+/// This is useful in cases where, for example, more than one allocatable or
+/// array is privatized. In such cases, we allocate a number of temporary
+/// descriptors to handle the initialization logic. Additionally, for each
+/// private value, there is branching logic based on the allocation state of
+/// the original private variable. Therefore, we end up with descriptor
+/// alloca instructions preceded by conditional branches, which causes
+/// runtime issues, at least on the GPU.
+static void hoistStaticAllocasToAllocaIP(
+ llvm::BasicBlock *startBlock,
+ const llvm::OpenMPIRBuilder::InsertPointTy &allocaIP) {
+ llvm::SmallVector<llvm::BasicBlock *> inlinedBlocks{startBlock};
+ llvm::SmallPtrSet<llvm::BasicBlock *, 4> seenBlocks;
+ llvm::SmallVector<llvm::Instruction *> staticAllocas;
+
+ while (!inlinedBlocks.empty()) {
+ llvm::BasicBlock *curBlock = inlinedBlocks.front();
+ inlinedBlocks.erase(inlinedBlocks.begin());
+ llvm::Instruction *terminator = curBlock->getTerminator();
+
+ for (llvm::Instruction &inst : *curBlock) {
+ if (auto *allocaInst = mlir::dyn_cast<llvm::AllocaInst>(&inst)) {
+ if (!allocaInst->isArrayAllocation()) {
+#ifdef EXPENSIVE_CHECKS
+ assert(llvm::count(staticAllocas, allocaInst) == 0);
+#endif
+ staticAllocas.push_back(allocaInst);
+ }
+ }
+ }
+
+ if (!terminator || !terminator->isTerminator() ||
+ terminator->getNumSuccessors() == 0)
+ continue;
+
+ for (unsigned i = 0; i < terminator->getNumSuccessors(); ++i) {
+ llvm::BasicBlock *successor = terminator->getSuccessor(i);
+
+ if (!seenBlocks.contains(successor)) {
+ inlinedBlocks.push_back(successor);
+ seenBlocks.insert(successor);
+ }
+ }
+ }
+
+ for (llvm::Instruction *staticAlloca : staticAllocas)
+ staticAlloca->moveBefore(allocaIP.getPoint());
+}
+
static llvm::Error
initPrivateVars(llvm::IRBuilderBase &builder,
LLVM::ModuleTranslation &moduleTranslation,
PrivateVarsInfo &privateVarsInfo,
+ const llvm::OpenMPIRBuilder::InsertPointTy &allocaIP,
llvm::DenseMap<Value, Value> *mappedPrivateVars = nullptr) {
if (privateVarsInfo.blockArgs.empty())
return llvm::Error::success();
@@ -1624,6 +1678,8 @@ initPrivateVars(llvm::IRBuilderBase &builder,
setInsertPointForPossiblyEmptyBlock(builder);
}
+ hoistStaticAllocasToAllocaIP(privInitBlock, allocaIP);
+
return llvm::Error::success();
}
@@ -2575,7 +2631,8 @@ convertOmpWsloop(Operation &opInst, llvm::IRBuilderBase &builder,
deferredStores, isByRef)))
return failure();
- if (handleError(initPrivateVars(builder, moduleTranslation, privateVarsInfo),
+ if (handleError(initPrivateVars(builder, moduleTranslation, privateVarsInfo,
+ allocaIP),
opInst)
.failed())
return failure();
@@ -2764,9 +2821,9 @@ convertOmpParallel(omp::ParallelOp opInst, llvm::IRBuilderBase &builder,
assert(afterAllocas.get()->getSinglePredecessor());
builder.restoreIP(codeGenIP);
- if (handleError(
- initPrivateVars(builder, moduleTranslation, privateVarsInfo),
- *opInst)
+ if (handleError(initPrivateVars(builder, moduleTranslation, privateVarsInfo,
+ allocaIP),
+ *opInst)
.failed())
return llvm::make_error<PreviouslyReportedError>();
@@ -2967,7 +3024,7 @@ convertOmpSimd(Operation &opInst, llvm::IRBuilderBase &builder,
deferredStores, isByRef)))
return failure();
- if (handleError(initPrivateVars(builder, moduleTranslation, privateVarsInfo),
+ if (handleError(initPrivateVars(builder, moduleTranslation, privateVarsInfo, allocaIP),
opInst)
.failed())
return failure();
@@ -5288,8 +5345,9 @@ convertOmpDistribute(Operation &opInst, llvm::IRBuilderBase &builder,
if (handleError(afterAllocas, opInst).failed())
return llvm::make_error<PreviouslyReportedError>();
- if (handleError(initPrivateVars(builder, moduleTranslation, privVarsInfo),
- opInst)
+ if (handleError(
+ initPrivateVars(builder, moduleTranslation, privVarsInfo, allocaIP),
+ opInst)
.failed())
return llvm::make_error<PreviouslyReportedError>();
@@ -6090,7 +6148,7 @@ convertOmpTarget(Operation &opInst, llvm::IRBuilderBase &builder,
builder.restoreIP(codeGenIP);
if (handleError(initPrivateVars(builder, moduleTranslation, privateVarsInfo,
- &mappedPrivateVars),
+ allocaIP, &mappedPrivateVars),
*targetOp)
.failed())
return llvm::make_error<PreviouslyReportedError>();
diff --git a/mlir/test/Target/LLVMIR/openmp-private-allloca-hoisting.mlir b/mlir/test/Target/LLVMIR/openmp-private-allloca-hoisting.mlir
new file mode 100644
index 0000000000000..5371f2814085e
--- /dev/null
+++ b/mlir/test/Target/LLVMIR/openmp-private-allloca-hoisting.mlir
@@ -0,0 +1,79 @@
+// Tests that static allocas in `omp.private ... init` regions are hoisted to
+// the parent construct's alloca IP.
+// RUN: mlir-translate -mlir-to-llvmir -split-input-file %s | FileCheck %s
+
+llvm.func @foo1()
+llvm.func @foo2()
+llvm.func @foo3()
+llvm.func @foo4()
+
+omp.private {type = private} @multi_block.privatizer : f32 init {
+^bb0(%arg0: !llvm.ptr, %arg1: !llvm.ptr):
+ %0 = llvm.mlir.constant(1 : i32) : i32
+ %alloca1 = llvm.alloca %0 x !llvm.struct<(i64)> {alignment = 8 : i64} : (i32) -> !llvm.ptr
+
+ %1 = llvm.load %arg0 : !llvm.ptr -> f32
+
+ %c1 = llvm.mlir.constant(1 : i32) : i32
+ %c2 = llvm.mlir.constant(2 : i32) : i32
+ %cond1 = llvm.icmp "eq" %c1, %c2 : i32
+ llvm.cond_br %cond1, ^bb1, ^bb2
+
+^bb1:
+ llvm.call @foo1() : () -> ()
+ llvm.br ^bb3
+
+^bb2:
+ llvm.call @foo2() : () -> ()
+ llvm.br ^bb3
+
+^bb3:
+ llvm.store %1, %arg1 : f32, !llvm.ptr
+
+ omp.yield(%arg1 : !llvm.ptr)
+}
+
+omp.private {type = private} @multi_block.privatizer2 : f32 init {
+^bb0(%arg0: !llvm.ptr, %arg1: !llvm.ptr):
+ %0 = llvm.mlir.constant(1 : i32) : i32
+ %alloca1 = llvm.alloca %0 x !llvm.struct<(ptr)> {alignment = 8 : i64} : (i32) -> !llvm.ptr
+
+ %1 = llvm.load %arg0 : !llvm.ptr -> f32
+
+ %c1 = llvm.mlir.constant(1 : i32) : i32
+ %c2 = llvm.mlir.constant(2 : i32) : i32
+ %cond1 = llvm.icmp "eq" %c1, %c2 : i32
+ llvm.cond_br %cond1, ^bb1, ^bb2
+
+^bb1:
+ llvm.call @foo3() : () -> ()
+ llvm.br ^bb3
+
+^bb2:
+ llvm.call @foo4() : () -> ()
+ llvm.br ^bb3
+
+^bb3:
+ llvm.store %1, %arg1 : f32, !llvm.ptr
+
+ omp.yield(%arg1 : !llvm.ptr)
+}
+
+llvm.func @parallel_op_private_multi_block(%arg0: !llvm.ptr, %arg1: !llvm.ptr) {
+ omp.parallel private(@multi_block.privatizer %arg0 -> %arg2,
+ @multi_block.privatizer2 %arg1 -> %arg3 : !llvm.ptr, !llvm.ptr) {
+ %0 = llvm.load %arg2 : !llvm.ptr -> f32
+ %1 = llvm.load %arg3 : !llvm.ptr -> f32
+ omp.terminator
+ }
+ llvm.return
+}
+
+// CHECK: define internal void @parallel_op_private_multi_block..omp_par({{.*}}) {{.*}} {
+// CHECK: omp.par.entry:
+// Verify that both allocas were hoisted to the parallel region's entry block.
+// CHECK: %{{.*}} = alloca { i64 }, align 8
+// CHECK-NEXT: %{{.*}} = alloca { ptr }, align 8
+// CHECK-NEXT: br label %omp.region.after_alloca
+// CHECK: omp.region.after_alloca:
+// CHECK: }
✅ With the latest revision this PR passed the C/C++ code formatter.
873d0eb to 9e64fef
We probably need to do something similar for reductions as well, but this PR focuses on privatization.
tblah left a comment

Thanks for the fix
for (llvm::Instruction &inst : *curBlock) {
  if (auto *allocaInst = mlir::dyn_cast<llvm::AllocaInst>(&inst)) {
    if (!allocaInst->isArrayAllocation()) {
Why are array allocations special? I would have thought the case worth worrying about is array allocations which have a size determined dynamically - a statically sized array allocation should work okay.
Would multiple of these dynamically sized arrays still crash the GPU code?
For descriptors, the problematic allocations are the allocations of the temporary descriptor structures used to initialize the private storage. All such allocations are static because they are just structs.
The dynamic arrays are allocated only when the original value is allocated, so these allocations have to stay in the proper branch since we read the shape from the original value.
The problem when we have many descriptors is that:
- We inline the `init` region of descriptor number 1, which includes temp allocations + the if-else branch for initialization.
- We do the same for descriptor number 2.
- .....

Because of that, such temp allocations are emitted in between if-else branching.
I think the backend is smart enough to keep the dynamic allocations since they are obviously protected/tucked inside a branch, while some of the static allocations after the branching joins again are problematic since they are supposed to be unconditional.
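For illustration, here is a minimal sketch of the two alloca flavors being distinguished (this is not code from the PR; the function name and the `builder`, `descTy`, and `runtimeSize` inputs are hypothetical): a fixed-size descriptor temporary, which `isArrayAllocation()` classifies as static and which the patch hoists, versus a dynamically sized allocation whose size comes from the original descriptor at run time and which stays behind its guarding branch.

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"

// Sketch only: the surrounding function and its arguments are hypothetical.
void allocaKinds(llvm::IRBuilder<> &builder, llvm::StructType *descTy,
                 llvm::Value *runtimeSize) {
  // Descriptor temporary: a fixed-size struct with no runtime array-size
  // operand, so isArrayAllocation() returns false and the hoisting logic
  // moves it to the construct's alloca insertion point.
  llvm::AllocaInst *descTmp = builder.CreateAlloca(descTy);

  // Private array storage sized from the original descriptor at run time:
  // isArrayAllocation() returns true, so it is left in place, protected by
  // the "is the original allocated?" branch.
  llvm::AllocaInst *dynStorage =
      builder.CreateAlloca(builder.getInt8Ty(), runtimeSize);

  (void)descTmp;
  (void)dynStorage;
}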
…ns to the allocation IP of the construct
9e64fef to c9c432f
Having more than 1 descriptor (allocatable or array) on the same `private` clause triggers a runtime crash on GPUs at the moment.

For SPMD kernels, the issue happens because the initialization logic includes:
* Allocating a number of temporary structs (these are emitted by flang when `fir` is lowered to `mlir.llvm`).
* A conditional branch that determines whether we will allocate storage for the descriptor and initialize array bounds from the original descriptor, or whether we will initialize the private descriptor to null.

Because of these 2 things, temp allocations needed for descriptors beyond the 1st one are preceded by branching, which causes the observed runtime crash.

This PR solves this issue by hoisting these static `alloca` instructions to the suitable alloca IP of the parent construct.
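To make the hoist itself concrete, the following is a condensed, illustrative sketch rather than the patch (see hoistStaticAllocasToAllocaIP in the diff above for the real version, which walks all reachable blocks of the inlined init regions). The function name and the `initEntryBlock`/`allocaIP` parameters are assumptions made for the sketch.

#include "llvm/ADT/SmallVector.h"
#include "llvm/Frontend/OpenMP/OMPIRBuilder.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"

// Move the fixed-size (non-array) allocas of one inlined init block to the
// parent construct's alloca insertion point so they are emitted
// unconditionally, before any branching introduced by earlier init regions.
static void hoistBlockStaticAllocas(
    llvm::BasicBlock *initEntryBlock,
    const llvm::OpenMPIRBuilder::InsertPointTy &allocaIP) {
  llvm::SmallVector<llvm::AllocaInst *> toHoist;
  for (llvm::Instruction &inst : *initEntryBlock)
    if (auto *alloca = llvm::dyn_cast<llvm::AllocaInst>(&inst))
      if (!alloca->isArrayAllocation())
        toHoist.push_back(alloca);

  for (llvm::AllocaInst *alloca : toHoist)
    alloca->moveBefore(allocaIP.getPoint());
}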