[mlir][spirv] Implement lowering gpu.subgroup_reduce with cluster size for SPIRV #141402
Conversation
@llvm/pr-subscribers-mlir-spirv @llvm/pr-subscribers-mlir

Author: Darren Wihandi (fairywreath)

Changes

Implement lowering of gpu.subgroup_reduce with a cluster size attribute to SPIRV by using the ClusteredReduce group operation.

Full diff: https://github.com/llvm/llvm-project/pull/141402.diff

2 Files Affected:
- mlir/lib/Conversion/GPUToSPIRV/GPUToSPIRV.cpp
- mlir/test/Conversion/GPUToSPIRV/reductions.mlir
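Concretely, the conversion maps a clustered gpu.subgroup_reduce onto the clustered variant of the corresponding SPIR-V non-uniform group operation. A minimal before/after sketch, mirroring the new test case in the diff below (op names and types are taken from the diff; the surrounding kernel and pass invocation are omitted):

```mlir
// Input: a subgroup reduction over clusters of 8 invocations.
%reduced = gpu.subgroup_reduce add %arg cluster(size = 8) : (f32) -> (f32)

// After conversion to SPIR-V: the cluster size becomes an i32 constant
// operand and the group operation becomes ClusteredReduce.
%cluster_size = spirv.Constant 8 : i32
%reduced_spirv = spirv.GroupNonUniformFAdd <Subgroup> <ClusteredReduce> %arg cluster_size(%cluster_size) : f32, i32 -> f32
```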
diff --git a/mlir/lib/Conversion/GPUToSPIRV/GPUToSPIRV.cpp b/mlir/lib/Conversion/GPUToSPIRV/GPUToSPIRV.cpp
index 3cc64b82950b5..f42605a6e8ce1 100644
--- a/mlir/lib/Conversion/GPUToSPIRV/GPUToSPIRV.cpp
+++ b/mlir/lib/Conversion/GPUToSPIRV/GPUToSPIRV.cpp
@@ -464,27 +464,39 @@ LogicalResult GPUShuffleConversion::matchAndRewrite(
template <typename UniformOp, typename NonUniformOp>
static Value createGroupReduceOpImpl(OpBuilder &builder, Location loc,
- Value arg, bool isGroup, bool isUniform) {
+ Value arg, bool isGroup, bool isUniform,
+ std::optional<uint32_t> clusterSize) {
Type type = arg.getType();
auto scope = mlir::spirv::ScopeAttr::get(builder.getContext(),
isGroup ? spirv::Scope::Workgroup
: spirv::Scope::Subgroup);
- auto groupOp = spirv::GroupOperationAttr::get(builder.getContext(),
- spirv::GroupOperation::Reduce);
+ auto groupOp = spirv::GroupOperationAttr::get(
+ builder.getContext(), clusterSize.has_value()
+ ? spirv::GroupOperation::ClusteredReduce
+ : spirv::GroupOperation::Reduce);
if (isUniform) {
return builder.create<UniformOp>(loc, type, scope, groupOp, arg)
.getResult();
}
- return builder.create<NonUniformOp>(loc, type, scope, groupOp, arg, Value{})
+
+ Value clusterSizeValue =
+ clusterSize.has_value()
+ ? builder.create<spirv::ConstantOp>(
+ loc, builder.getI32Type(),
+ builder.getIntegerAttr(builder.getI32Type(), *clusterSize))
+ : Value{};
+ return builder
+ .create<NonUniformOp>(loc, type, scope, groupOp, arg, clusterSizeValue)
.getResult();
}
-static std::optional<Value> createGroupReduceOp(OpBuilder &builder,
- Location loc, Value arg,
- gpu::AllReduceOperation opType,
- bool isGroup, bool isUniform) {
+static std::optional<Value>
+createGroupReduceOp(OpBuilder &builder, Location loc, Value arg,
+ gpu::AllReduceOperation opType, bool isGroup,
+ bool isUniform, std::optional<uint32_t> clusterSize) {
enum class ElemType { Float, Boolean, Integer };
- using FuncT = Value (*)(OpBuilder &, Location, Value, bool, bool);
+ using FuncT = Value (*)(OpBuilder &, Location, Value, bool, bool,
+ std::optional<uint32_t>);
struct OpHandler {
gpu::AllReduceOperation kind;
ElemType elemType;
@@ -548,7 +560,7 @@ static std::optional<Value> createGroupReduceOp(OpBuilder &builder,
for (const OpHandler &handler : handlers)
if (handler.kind == opType && elementType == handler.elemType)
- return handler.func(builder, loc, arg, isGroup, isUniform);
+ return handler.func(builder, loc, arg, isGroup, isUniform, clusterSize);
return std::nullopt;
}
@@ -571,7 +583,7 @@ class GPUAllReduceConversion final
auto result =
createGroupReduceOp(rewriter, op.getLoc(), adaptor.getValue(), *opType,
- /*isGroup*/ true, op.getUniform());
+ /*isGroup*/ true, op.getUniform(), std::nullopt);
if (!result)
return failure();
@@ -589,16 +601,17 @@ class GPUSubgroupReduceConversion final
LogicalResult
matchAndRewrite(gpu::SubgroupReduceOp op, OpAdaptor adaptor,
ConversionPatternRewriter &rewriter) const override {
- if (op.getClusterSize())
+ if (op.getClusterStride() > 1) {
return rewriter.notifyMatchFailure(
- op, "lowering for clustered reduce not implemented");
+ op, "lowering for cluster stride > 1 is not implemented");
+ }
if (!isa<spirv::ScalarType>(adaptor.getValue().getType()))
return rewriter.notifyMatchFailure(op, "reduction type is not a scalar");
- auto result = createGroupReduceOp(rewriter, op.getLoc(), adaptor.getValue(),
- adaptor.getOp(),
- /*isGroup=*/false, adaptor.getUniform());
+ auto result = createGroupReduceOp(
+ rewriter, op.getLoc(), adaptor.getValue(), adaptor.getOp(),
+ /*isGroup=*/false, adaptor.getUniform(), op.getClusterSize());
if (!result)
return failure();
diff --git a/mlir/test/Conversion/GPUToSPIRV/reductions.mlir b/mlir/test/Conversion/GPUToSPIRV/reductions.mlir
index ae834b9915d50..08d9b094a5303 100644
--- a/mlir/test/Conversion/GPUToSPIRV/reductions.mlir
+++ b/mlir/test/Conversion/GPUToSPIRV/reductions.mlir
@@ -789,3 +789,44 @@ gpu.module @kernels {
}
}
}
+
+// -----
+
+module attributes {
+ gpu.container_module,
+ spirv.target_env = #spirv.target_env<#spirv.vce<v1.3, [Kernel, Addresses, Groups, GroupUniformArithmeticKHR, GroupNonUniformClustered], []>, #spirv.resource_limits<>>
+} {
+
+gpu.module @kernels {
+ // CHECK-LABEL: spirv.func @test
+ // CHECK-SAME: (%[[ARG:.*]]: f32)
+ // CHECK: %[[CLUSTER_SIZE:.*]] = spirv.Constant 8 : i32
+ gpu.func @test22(%arg : f32) kernel
+ attributes {spirv.entry_point_abi = #spirv.entry_point_abi<workgroup_size = [16, 1, 1]>} {
+ // CHECK: %{{.*}} = spirv.GroupNonUniformFAdd <Subgroup> <ClusteredReduce> %[[ARG]] cluster_size(%[[CLUSTER_SIZE]]) : f32, i32 -> f32
+ %reduced = gpu.subgroup_reduce add %arg cluster(size = 8) : (f32) -> (f32)
+ gpu.return
+ }
+}
+
+}
+
+// -----
+
+// Subgroup reduce with cluster stride > 1 is not yet supported.
+
+module attributes {
+ gpu.container_module,
+ spirv.target_env = #spirv.target_env<#spirv.vce<v1.3, [Kernel, Addresses, Groups, GroupUniformArithmeticKHR, GroupNonUniformClustered], []>, #spirv.resource_limits<>>
+} {
+
+gpu.module @kernels {
+ gpu.func @test22(%arg : f32) kernel
+ attributes {spirv.entry_point_abi = #spirv.entry_point_abi<workgroup_size = [16, 1, 1]>} {
+ // expected-error @+1 {{failed to legalize operation 'gpu.subgroup_reduce'}}
+ %reduced = gpu.subgroup_reduce add %arg cluster(size = 8, stride = 2) : (f32) -> (f32)
+ gpu.return
+ }
+}
+
+}
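For reference, ClusteredReduce partitions the subgroup into consecutive clusters of the given size and hands every invocation the reduction of its own cluster; SPIR-V requires the cluster size to be a constant power of two, and the clusters are always contiguous, which is why gpu.subgroup_reduce's cluster stride > 1 has no direct counterpart and is rejected above. A minimal scalar model of these semantics, assuming an additive reduction over lane ids (purely illustrative, not part of the patch):

```cpp
#include <cstdio>
#include <vector>

int main() {
  const unsigned subgroupSize = 16, clusterSize = 8; // clusterSize: power of two
  std::vector<float> lane(subgroupSize);
  for (unsigned i = 0; i < subgroupSize; ++i)
    lane[i] = float(i); // each invocation contributes its lane id

  // Every invocation receives the reduction over its own cluster of
  // consecutive lanes (i.e. stride 1).
  std::vector<float> result(subgroupSize);
  for (unsigned i = 0; i < subgroupSize; ++i) {
    unsigned base = (i / clusterSize) * clusterSize;
    float sum = 0.0f;
    for (unsigned j = base; j < base + clusterSize; ++j)
      sum += lane[j];
    result[i] = sum;
  }
  // Lanes 0-7 all see 0+1+...+7 = 28; lanes 8-15 all see 8+...+15 = 92.
  std::printf("lane 0 -> %g, lane 8 -> %g\n", result[0], result[8]);
  return 0;
}
```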
Nice, thanks for adding this
return builder.create<NonUniformOp>(loc, type, scope, groupOp, arg, Value{})

Value clusterSizeValue = {};
if (clusterSize.has_value())
Is there any preference on whether to use has_value() or using the variable directly as a bool for std::optional?
either way is fine IMO
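(For context: the two spellings are equivalent for std::optional, since its operator bool is defined to return has_value(); which one to use is purely a readability choice. A trivial standalone illustration, not taken from the patch:)

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

int main() {
  std::optional<std::uint32_t> clusterSize = 8;
  // operator bool and has_value() always agree.
  assert(clusterSize.has_value() == static_cast<bool>(clusterSize));
  if (clusterSize)             // implicit contextual conversion to bool
    (void)*clusterSize;
  if (clusterSize.has_value()) // explicit; some find it clearer
    (void)clusterSize.value();
  return 0;
}
```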
…ize for SPIRV (llvm#141402) Implement lowering of `gpu.subgroup_reduce` with a cluster size attribute to SPIRV by using the `ClusteredReduce` group operation.