
Conversation

keshavvinayak01 (Contributor) commented Sep 22, 2025

Description

Added a pattern for lowering math::ClampFOp to ROCDL::FMed3Op.
Also added a chipset option to the MathToROCDL pass to check whether the target architecture's ISA supports the required instructions.

Solves #15072
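
For reference, a minimal sketch of the intended rewrite on a gfx9+ chipset (the function and value names here are illustrative, and the exact printed form of rocdl.fmed3 is approximate):

// Input:
func.func @clamp_example(%x: f32, %lo: f32, %hi: f32) -> f32 {
  %r = math.clampf %x to [%lo, %hi] : f32
  return %r : f32
}

// After convert-math-to-rocdl{chipset=gfx942}, the clamp becomes a single med3:
func.func @clamp_example(%x: f32, %lo: f32, %hi: f32) -> f32 {
  %r = rocdl.fmed3 %x, %lo, %hi : f32
  return %r : f32
}

On pre-gfx9 chipsets the pattern reports a match failure and math.clampf is left untouched.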

Signed-off-by: keshavvinayak01 <keshavvinayakjha@gmail.com>

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository; in that case, you can instead tag reviewers by name in a comment by using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR by adding a comment “Ping”. The common courtesy "ping" rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

keshavvinayak01 changed the title [WIP][ROCDL] ]Added math.clampf -> rocdl.fmed3 conversion [MLIR][ROCDL] ]Added math.clampf -> rocdl.fmed3 conversion Sep 23, 2025
keshavvinayak01 changed the title [MLIR][ROCDL] ]Added math.clampf -> rocdl.fmed3 conversion [MLIR][ROCDL] Added math.clampf -> rocdl.fmed3 conversion Sep 23, 2025
keshavvinayak01 marked this pull request as ready for review September 23, 2025 16:31
keshavvinayak01 (Contributor, Author) commented:

@Groverkss

llvmbot added the mlir label Sep 23, 2025

llvmbot (Member) commented Sep 23, 2025

@llvm/pr-subscribers-mlir

Author: Keshav Vinayak Jha (keshavvinayak01)

Changes

Description

Added a pattern for lowering math::ClampFOp to ROCDL::FMed3Op.
Also added a chipset option to the MathToROCDL pass to check whether the target architecture's ISA supports the required instructions.

Solves #15072


Full diff: https://github.com/llvm/llvm-project/pull/160100.diff

3 Files Affected:

  • (modified) mlir/include/mlir/Conversion/Passes.td (+6)
  • (modified) mlir/lib/Conversion/MathToROCDL/MathToROCDL.cpp (+34-7)
  • (modified) mlir/test/Conversion/MathToROCDL/math-to-rocdl.mlir (+32-1)
diff --git a/mlir/include/mlir/Conversion/Passes.td b/mlir/include/mlir/Conversion/Passes.td
index 1a37d057776e2..060c7183fcb3a 100644
--- a/mlir/include/mlir/Conversion/Passes.td
+++ b/mlir/include/mlir/Conversion/Passes.td
@@ -785,6 +785,12 @@ def ConvertMathToROCDL : Pass<"convert-math-to-rocdl", "ModuleOp"> {
     "ROCDL::ROCDLDialect",
     "vector::VectorDialect",
   ];
+  let options = [
+    Option<"chipset", "chipset", "std::string",
+          /*default=*/"\"gfx000\"",
+          "Chipset that these operations will run on">
+  ];
+
 }
 
 //===----------------------------------------------------------------------===//
diff --git a/mlir/lib/Conversion/MathToROCDL/MathToROCDL.cpp b/mlir/lib/Conversion/MathToROCDL/MathToROCDL.cpp
index df219f3ff4f6e..6da7e9a850ef7 100644
--- a/mlir/lib/Conversion/MathToROCDL/MathToROCDL.cpp
+++ b/mlir/lib/Conversion/MathToROCDL/MathToROCDL.cpp
@@ -10,6 +10,7 @@
 #include "mlir/Conversion/GPUCommon/GPUCommonPass.h"
 #include "mlir/Conversion/LLVMCommon/LoweringOptions.h"
 #include "mlir/Conversion/LLVMCommon/TypeConverter.h"
+#include "mlir/Dialect/AMDGPU/Utils/Chipset.h"
 #include "mlir/Dialect/Func/IR/FuncOps.h"
 #include "mlir/Dialect/LLVMIR/LLVMDialect.h"
 #include "mlir/Dialect/LLVMIR/ROCDLDialect.h"
@@ -120,25 +121,51 @@ void mlir::populateMathToROCDLConversionPatterns(
                                     "__ocml_fmod_f64", "__ocml_fmod_f16");
 }
 
-namespace {
-struct ConvertMathToROCDLPass
-    : public impl::ConvertMathToROCDLBase<ConvertMathToROCDLPass> {
-  ConvertMathToROCDLPass() = default;
+struct ClampFOpConversion : public ConvertOpToLLVMPattern<math::ClampFOp> {
+  using ConvertOpToLLVMPattern::ConvertOpToLLVMPattern;
+  ClampFOpConversion(const LLVMTypeConverter &converter,
+                     amdgpu::Chipset chipset)
+      : ConvertOpToLLVMPattern<math::ClampFOp>(converter), chipset(chipset) {}
+
+  LogicalResult
+  matchAndRewrite(math::ClampFOp op, OpAdaptor adaptor,
+                  ConversionPatternRewriter &rewriter) const override {
+    // V_MED3_F16/V_MED3_F32 only exist on gfx9+ architectures.
+    if (chipset.majorVersion < 9) {
+      std::string msg =
+          ("pre-gfx9 (gfx" + std::to_string(chipset.majorVersion) +
+           "): V_MED_F16 / V_MED3_F32 not supported.");
+      return rewriter.notifyMatchFailure(op, msg);
+    }
+    rewriter.replaceOpWithNewOp<ROCDL::FMed3Op>(op, op.getType(), op.getValue(),
+                                                op.getMin(), op.getMax());
+    return success();
+  }
+  amdgpu::Chipset chipset;
+};
+
+struct ConvertMathToROCDLPass final
+    : impl::ConvertMathToROCDLBase<ConvertMathToROCDLPass> {
+  using impl::ConvertMathToROCDLBase<
+      ConvertMathToROCDLPass>::ConvertMathToROCDLBase;
+
   void runOnOperation() override;
 };
-} // namespace
 
 void ConvertMathToROCDLPass::runOnOperation() {
   auto m = getOperation();
   MLIRContext *ctx = m.getContext();
+  FailureOr<amdgpu::Chipset> maybeChipset = amdgpu::Chipset::parse(chipset);
+  if (failed(maybeChipset)) {
+    emitError(UnknownLoc::get(ctx), "Invalid chipset name: " + chipset);
+    return signalPassFailure();
+  }
 
   RewritePatternSet patterns(&getContext());
   LowerToLLVMOptions options(ctx, DataLayout(m));
   LLVMTypeConverter converter(ctx, options);
+  patterns.add<ClampFOpConversion>(converter, *maybeChipset);
   populateMathToROCDLConversionPatterns(converter, patterns);
   ConversionTarget target(getContext());
-  target.addLegalDialect<BuiltinDialect, func::FuncDialect,
-                         vector::VectorDialect, LLVM::LLVMDialect>();
+  target
+      .addLegalDialect<BuiltinDialect, func::FuncDialect, vector::VectorDialect,
+                       LLVM::LLVMDialect, ROCDL::ROCDLDialect>();
   target.addIllegalOp<LLVM::CosOp, LLVM::ExpOp, LLVM::Exp2Op, LLVM::FAbsOp,
                       LLVM::FCeilOp, LLVM::FFloorOp, LLVM::FRemOp, LLVM::LogOp,
                       LLVM::Log10Op, LLVM::Log2Op, LLVM::PowOp, LLVM::SinOp,
diff --git a/mlir/test/Conversion/MathToROCDL/math-to-rocdl.mlir b/mlir/test/Conversion/MathToROCDL/math-to-rocdl.mlir
index dbff23339d8b3..541d8d53cac4c 100644
--- a/mlir/test/Conversion/MathToROCDL/math-to-rocdl.mlir
+++ b/mlir/test/Conversion/MathToROCDL/math-to-rocdl.mlir
@@ -1,4 +1,5 @@
-// RUN: mlir-opt %s -convert-math-to-rocdl -allow-unregistered-dialect -split-input-file | FileCheck %s
+// RUN: mlir-opt %s -allow-unregistered-dialect -split-input-file -pass-pipeline='builtin.module(convert-math-to-rocdl{chipset=gfx803})' | FileCheck %s --check-prefixes=CHECK,PRE9
+// RUN: mlir-opt %s -allow-unregistered-dialect -split-input-file -pass-pipeline='builtin.module(convert-math-to-rocdl{chipset=gfx942})' | FileCheck %s --check-prefixes=CHECK,POST9
 
 module @test_module {
   // CHECK: llvm.func @__ocml_fmod_f16(f16, f16) -> f16
@@ -596,3 +597,33 @@ module @test_module {
     func.return %result : vector<2x2xf16>
   }
 }
+
+// -----
+
+// f16 clamp → rocdl.fmed3 on gfx9+
+func.func @clampf_f16(%x: f16, %lo: f16, %hi: f16) -> f16 {
+  %r = math.clampf %x to [%lo, %hi] : f16
+  return %r : f16
+}
+
+// f32 clamp → rocdl.fmed3 on gfx9+
+func.func @clampf_f32(%x: f32, %lo: f32, %hi: f32) -> f32 {
+  %r = math.clampf %x to [%lo, %hi] : f32
+  return %r : f32
+}
+
+// POST9-LABEL: func.func @clampf_f16
+// POST9: rocdl.fmed3 {{.*}} : f16
+// POST9: return
+
+// POST9-LABEL: func.func @clampf_f32
+// POST9: rocdl.fmed3 {{.*}} : f32
+// POST9: return
+
+// PRE9-LABEL: func.func @clampf_f16
+// PRE9-NOT: rocdl.fmed3
+// PRE9: math.clampf {{.*}} : f16
+
+// PRE9-LABEL: func.func @clampf_f32
+// PRE9-NOT: rocdl.fmed3
+// PRE9: math.clampf {{.*}} : f32

Groverkss requested a review from krzysz00 September 26, 2025 14:59

Groverkss (Member) left a comment:

LGTM, but probably should get a review from @krzysz00

keshavvinayak01 (Contributor, Author) commented:

Ping
