
[RISCV] Add support for experimental Zimop extension #74824

Closed

Conversation

JivanH (Contributor) commented Dec 8, 2023

This implements experimental support for the Zimop extension as specified here:
https://github.com/riscv/riscv-isa-manual/blob/main/src/zimop.adoc.

This change adds IR intrinsics for the mop.r.[n] and mop.rr.[n] instructions of the Zimop extension, based on
https://github.com/riscv-non-isa/riscv-c-api-doc/blob/master/riscv-c-api.md. It also adds assembly support.
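
For illustration, a minimal sketch of how the new builtins are invoked (the function and variable names here are hypothetical; the builtin names, operand order, and constant ranges follow the test file added in this patch):

#include <stdint.h>

/* mop.r.[n]: one source operand, n must be a compile-time constant in [0, 31]. */
uint64_t probe_r(uint64_t a) {
  return __builtin_riscv_mopr_64(a, 0);     /* selects mop.r.0 */
}

/* mop.rr.[n]: two source operands, n must be a compile-time constant in [0, 7]. */
uint32_t probe_rr(uint32_t a, uint32_t b) {
  return __builtin_riscv_moprr_32(a, b, 7); /* selects mop.rr.7 */
}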

@llvmbot added the clang, backend:RISC-V, clang:frontend, clang:codegen, mc, llvm:support, and llvm:ir labels on Dec 8, 2023
llvmbot (Collaborator) commented Dec 8, 2023

@llvm/pr-subscribers-backend-risc-v
@llvm/pr-subscribers-mc
@llvm/pr-subscribers-clang

@llvm/pr-subscribers-llvm-ir

Author: Jivan Hakobyan (JivanH)

Changes

This implements experimental support for the Zimop extension as specified here:
https://github.com/riscv/riscv-isa-manual/blob/main/src/zimop.adoc.

This change adds IR intrinsics for the mop.r.[n] and mop.rr.[n] instructions of the Zimop extension, based on
https://github.com/riscv-non-isa/riscv-c-api-doc/blob/master/riscv-c-api.md.


Patch is 39.04 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/74824.diff

22 Files Affected:

  • (modified) clang/include/clang/Basic/BuiltinsRISCV.def (+5)
  • (modified) clang/lib/CodeGen/CGBuiltin.cpp (+34)
  • (modified) clang/lib/Sema/SemaChecking.cpp (+8)
  • (added) clang/test/CodeGen/RISCV/rvb-intrinsics/zimop.c (+104)
  • (modified) llvm/docs/RISCVUsage.rst (+3)
  • (modified) llvm/include/llvm/IR/IntrinsicsRISCV.td (+23)
  • (modified) llvm/lib/Support/RISCVISAInfo.cpp (+2)
  • (modified) llvm/lib/Target/RISCV/RISCVFeatures.td (+5)
  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.cpp (+171)
  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.h (+6)
  • (modified) llvm/lib/Target/RISCV/RISCVInstrFormats.td (+21)
  • (modified) llvm/lib/Target/RISCV/RISCVInstrInfo.td (+53)
  • (modified) llvm/lib/Target/RISCV/RISCVSchedRocket.td (+1)
  • (modified) llvm/lib/Target/RISCV/RISCVSchedSiFive7.td (+1)
  • (modified) llvm/lib/Target/RISCV/RISCVSchedSyntacoreSCR1.td (+1)
  • (modified) llvm/lib/Target/RISCV/RISCVSchedule.td (+14)
  • (modified) llvm/test/CodeGen/RISCV/attributes.ll (+4)
  • (added) llvm/test/CodeGen/RISCV/rv32zimop-intrinsic.ll (+47)
  • (added) llvm/test/CodeGen/RISCV/rv64zimop-intrinsic.ll (+96)
  • (added) llvm/test/MC/RISCV/rv32zimop-invalid.s (+6)
  • (added) llvm/test/MC/RISCV/rvzimop-valid.s (+26)
  • (modified) llvm/unittests/Support/RISCVISAInfoTest.cpp (+1)
diff --git a/clang/include/clang/Basic/BuiltinsRISCV.def b/clang/include/clang/Basic/BuiltinsRISCV.def
index 1528b18c82ead..6ba5288f9cbd1 100644
--- a/clang/include/clang/Basic/BuiltinsRISCV.def
+++ b/clang/include/clang/Basic/BuiltinsRISCV.def
@@ -89,5 +89,10 @@ TARGET_BUILTIN(__builtin_riscv_sm3p1, "UiUi", "nc", "zksh")
 TARGET_BUILTIN(__builtin_riscv_ntl_load, "v.", "t", "zihintntl")
 TARGET_BUILTIN(__builtin_riscv_ntl_store, "v.", "t", "zihintntl")
 
+TARGET_BUILTIN(__builtin_riscv_mopr_32, "UiUiUi", "nc", "experimental-zimop")
+TARGET_BUILTIN(__builtin_riscv_mopr_64, "UWiUWiUWi", "nc", "experimental-zimop,64bit")
+TARGET_BUILTIN(__builtin_riscv_moprr_32, "UiUiUiUi", "nc", "experimental-zimop")
+TARGET_BUILTIN(__builtin_riscv_moprr_64, "UWiUWiUWiUWi", "nc", "experimental-zimop,64bit")
+
 #undef BUILTIN
 #undef TARGET_BUILTIN
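
As a reading aid for the prototype strings above, the sketch below spells out the C-level signatures they appear to encode ("Ui" = unsigned int, "UWi" = 64-bit unsigned integer, "nc" = nothrow/const); this interpretation of the Builtins.def encoding is an assumption, not something stated in the patch.

#include <stdint.h>

/* Signatures implied by the prototype strings (illustrative, not real declarations):
 *
 *   unsigned int __builtin_riscv_mopr_32 (unsigned int a, unsigned int n);
 *   uint64_t     __builtin_riscv_mopr_64 (uint64_t a, uint64_t n);
 *   unsigned int __builtin_riscv_moprr_32(unsigned int a, unsigned int b, unsigned int n);
 *   uint64_t     __builtin_riscv_moprr_64(uint64_t a, uint64_t b, uint64_t n);
 *
 * The *_64 variants additionally require the 64bit target feature.
 */

/* Hypothetical wrapper: n is fixed at compile time, as the Sema check below requires. */
static inline uint64_t mop_r_1(uint64_t a) {
  return __builtin_riscv_mopr_64(a, 1);
}
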
diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index 0d8b3e4aaad47..11ba665dda938 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -20808,6 +20808,10 @@ Value *CodeGenFunction::EmitRISCVBuiltinExpr(unsigned BuiltinID,
   case RISCV::BI__builtin_riscv_clz_64:
   case RISCV::BI__builtin_riscv_ctz_32:
   case RISCV::BI__builtin_riscv_ctz_64:
+  case RISCV::BI__builtin_riscv_mopr_32:
+  case RISCV::BI__builtin_riscv_mopr_64:
+  case RISCV::BI__builtin_riscv_moprr_32:
+  case RISCV::BI__builtin_riscv_moprr_64:
   case RISCV::BI__builtin_riscv_clmul_32:
   case RISCV::BI__builtin_riscv_clmul_64:
   case RISCV::BI__builtin_riscv_clmulh_32:
@@ -20848,6 +20852,36 @@ Value *CodeGenFunction::EmitRISCVBuiltinExpr(unsigned BuiltinID,
       return Result;
     }
 
+    // Zimop
+    case RISCV::BI__builtin_riscv_mopr_32:
+    case RISCV::BI__builtin_riscv_mopr_64: {
+      unsigned N = cast<ConstantInt>(Ops[1])->getZExtValue();
+      Function *F = nullptr;
+      if (N <= 1) {
+        F = CGM.getIntrinsic(Intrinsic::riscv_mopr0 + N, {ResultType});
+      } else if (N >= 10 && N <= 19) {
+        F = CGM.getIntrinsic(Intrinsic::riscv_mopr10 + N - 10, {ResultType});
+      } else if (N == 2) {
+        F = CGM.getIntrinsic(Intrinsic::riscv_mopr2, {ResultType});
+      } else if (N >= 20 && N <= 29) {
+        F = CGM.getIntrinsic(Intrinsic::riscv_mopr20 + N - 20, {ResultType});
+      } else if (N == 3) {
+        F = CGM.getIntrinsic(Intrinsic::riscv_mopr3, {ResultType});
+      } else if (N >= 30 && N <= 31) {
+        F = CGM.getIntrinsic(Intrinsic::riscv_mopr30 + N - 30, {ResultType});
+      } else if (N >= 4 && N <= 9) {
+        F = CGM.getIntrinsic(Intrinsic::riscv_mopr4 + N - 4, {ResultType});
+      } else {
+        llvm_unreachable("unexpected builtin ID");
+      }
+      return Builder.CreateCall(F, {Ops[0]}, "");
+    }
+    case RISCV::BI__builtin_riscv_moprr_32:
+    case RISCV::BI__builtin_riscv_moprr_64: {
+      unsigned N = cast<ConstantInt>(Ops[2])->getZExtValue();
+      Function *F = CGM.getIntrinsic(Intrinsic::riscv_moprr0 + N, {ResultType});
+      return Builder.CreateCall(F, {Ops[0], Ops[1]}, "");
+    }
     // Zbc
     case RISCV::BI__builtin_riscv_clmul_32:
     case RISCV::BI__builtin_riscv_clmul_64:
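
A note on the if/else ladder above (an inference, not stated in the patch): the Intrinsic::ID values are assumed to follow the alphabetical order of the intrinsic names, so riscv_mopr1 is followed by riscv_mopr10..riscv_mopr19 rather than riscv_mopr2, which is why the mapping from N to an intrinsic ID is non-contiguous; moprr0..moprr7 are single-digit and therefore contiguous, so a plain offset suffices there. A small self-contained C sketch of the same index arithmetic:

#include <assert.h>

/* Hypothetical helper: position of "mopr<n>" among mopr0..mopr31 when the
 * names are sorted as strings, mirroring the branching in CGBuiltin.cpp. */
static unsigned mopr_alpha_pos(unsigned n) {
  if (n <= 1)             return n;            /* mopr0, mopr1    -> 0, 1   */
  if (n >= 10 && n <= 19) return 2 + n - 10;   /* mopr10..mopr19  -> 2..11  */
  if (n == 2)             return 12;           /* mopr2           -> 12     */
  if (n >= 20 && n <= 29) return 13 + n - 20;  /* mopr20..mopr29  -> 13..22 */
  if (n == 3)             return 23;           /* mopr3           -> 23     */
  if (n >= 30)            return 24 + n - 30;  /* mopr30, mopr31  -> 24, 25 */
  return 26 + n - 4;                           /* mopr4..mopr9    -> 26..31 */
}

int main(void) {
  /* Every n in [0, 31] must land on a distinct position. */
  unsigned seen = 0;
  for (unsigned n = 0; n < 32; ++n)
    seen |= 1u << mopr_alpha_pos(n);
  assert(seen == 0xFFFFFFFFu);
  return 0;
}
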
diff --git a/clang/lib/Sema/SemaChecking.cpp b/clang/lib/Sema/SemaChecking.cpp
index fc6ee6b2c5ab4..80ca886bda6ec 100644
--- a/clang/lib/Sema/SemaChecking.cpp
+++ b/clang/lib/Sema/SemaChecking.cpp
@@ -5468,6 +5468,14 @@ bool Sema::CheckRISCVBuiltinFunctionCall(const TargetInfo &TI,
   // Check if rnum is in [0, 10]
   case RISCV::BI__builtin_riscv_aes64ks1i:
     return SemaBuiltinConstantArgRange(TheCall, 1, 0, 10);
+  // Check if n of mop.r.[n] is in [0, 31]
+  case RISCV::BI__builtin_riscv_mopr_32:
+  case RISCV::BI__builtin_riscv_mopr_64:
+    return SemaBuiltinConstantArgRange(TheCall, 1, 0, 31);
+  // Check if n of mop.rr.[n] is in [0, 7]
+  case RISCV::BI__builtin_riscv_moprr_32:
+  case RISCV::BI__builtin_riscv_moprr_64:
+    return SemaBuiltinConstantArgRange(TheCall, 2, 0, 7);
   // Check if value range for vxrm is in [0, 3]
   case RISCVVector::BI__builtin_rvv_vaaddu_vv:
   case RISCVVector::BI__builtin_rvv_vaaddu_vx:
diff --git a/clang/test/CodeGen/RISCV/rvb-intrinsics/zimop.c b/clang/test/CodeGen/RISCV/rvb-intrinsics/zimop.c
new file mode 100644
index 0000000000000..790c746f84606
--- /dev/null
+++ b/clang/test/CodeGen/RISCV/rvb-intrinsics/zimop.c
@@ -0,0 +1,104 @@
+// NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py
+// RUN: %clang_cc1 -triple riscv32 -target-feature +experimental-zimop -emit-llvm %s -o - \
+// RUN:     -disable-O0-optnone | opt -S -passes=mem2reg \
+// RUN:     | FileCheck %s  -check-prefix=RV32ZIMOP
+// RUN: %clang_cc1 -triple riscv64 -target-feature +experimental-zimop -emit-llvm %s -o - \
+// RUN:     -disable-O0-optnone | opt -S -passes=mem2reg \
+// RUN:     | FileCheck %s  -check-prefix=RV64ZIMOP
+
+#include <stdint.h>
+
+#if __riscv_xlen == 64
+// RV64ZIMOP-LABEL: @mopr_0_64(
+// RV64ZIMOP-NEXT:  entry:
+// RV64ZIMOP-NEXT:    [[TMP0:%.*]] = call i64 @llvm.riscv.mopr0.i64(i64 [[A:%.*]])
+// RV64ZIMOP-NEXT:    ret i64 [[TMP0]]
+//
+uint64_t mopr_0_64(uint64_t a) {
+  return __builtin_riscv_mopr_64(a, 0);
+}
+
+// RV64ZIMOP-LABEL: @mopr_31_64(
+// RV64ZIMOP-NEXT:  entry:
+// RV64ZIMOP-NEXT:    [[TMP0:%.*]] = call i64 @llvm.riscv.mopr31.i64(i64 [[A:%.*]])
+// RV64ZIMOP-NEXT:    ret i64 [[TMP0]]
+//
+uint64_t mopr_31_64(uint64_t a) {
+  return __builtin_riscv_mopr_64(a, 31);
+}
+
+// RV64ZIMOP-LABEL: @moprr_0_64(
+// RV64ZIMOP-NEXT:  entry:
+// RV64ZIMOP-NEXT:    [[TMP0:%.*]] = call i64 @llvm.riscv.moprr0.i64(i64 [[A:%.*]], i64 [[B:%.*]])
+// RV64ZIMOP-NEXT:    ret i64 [[TMP0]]
+//
+uint64_t moprr_0_64(uint64_t a, uint64_t b) {
+  return __builtin_riscv_moprr_64(a, b, 0);
+}
+
+// RV64ZIMOP-LABEL: @moprr_7_64(
+// RV64ZIMOP-NEXT:  entry:
+// RV64ZIMOP-NEXT:    [[TMP0:%.*]] = call i64 @llvm.riscv.moprr7.i64(i64 [[A:%.*]], i64 [[B:%.*]])
+// RV64ZIMOP-NEXT:    ret i64 [[TMP0]]
+//
+uint64_t moprr_7_64(uint64_t a, uint64_t b) {
+  return __builtin_riscv_moprr_64(a, b, 7);
+}
+
+#endif
+
+// RV32ZIMOP-LABEL: @mopr_0_32(
+// RV32ZIMOP-NEXT:  entry:
+// RV32ZIMOP-NEXT:    [[TMP0:%.*]] = call i32 @llvm.riscv.mopr0.i32(i32 [[A:%.*]])
+// RV32ZIMOP-NEXT:    ret i32 [[TMP0]]
+//
+// RV64ZIMOP-LABEL: @mopr_0_32(
+// RV64ZIMOP-NEXT:  entry:
+// RV64ZIMOP-NEXT:    [[TMP0:%.*]] = call i32 @llvm.riscv.mopr0.i32(i32 [[A:%.*]])
+// RV64ZIMOP-NEXT:    ret i32 [[TMP0]]
+//
+uint32_t mopr_0_32(uint32_t a) {
+  return __builtin_riscv_mopr_32(a, 0);
+}
+
+// RV32ZIMOP-LABEL: @mopr_31_32(
+// RV32ZIMOP-NEXT:  entry:
+// RV32ZIMOP-NEXT:    [[TMP0:%.*]] = call i32 @llvm.riscv.mopr31.i32(i32 [[A:%.*]])
+// RV32ZIMOP-NEXT:    ret i32 [[TMP0]]
+//
+// RV64ZIMOP-LABEL: @mopr_31_32(
+// RV64ZIMOP-NEXT:  entry:
+// RV64ZIMOP-NEXT:    [[TMP0:%.*]] = call i32 @llvm.riscv.mopr31.i32(i32 [[A:%.*]])
+// RV64ZIMOP-NEXT:    ret i32 [[TMP0]]
+//
+uint32_t mopr_31_32(uint32_t a) {
+  return __builtin_riscv_mopr_32(a, 31);
+}
+
+// RV32ZIMOP-LABEL: @moprr_0_32(
+// RV32ZIMOP-NEXT:  entry:
+// RV32ZIMOP-NEXT:    [[TMP0:%.*]] = call i32 @llvm.riscv.moprr0.i32(i32 [[A:%.*]], i32 [[B:%.*]])
+// RV32ZIMOP-NEXT:    ret i32 [[TMP0]]
+//
+// RV64ZIMOP-LABEL: @moprr_0_32(
+// RV64ZIMOP-NEXT:  entry:
+// RV64ZIMOP-NEXT:    [[TMP0:%.*]] = call i32 @llvm.riscv.moprr0.i32(i32 [[A:%.*]], i32 [[B:%.*]])
+// RV64ZIMOP-NEXT:    ret i32 [[TMP0]]
+//
+uint32_t moprr_0_32(uint32_t a, uint32_t b) {
+  return __builtin_riscv_moprr_32(a, b, 0);
+}
+
+// RV32ZIMOP-LABEL: @moprr_7_32(
+// RV32ZIMOP-NEXT:  entry:
+// RV32ZIMOP-NEXT:    [[TMP0:%.*]] = call i32 @llvm.riscv.moprr7.i32(i32 [[A:%.*]], i32 [[B:%.*]])
+// RV32ZIMOP-NEXT:    ret i32 [[TMP0]]
+//
+// RV64ZIMOP-LABEL: @moprr_7_32(
+// RV64ZIMOP-NEXT:  entry:
+// RV64ZIMOP-NEXT:    [[TMP0:%.*]] = call i32 @llvm.riscv.moprr7.i32(i32 [[A:%.*]], i32 [[B:%.*]])
+// RV64ZIMOP-NEXT:    ret i32 [[TMP0]]
+//
+uint32_t moprr_7_32(uint32_t a, uint32_t b) {
+  return __builtin_riscv_moprr_32(a, b, 7);
+}
\ No newline at end of file
diff --git a/llvm/docs/RISCVUsage.rst b/llvm/docs/RISCVUsage.rst
index 65dd0d83448ed..bd2f81fba186d 100644
--- a/llvm/docs/RISCVUsage.rst
+++ b/llvm/docs/RISCVUsage.rst
@@ -208,6 +208,9 @@ The primary goal of experimental support is to assist in the process of ratifica
 ``experimental-zvbb``, ``experimental-zvbc``, ``experimental-zvkb``, ``experimental-zvkg``, ``experimental-zvkn``, ``experimental-zvknc``, ``experimental-zvkned``, ``experimental-zvkng``, ``experimental-zvknha``, ``experimental-zvknhb``, ``experimental-zvks``, ``experimental-zvksc``, ``experimental-zvksed``, ``experimental-zvksg``, ``experimental-zvksh``, ``experimental-zvkt``
   LLVM implements the `1.0.0-rc2 specification <https://github.com/riscv/riscv-crypto/releases/download/v/riscv-crypto-spec-vector.pdf>`__. Note that current vector crypto extension version can be found in: <https://github.com/riscv/riscv-crypto>.
 
+``experimental-zimop``
+  LLVM implements the `v0.1 proposed specification <https://github.com/riscv/riscv-isa-manual/blob/main/src/zimop.adoc>`__.
+
 To use an experimental extension from `clang`, you must add `-menable-experimental-extensions` to the command line, and specify the exact version of the experimental extension you are using.  To use an experimental extension with LLVM's internal developer tools (e.g. `llc`, `llvm-objdump`, `llvm-mc`), you must prefix the extension name with `experimental-`.  Note that you don't need to specify the version with internal tools, and shouldn't include the `experimental-` prefix with `clang`.
 
 Vendor Extensions
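
To make the usage note above concrete, a short sketch of enabling the extension from source; the __riscv_zimop feature-test macro and the exact -march spelling are assumptions, not part of this patch.

/* Assumed invocation (version 0.1 taken from the RISCVISAInfo.cpp change below):
 *   clang -menable-experimental-extensions -march=rv64i_zimop0p1 -O2 -c zimop_guard.c
 */
#include <stdint.h>

uint64_t maybe_mop(uint64_t x) {
#if defined(__riscv_zimop)
  return __builtin_riscv_mopr_64(x, 5);   /* emitted as mop.r.5 when Zimop is enabled */
#else
  return x;                               /* plain fallback otherwise */
#endif
}
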
diff --git a/llvm/include/llvm/IR/IntrinsicsRISCV.td b/llvm/include/llvm/IR/IntrinsicsRISCV.td
index 20c6a525a86ba..fcb11c8c51398 100644
--- a/llvm/include/llvm/IR/IntrinsicsRISCV.td
+++ b/llvm/include/llvm/IR/IntrinsicsRISCV.td
@@ -108,6 +108,29 @@ let TargetPrefix = "riscv" in {
   def int_riscv_xperm8  : BitManipGPRGPRIntrinsics;
 } // TargetPrefix = "riscv"
 
+//===----------------------------------------------------------------------===//
+// May-Be-Operations
+
+let TargetPrefix = "riscv" in {
+
+  class MOPGPRIntrinsics
+      : DefaultAttrsIntrinsic<[llvm_any_ty],
+                              [LLVMMatchType<0>],
+                              [IntrNoMem, IntrSpeculatable]>;
+  class MOPGPRGPRIntrinsics
+      : DefaultAttrsIntrinsic<[llvm_any_ty],
+                              [LLVMMatchType<0>, LLVMMatchType<0>],
+                              [IntrNoMem, IntrSpeculatable]>;
+
+  // Zimop
+   foreach i = 0...31 in {
+    def int_riscv_mopr#i : MOPGPRIntrinsics;
+   }
+  foreach i = 0...7 in {
+    def int_riscv_moprr#i : MOPGPRGPRIntrinsics;
+  }
+} // TargetPrefix = "riscv"
+
 //===----------------------------------------------------------------------===//
 // Vectors
 
diff --git a/llvm/lib/Support/RISCVISAInfo.cpp b/llvm/lib/Support/RISCVISAInfo.cpp
index 6322748430063..1b303ba1e9431 100644
--- a/llvm/lib/Support/RISCVISAInfo.cpp
+++ b/llvm/lib/Support/RISCVISAInfo.cpp
@@ -177,6 +177,8 @@ static const RISCVSupportedExtension SupportedExperimentalExtensions[] = {
     {"zicfilp", RISCVExtensionVersion{0, 2}},
     {"zicond", RISCVExtensionVersion{1, 0}},
 
+    {"zimop", RISCVExtensionVersion{0, 1}},
+
     {"ztso", RISCVExtensionVersion{0, 1}},
 
     {"zvbb", RISCVExtensionVersion{1, 0}},
diff --git a/llvm/lib/Target/RISCV/RISCVFeatures.td b/llvm/lib/Target/RISCV/RISCVFeatures.td
index 7d142d38d0f9a..eddb7c33627f0 100644
--- a/llvm/lib/Target/RISCV/RISCVFeatures.td
+++ b/llvm/lib/Target/RISCV/RISCVFeatures.td
@@ -687,6 +687,11 @@ def HasStdExtZicond : Predicate<"Subtarget->hasStdExtZicond()">,
                                 AssemblerPredicate<(all_of FeatureStdExtZicond),
                                 "'Zicond' (Integer Conditional Operations)">;
 
+def FeatureStdExtZimop : SubtargetFeature<"experimental-zimop", "HasStdExtZimop", "true",
+                                          "'Zimop' (May-Be-Operations)">;
+def HasStdExtZimop : Predicate<"Subtarget->hasStdExtZimop()">,
+                               AssemblerPredicate<(all_of FeatureStdExtZimop),
+                               "'Zimop' (May-Be-Operations)">;
 def FeatureStdExtSmaia
     : SubtargetFeature<"smaia", "HasStdExtSmaia", "true",
                        "'Smaia' (Smaia encompasses all added CSRs and all "
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index f2ec422b54a92..45fbea2088559 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -8367,6 +8367,73 @@ SDValue RISCVTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
         IntNo == Intrinsic::riscv_zip ? RISCVISD::ZIP : RISCVISD::UNZIP;
     return DAG.getNode(Opc, DL, XLenVT, Op.getOperand(1));
   }
+#define RISCV_MOPR_64_CASE(NAME, OPCODE)                                       \
+  case Intrinsic::riscv_##NAME: {                                              \
+    if (RV64LegalI32 && Subtarget.is64Bit() &&                                 \
+        Op.getValueType() == MVT::i32) {                                       \
+      SDValue NewOp =                                                          \
+          DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, Op.getOperand(1));        \
+      SDValue Res = DAG.getNode(OPCODE, DL, MVT::i64, NewOp);                  \
+      return DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res);                    \
+    }                                                                          \
+    return DAG.getNode(OPCODE, DL, XLenVT, Op.getOperand(1));                  \
+  }
+    RISCV_MOPR_64_CASE(mopr0, RISCVISD::MOPR0)
+    RISCV_MOPR_64_CASE(mopr1, RISCVISD::MOPR1)
+    RISCV_MOPR_64_CASE(mopr2, RISCVISD::MOPR2)
+    RISCV_MOPR_64_CASE(mopr3, RISCVISD::MOPR3)
+    RISCV_MOPR_64_CASE(mopr4, RISCVISD::MOPR4)
+    RISCV_MOPR_64_CASE(mopr5, RISCVISD::MOPR5)
+    RISCV_MOPR_64_CASE(mopr6, RISCVISD::MOPR6)
+    RISCV_MOPR_64_CASE(mopr7, RISCVISD::MOPR7)
+    RISCV_MOPR_64_CASE(mopr8, RISCVISD::MOPR8)
+    RISCV_MOPR_64_CASE(mopr9, RISCVISD::MOPR9)
+    RISCV_MOPR_64_CASE(mopr10, RISCVISD::MOPR10)
+    RISCV_MOPR_64_CASE(mopr11, RISCVISD::MOPR11)
+    RISCV_MOPR_64_CASE(mopr12, RISCVISD::MOPR12)
+    RISCV_MOPR_64_CASE(mopr13, RISCVISD::MOPR13)
+    RISCV_MOPR_64_CASE(mopr14, RISCVISD::MOPR14)
+    RISCV_MOPR_64_CASE(mopr15, RISCVISD::MOPR15)
+    RISCV_MOPR_64_CASE(mopr16, RISCVISD::MOPR16)
+    RISCV_MOPR_64_CASE(mopr17, RISCVISD::MOPR17)
+    RISCV_MOPR_64_CASE(mopr18, RISCVISD::MOPR18)
+    RISCV_MOPR_64_CASE(mopr19, RISCVISD::MOPR19)
+    RISCV_MOPR_64_CASE(mopr20, RISCVISD::MOPR20)
+    RISCV_MOPR_64_CASE(mopr21, RISCVISD::MOPR21)
+    RISCV_MOPR_64_CASE(mopr22, RISCVISD::MOPR22)
+    RISCV_MOPR_64_CASE(mopr23, RISCVISD::MOPR23)
+    RISCV_MOPR_64_CASE(mopr24, RISCVISD::MOPR24)
+    RISCV_MOPR_64_CASE(mopr25, RISCVISD::MOPR25)
+    RISCV_MOPR_64_CASE(mopr26, RISCVISD::MOPR26)
+    RISCV_MOPR_64_CASE(mopr27, RISCVISD::MOPR27)
+    RISCV_MOPR_64_CASE(mopr28, RISCVISD::MOPR28)
+    RISCV_MOPR_64_CASE(mopr29, RISCVISD::MOPR29)
+    RISCV_MOPR_64_CASE(mopr30, RISCVISD::MOPR30)
+    RISCV_MOPR_64_CASE(mopr31, RISCVISD::MOPR31)
+#undef RISCV_MOPR_64_CASE
+#define RISCV_MOPRR_64_CASE(NAME, OPCODE)                                      \
+  case Intrinsic::riscv_##NAME: {                                              \
+    if (RV64LegalI32 && Subtarget.is64Bit() &&                                 \
+        Op.getValueType() == MVT::i32) {                                       \
+      SDValue NewOp0 =                                                         \
+          DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, Op.getOperand(1));        \
+      SDValue NewOp1 =                                                         \
+          DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, Op.getOperand(2));        \
+      SDValue Res = DAG.getNode(OPCODE, DL, MVT::i64, NewOp0, NewOp1);         \
+      return DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res);                    \
+    }                                                                          \
+    return DAG.getNode(OPCODE, DL, XLenVT, Op.getOperand(1),                   \
+                       Op.getOperand(2));                                      \
+  }
+    RISCV_MOPRR_64_CASE(moprr0, RISCVISD::MOPRR0)
+    RISCV_MOPRR_64_CASE(moprr1, RISCVISD::MOPRR1)
+    RISCV_MOPRR_64_CASE(moprr2, RISCVISD::MOPRR2)
+    RISCV_MOPRR_64_CASE(moprr3, RISCVISD::MOPRR3)
+    RISCV_MOPRR_64_CASE(moprr4, RISCVISD::MOPRR4)
+    RISCV_MOPRR_64_CASE(moprr5, RISCVISD::MOPRR5)
+    RISCV_MOPRR_64_CASE(moprr6, RISCVISD::MOPRR6)
+    RISCV_MOPRR_64_CASE(moprr7, RISCVISD::MOPRR7)
+#undef RISCV_MOPRR_64_CASE
   case Intrinsic::riscv_clmul:
     if (RV64LegalI32 && Subtarget.is64Bit() && Op.getValueType() == MVT::i32) {
       SDValue NewOp0 =
@@ -11633,6 +11700,70 @@ void RISCVTargetLowering::ReplaceNodeResults(SDNode *N,
       Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res));
       return;
     }
+#define RISCV_MOPR_CASE(NAME, OPCODE)                                          \
+  case Intrinsic::riscv_##NAME: {                                              \
+    if (!Subtarget.is64Bit() || N->getValueType(0) != MVT::i32)                \
+      return;                                                                  \
+    SDValue NewOp =                                                            \
+        DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, N->getOperand(1));          \
+    SDValue Res = DAG.getNode(OPCODE, DL, MVT::i64, NewOp);                    \
+    Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res));          \
+    return;                                                                    \
+  }
+      RISCV_MOPR_CASE(mopr0, RISCVISD::MOPR0)
+      RISCV_MOPR_CASE(mopr1, RISCVISD::MOPR1)
+      RISCV_MOPR_CASE(mopr2, RISCVISD::MOPR2)
+      RISCV_MOPR_CASE(mopr3, RISCVISD::MOPR3)
+      RISCV_MOPR_CASE(mopr4, RISCVISD::MOPR4)
+      RISCV_MOPR_CASE(mopr5, RISCVISD::MOPR5)
+      RISCV_MOPR_CASE(mopr6, RISCVISD::MOPR6)
+      RISCV_MOPR_CASE(mopr7, RISCVISD::MOPR7)
+      RISCV_MOPR_CASE(mopr8, RISCVISD::MOPR8)
+      RISCV_MOPR_CASE(mopr9, RISCVISD::MOPR9)
+      RISCV_MOPR_CASE(mopr10, RISCVISD::MOPR10)
+      RISCV_MOPR_CASE(mopr11, RISCVISD::MOPR11)
+      RISCV_MOPR_CASE(mopr12, RISCVISD::MOPR12)
+      RISCV_MOPR_CASE(mopr13, RISCVISD::MOPR13)
+      RISCV_MOPR_CASE(mopr14, RISCVISD::MOPR14)
+      RISCV_MOPR_CASE(mopr15, RISCVISD::MOPR15)
+      RISCV_MOPR_CASE(mopr16, RISCVISD::MOPR16)
+      RISCV_MOPR_CASE(mopr17, RISCVISD::MOPR17)
+      RISCV_MOPR_CASE(mopr18, RISCVISD::MOPR18)
+      RISCV_MOPR_CASE(mopr19, RISCVISD::MOPR19)
+      RISCV_MOPR_CASE(mopr20, RISCVISD::MOPR20)
+      RISCV_MOPR_CASE(mopr21, RISCVISD::MOPR21)
+      RISCV_MOPR_CASE(mopr22, RISCVISD::MOPR22)
+      RISCV_MOPR_CASE(mopr23, RISCVISD::MOPR23)
+      RISCV_MOPR_CASE(mopr24, RISCVISD::MOPR24)
+      RISCV_MOPR_CASE(mopr25, RISCVISD::MOPR25)
+      RISCV_MOPR_CASE(mopr26, RISCVISD::MOPR26)
+      RISCV_MOPR_CASE(mopr27, RISCVISD::MOPR27)
+      RISCV_MOPR_CASE(mopr28, RISCVISD::MOPR28)
+      RISCV_MOPR_CASE(mopr29, RISCVISD::MOPR29)
+      RISCV_MOPR_CASE(mopr30, RISCVISD::MOPR30)
+      RISCV_MOPR_CASE(mopr31, RISCVISD::MOPR31)
+#undef RISCV_MOPR_CASE
+#define RISCV_MOPRR_CASE(NAME, OPCODE)                                         \
+  case Intrinsic::riscv_##NAME: {                                              \
+    if (!Subtarget.is64Bit() || N->getValueType(0) != MVT::i32)                \
+      return;                                                                  \
+    SDValue NewOp0 =                                                           \
+        DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, N->getOperand(1));          \
+    SDValue NewOp1 =                                                           \
+        DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, N->getOperand(2));          \
+    SDValue Res = DAG.getNode(OPCODE, DL, MVT::i64, NewOp0, NewOp1);           \
+    Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res));          \
+    return;                                                                    \
+  }
+      RISCV_MOPRR_CASE(moprr0, RISCVISD::MOPRR0)
+      RISCV_MOPRR_CASE(moprr1, RISCVISD::MOPRR1)
+      RISCV_MOPRR_CASE(moprr2, RISCVISD::MOPRR2)
+      RISCV_MOPRR_CASE(moprr3, RISCVISD::MOP...
[truncated]

llvmbot (Collaborator) commented Dec 8, 2023

@llvm/pr-subscribers-llvm-support


JivanH (Contributor, Author) commented Dec 8, 2023

dtcxzyw (Member) commented Dec 8, 2023

I guess you should split it into a patch series:

  • MC support (and docs)
  • Sched support
  • ISel support
  • Builtin intrinsic support in clang

wangpc-pp (Contributor) commented Dec 11, 2023

Please reorganize the patch as @dtcxzyw suggested. :-)

I didn't notice this extension before, so I may not be asking the right question here: these MOPs can be redefined, so are we able to schedule them in the compiler? We don't know the cost of MOPs if we don't know how they are used. For example, MOPs could be redefined as simple ALU instructions, or as expensive instructions like DIV/REM. I don't know how to model this now, but I don't think defining Sched resources for MOPs is the right way.

wangpc-pp (Contributor)

Just checked similar extensions in ARM (like AUT, PAC, BTI, etc.): these are defined in the NOP encoding space and their scheduling resources are overridden with InstRW.

JivanH (Contributor, Author) commented Dec 11, 2023

@dtcxzyw @wangpc-pp

Thank you for your review and suggestions. I will reorganize it as suggested and send a patch series.

topperc (Collaborator) commented Dec 11, 2023

What is the use case for someone to write the maybe ops directly? Wouldn't you program to the extensions built on top of them like Zicfiss?

JivanH closed this on Dec 14, 2023