
[RISCV][ISel] Add ISel support for experimental Zimop extension #77089

Merged: 6 commits into llvm:main on Jan 29, 2024

Conversation

JivanH
Contributor

@JivanH JivanH commented Jan 5, 2024

This implements ISel support for the mopr[0-31] and moprr[0-7] instructions for 32- and 64-bit targets.
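As a usage sketch, the per-number intrinsics this patch adds are exercised by the new tests in the diff below; a minimal IR example (declarations and names taken from those tests) looks like:

```llvm
declare i64 @llvm.riscv.mopr0.i64(i64)
declare i64 @llvm.riscv.moprr7.i64(i64, i64)

define i64 @example(i64 %a, i64 %b) {
  ; selects to: mop.r.0 a0, a0
  %r = call i64 @llvm.riscv.mopr0.i64(i64 %a)
  ; selects to: mop.rr.7 a0, a0, a1
  %s = call i64 @llvm.riscv.moprr7.i64(i64 %r, i64 %b)
  ret i64 %s
}
```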

@llvmbot
Collaborator

llvmbot commented Jan 5, 2024

@llvm/pr-subscribers-backend-risc-v

@llvm/pr-subscribers-llvm-ir

Author: Jivan Hakobyan (JivanH)

Changes

This implements ISel support for mopr[0-31] and moprr[0-8] instructions for 32 and 64 bits


Full diff: https://github.com/llvm/llvm-project/pull/77089.diff

6 Files Affected:

  • (modified) llvm/include/llvm/IR/IntrinsicsRISCV.td (+23)
  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.cpp (+171)
  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.h (+7)
  • (modified) llvm/lib/Target/RISCV/RISCVInstrInfoZimop.td (+28)
  • (added) llvm/test/CodeGen/RISCV/rv32zimop-intrinsic.ll (+48)
  • (added) llvm/test/CodeGen/RISCV/rv64zimop-intrinsic.ll (+97)
diff --git a/llvm/include/llvm/IR/IntrinsicsRISCV.td b/llvm/include/llvm/IR/IntrinsicsRISCV.td
index a391bc53cdb0e9..8ddda2a13e5c3b 100644
--- a/llvm/include/llvm/IR/IntrinsicsRISCV.td
+++ b/llvm/include/llvm/IR/IntrinsicsRISCV.td
@@ -108,6 +108,29 @@ let TargetPrefix = "riscv" in {
   def int_riscv_xperm8  : BitManipGPRGPRIntrinsics;
 } // TargetPrefix = "riscv"
 
+//===----------------------------------------------------------------------===//
+// May-Be-Operations
+
+let TargetPrefix = "riscv" in {
+
+  class MOPGPRIntrinsics
+      : DefaultAttrsIntrinsic<[llvm_any_ty],
+                              [LLVMMatchType<0>],
+                              [IntrNoMem, IntrSpeculatable]>;
+  class MOPGPRGPRIntrinsics
+      : DefaultAttrsIntrinsic<[llvm_any_ty],
+                              [LLVMMatchType<0>, LLVMMatchType<0>],
+                              [IntrNoMem, IntrSpeculatable]>;
+
+  // Zimop
+   foreach i = 0...31 in {
+    def int_riscv_mopr#i : MOPGPRIntrinsics;
+   }
+  foreach i = 0...7 in {
+    def int_riscv_moprr#i : MOPGPRGPRIntrinsics;
+  }
+} // TargetPrefix = "riscv"
+
 //===----------------------------------------------------------------------===//
 // Vectors
 
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index bc4b2b022c0ae9..f8c10fcd139f82 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -8404,6 +8404,73 @@ SDValue RISCVTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
         IntNo == Intrinsic::riscv_zip ? RISCVISD::ZIP : RISCVISD::UNZIP;
     return DAG.getNode(Opc, DL, XLenVT, Op.getOperand(1));
   }
+#define RISCV_MOPR_64_CASE(NAME, OPCODE)                                       \
+  case Intrinsic::riscv_##NAME: {                                              \
+    if (RV64LegalI32 && Subtarget.is64Bit() &&                                 \
+        Op.getValueType() == MVT::i32) {                                       \
+      SDValue NewOp =                                                          \
+          DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, Op.getOperand(1));        \
+      SDValue Res = DAG.getNode(OPCODE, DL, MVT::i64, NewOp);                  \
+      return DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res);                    \
+    }                                                                          \
+    return DAG.getNode(OPCODE, DL, XLenVT, Op.getOperand(1));                  \
+  }
+    RISCV_MOPR_64_CASE(mopr0, RISCVISD::MOPR0)
+    RISCV_MOPR_64_CASE(mopr1, RISCVISD::MOPR1)
+    RISCV_MOPR_64_CASE(mopr2, RISCVISD::MOPR2)
+    RISCV_MOPR_64_CASE(mopr3, RISCVISD::MOPR3)
+    RISCV_MOPR_64_CASE(mopr4, RISCVISD::MOPR4)
+    RISCV_MOPR_64_CASE(mopr5, RISCVISD::MOPR5)
+    RISCV_MOPR_64_CASE(mopr6, RISCVISD::MOPR6)
+    RISCV_MOPR_64_CASE(mopr7, RISCVISD::MOPR7)
+    RISCV_MOPR_64_CASE(mopr8, RISCVISD::MOPR8)
+    RISCV_MOPR_64_CASE(mopr9, RISCVISD::MOPR9)
+    RISCV_MOPR_64_CASE(mopr10, RISCVISD::MOPR10)
+    RISCV_MOPR_64_CASE(mopr11, RISCVISD::MOPR11)
+    RISCV_MOPR_64_CASE(mopr12, RISCVISD::MOPR12)
+    RISCV_MOPR_64_CASE(mopr13, RISCVISD::MOPR13)
+    RISCV_MOPR_64_CASE(mopr14, RISCVISD::MOPR14)
+    RISCV_MOPR_64_CASE(mopr15, RISCVISD::MOPR15)
+    RISCV_MOPR_64_CASE(mopr16, RISCVISD::MOPR16)
+    RISCV_MOPR_64_CASE(mopr17, RISCVISD::MOPR17)
+    RISCV_MOPR_64_CASE(mopr18, RISCVISD::MOPR18)
+    RISCV_MOPR_64_CASE(mopr19, RISCVISD::MOPR19)
+    RISCV_MOPR_64_CASE(mopr20, RISCVISD::MOPR20)
+    RISCV_MOPR_64_CASE(mopr21, RISCVISD::MOPR21)
+    RISCV_MOPR_64_CASE(mopr22, RISCVISD::MOPR22)
+    RISCV_MOPR_64_CASE(mopr23, RISCVISD::MOPR23)
+    RISCV_MOPR_64_CASE(mopr24, RISCVISD::MOPR24)
+    RISCV_MOPR_64_CASE(mopr25, RISCVISD::MOPR25)
+    RISCV_MOPR_64_CASE(mopr26, RISCVISD::MOPR26)
+    RISCV_MOPR_64_CASE(mopr27, RISCVISD::MOPR27)
+    RISCV_MOPR_64_CASE(mopr28, RISCVISD::MOPR28)
+    RISCV_MOPR_64_CASE(mopr29, RISCVISD::MOPR29)
+    RISCV_MOPR_64_CASE(mopr30, RISCVISD::MOPR30)
+    RISCV_MOPR_64_CASE(mopr31, RISCVISD::MOPR31)
+#undef RISCV_MOPR_64_CASE
+#define RISCV_MOPRR_64_CASE(NAME, OPCODE)                                      \
+  case Intrinsic::riscv_##NAME: {                                              \
+    if (RV64LegalI32 && Subtarget.is64Bit() &&                                 \
+        Op.getValueType() == MVT::i32) {                                       \
+      SDValue NewOp0 =                                                         \
+          DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, Op.getOperand(1));        \
+      SDValue NewOp1 =                                                         \
+          DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, Op.getOperand(2));        \
+      SDValue Res = DAG.getNode(OPCODE, DL, MVT::i64, NewOp0, NewOp1);         \
+      return DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res);                    \
+    }                                                                          \
+    return DAG.getNode(OPCODE, DL, XLenVT, Op.getOperand(1),                   \
+                       Op.getOperand(2));                                      \
+  }
+    RISCV_MOPRR_64_CASE(moprr0, RISCVISD::MOPRR0)
+    RISCV_MOPRR_64_CASE(moprr1, RISCVISD::MOPRR1)
+    RISCV_MOPRR_64_CASE(moprr2, RISCVISD::MOPRR2)
+    RISCV_MOPRR_64_CASE(moprr3, RISCVISD::MOPRR3)
+    RISCV_MOPRR_64_CASE(moprr4, RISCVISD::MOPRR4)
+    RISCV_MOPRR_64_CASE(moprr5, RISCVISD::MOPRR5)
+    RISCV_MOPRR_64_CASE(moprr6, RISCVISD::MOPRR6)
+    RISCV_MOPRR_64_CASE(moprr7, RISCVISD::MOPRR7)
+#undef RISCV_MOPRR_64_CASE
   case Intrinsic::riscv_clmul:
     if (RV64LegalI32 && Subtarget.is64Bit() && Op.getValueType() == MVT::i32) {
       SDValue NewOp0 =
@@ -11794,6 +11861,70 @@ void RISCVTargetLowering::ReplaceNodeResults(SDNode *N,
       Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res));
       return;
     }
+#define RISCV_MOPR_CASE(NAME, OPCODE)                                          \
+  case Intrinsic::riscv_##NAME: {                                              \
+    if (!Subtarget.is64Bit() || N->getValueType(0) != MVT::i32)                \
+      return;                                                                  \
+    SDValue NewOp =                                                            \
+        DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, N->getOperand(1));          \
+    SDValue Res = DAG.getNode(OPCODE, DL, MVT::i64, NewOp);                    \
+    Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res));          \
+    return;                                                                    \
+  }
+      RISCV_MOPR_CASE(mopr0, RISCVISD::MOPR0)
+      RISCV_MOPR_CASE(mopr1, RISCVISD::MOPR1)
+      RISCV_MOPR_CASE(mopr2, RISCVISD::MOPR2)
+      RISCV_MOPR_CASE(mopr3, RISCVISD::MOPR3)
+      RISCV_MOPR_CASE(mopr4, RISCVISD::MOPR4)
+      RISCV_MOPR_CASE(mopr5, RISCVISD::MOPR5)
+      RISCV_MOPR_CASE(mopr6, RISCVISD::MOPR6)
+      RISCV_MOPR_CASE(mopr7, RISCVISD::MOPR7)
+      RISCV_MOPR_CASE(mopr8, RISCVISD::MOPR8)
+      RISCV_MOPR_CASE(mopr9, RISCVISD::MOPR9)
+      RISCV_MOPR_CASE(mopr10, RISCVISD::MOPR10)
+      RISCV_MOPR_CASE(mopr11, RISCVISD::MOPR11)
+      RISCV_MOPR_CASE(mopr12, RISCVISD::MOPR12)
+      RISCV_MOPR_CASE(mopr13, RISCVISD::MOPR13)
+      RISCV_MOPR_CASE(mopr14, RISCVISD::MOPR14)
+      RISCV_MOPR_CASE(mopr15, RISCVISD::MOPR15)
+      RISCV_MOPR_CASE(mopr16, RISCVISD::MOPR16)
+      RISCV_MOPR_CASE(mopr17, RISCVISD::MOPR17)
+      RISCV_MOPR_CASE(mopr18, RISCVISD::MOPR18)
+      RISCV_MOPR_CASE(mopr19, RISCVISD::MOPR19)
+      RISCV_MOPR_CASE(mopr20, RISCVISD::MOPR20)
+      RISCV_MOPR_CASE(mopr21, RISCVISD::MOPR21)
+      RISCV_MOPR_CASE(mopr22, RISCVISD::MOPR22)
+      RISCV_MOPR_CASE(mopr23, RISCVISD::MOPR23)
+      RISCV_MOPR_CASE(mopr24, RISCVISD::MOPR24)
+      RISCV_MOPR_CASE(mopr25, RISCVISD::MOPR25)
+      RISCV_MOPR_CASE(mopr26, RISCVISD::MOPR26)
+      RISCV_MOPR_CASE(mopr27, RISCVISD::MOPR27)
+      RISCV_MOPR_CASE(mopr28, RISCVISD::MOPR28)
+      RISCV_MOPR_CASE(mopr29, RISCVISD::MOPR29)
+      RISCV_MOPR_CASE(mopr30, RISCVISD::MOPR30)
+      RISCV_MOPR_CASE(mopr31, RISCVISD::MOPR31)
+#undef RISCV_MOPR_CASE
+#define RISCV_MOPRR_CASE(NAME, OPCODE)                                         \
+  case Intrinsic::riscv_##NAME: {                                              \
+    if (!Subtarget.is64Bit() || N->getValueType(0) != MVT::i32)                \
+      return;                                                                  \
+    SDValue NewOp0 =                                                           \
+        DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, N->getOperand(1));          \
+    SDValue NewOp1 =                                                           \
+        DAG.getNode(ISD::ANY_EXTEND, DL, MVT::i64, N->getOperand(2));          \
+    SDValue Res = DAG.getNode(OPCODE, DL, MVT::i64, NewOp0, NewOp1);           \
+    Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, MVT::i32, Res));          \
+    return;                                                                    \
+  }
+      RISCV_MOPRR_CASE(moprr0, RISCVISD::MOPRR0)
+      RISCV_MOPRR_CASE(moprr1, RISCVISD::MOPRR1)
+      RISCV_MOPRR_CASE(moprr2, RISCVISD::MOPRR2)
+      RISCV_MOPRR_CASE(moprr3, RISCVISD::MOPRR3)
+      RISCV_MOPRR_CASE(moprr4, RISCVISD::MOPRR4)
+      RISCV_MOPRR_CASE(moprr5, RISCVISD::MOPRR5)
+      RISCV_MOPRR_CASE(moprr6, RISCVISD::MOPRR6)
+      RISCV_MOPRR_CASE(moprr7, RISCVISD::MOPRR7)
+#undef RISCV_MOPRR_CASE
     case Intrinsic::riscv_clmul: {
       if (!Subtarget.is64Bit() || N->getValueType(0) != MVT::i32)
         return;
@@ -18549,6 +18680,46 @@ const char *RISCVTargetLowering::getTargetNodeName(unsigned Opcode) const {
   NODE_NAME_CASE(CLMUL)
   NODE_NAME_CASE(CLMULH)
   NODE_NAME_CASE(CLMULR)
+  NODE_NAME_CASE(MOPR0)
+  NODE_NAME_CASE(MOPR1)
+  NODE_NAME_CASE(MOPR2)
+  NODE_NAME_CASE(MOPR3)
+  NODE_NAME_CASE(MOPR4)
+  NODE_NAME_CASE(MOPR5)
+  NODE_NAME_CASE(MOPR6)
+  NODE_NAME_CASE(MOPR7)
+  NODE_NAME_CASE(MOPR8)
+  NODE_NAME_CASE(MOPR9)
+  NODE_NAME_CASE(MOPR10)
+  NODE_NAME_CASE(MOPR11)
+  NODE_NAME_CASE(MOPR12)
+  NODE_NAME_CASE(MOPR13)
+  NODE_NAME_CASE(MOPR14)
+  NODE_NAME_CASE(MOPR15)
+  NODE_NAME_CASE(MOPR16)
+  NODE_NAME_CASE(MOPR17)
+  NODE_NAME_CASE(MOPR18)
+  NODE_NAME_CASE(MOPR19)
+  NODE_NAME_CASE(MOPR20)
+  NODE_NAME_CASE(MOPR21)
+  NODE_NAME_CASE(MOPR22)
+  NODE_NAME_CASE(MOPR23)
+  NODE_NAME_CASE(MOPR24)
+  NODE_NAME_CASE(MOPR25)
+  NODE_NAME_CASE(MOPR26)
+  NODE_NAME_CASE(MOPR27)
+  NODE_NAME_CASE(MOPR28)
+  NODE_NAME_CASE(MOPR29)
+  NODE_NAME_CASE(MOPR30)
+  NODE_NAME_CASE(MOPR31)
+  NODE_NAME_CASE(MOPRR0)
+  NODE_NAME_CASE(MOPRR1)
+  NODE_NAME_CASE(MOPRR2)
+  NODE_NAME_CASE(MOPRR3)
+  NODE_NAME_CASE(MOPRR4)
+  NODE_NAME_CASE(MOPRR5)
+  NODE_NAME_CASE(MOPRR6)
+  NODE_NAME_CASE(MOPRR7)
   NODE_NAME_CASE(SHA256SIG0)
   NODE_NAME_CASE(SHA256SIG1)
   NODE_NAME_CASE(SHA256SUM0)
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.h b/llvm/lib/Target/RISCV/RISCVISelLowering.h
index 18f58057558166..4fa8e50d141b51 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.h
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.h
@@ -142,6 +142,13 @@ enum NodeType : unsigned {
   SM4KS, SM4ED,
   SM3P0, SM3P1,
 
+  // May-Be-Operations
+  MOPR0, MOPR1, MOPR2, MOPR3, MOPR4, MOPR5, MOPR6, MOPR7, MOPR8, MOPR9, MOPR10,
+  MOPR11, MOPR12, MOPR13, MOPR14, MOPR15, MOPR16, MOPR17, MOPR18, MOPR19,
+  MOPR20, MOPR21, MOPR22, MOPR23, MOPR24, MOPR25, MOPR26, MOPR27, MOPR28,
+  MOPR29, MOPR30, MOPR31, MOPRR0, MOPRR1, MOPRR2, MOPRR3, MOPRR4, MOPRR5,
+  MOPRR6, MOPRR7,
+
   // Vector Extension
   FIRST_VL_VECTOR_OP,
   // VMV_V_V_VL matches the semantics of vmv.v.v but includes an extra operand
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoZimop.td b/llvm/lib/Target/RISCV/RISCVInstrInfoZimop.td
index 1e8c70046c6347..8f60b1badcb4af 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfoZimop.td
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfoZimop.td
@@ -34,6 +34,15 @@ class RVInstRMoprr<bits<4> imm4, bits<3> imm3, bits<3> funct3, RISCVOpcode opcod
   let Inst{25} = imm4{0};
 }
 
+foreach i = 0...31 in {
+  defvar riscvisd_moprx = "RISCVISD::MOPR"#i;
+  def riscv_mopr#i : SDNode<riscvisd_moprx,  SDTIntUnaryOp>;
+}
+foreach i = 0...7 in {
+  defvar riscvisd_moprrx = "RISCVISD::MOPRR"#i;
+  def riscv_moprr#i : SDNode<riscvisd_moprrx,  SDTIntBinOp>;
+}
+
 let hasSideEffects = 0, mayLoad = 0, mayStore = 0 in
 class RVMopr<bits<7> imm7, bits<5> imm5, bits<3> funct3,
              RISCVOpcode opcode, string opcodestr>
@@ -57,3 +66,22 @@ foreach i = 0...7 in {
   def MOPRR#i : RVMoprr<0b1001, i, 0b100, OPC_SYSTEM, "mop.rr."#i>,
                 Sched<[]>;
 }
+
+// Zimop instructions
+foreach i = 0...31 in {
+    defvar moprx = !cast<Instruction>("MOPR"#i);
+    defvar riscv_moprx = !cast<SDNode>("riscv_mopr"#i);
+    let Predicates = [HasStdExtZimop] in {
+    def : Pat<(XLenVT (riscv_moprx (XLenVT GPR:$rs1))),
+              (moprx GPR:$rs1)>;
+    } // Predicates = [HasStdExtZimop]
+}
+
+foreach i = 0...7 in {
+    defvar moprrx = !cast<Instruction>("MOPRR"#i);
+    defvar riscv_moprrx = !cast<SDNode>("riscv_moprr"#i);
+    let Predicates = [HasStdExtZimop] in {
+    def : Pat<(XLenVT (riscv_moprrx (XLenVT GPR:$rs1), (XLenVT GPR:$rs2))),
+              (moprrx GPR:$rs1, GPR:$rs2)>;
+    } // Predicates = [HasStdExtZimop]
+}
diff --git a/llvm/test/CodeGen/RISCV/rv32zimop-intrinsic.ll b/llvm/test/CodeGen/RISCV/rv32zimop-intrinsic.ll
new file mode 100644
index 00000000000000..bd1e369fc747a4
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/rv32zimop-intrinsic.ll
@@ -0,0 +1,48 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=riscv32 -mattr=+experimental-zimop -verify-machineinstrs < %s \
+; RUN:   | FileCheck %s -check-prefix=RV32ZIMOP
+
+declare i32 @llvm.riscv.mopr0.i32(i32 %a)
+
+define i32 @mopr0_32(i32 %a) nounwind {
+; RV32ZIMOP-LABEL: mopr0_32:
+; RV32ZIMOP:       # %bb.0:
+; RV32ZIMOP-NEXT:    mop.r.0 a0, a0
+; RV32ZIMOP-NEXT:    ret
+  %tmp = call i32 @llvm.riscv.mopr0.i32(i32 %a)
+  ret i32 %tmp
+}
+
+declare i32 @llvm.riscv.mopr31.i32(i32 %a)
+
+define i32 @mopr31_32(i32 %a) nounwind {
+; RV32ZIMOP-LABEL: mopr31_32:
+; RV32ZIMOP:       # %bb.0:
+; RV32ZIMOP-NEXT:    mop.r.31 a0, a0
+; RV32ZIMOP-NEXT:    ret
+  %tmp = call i32 @llvm.riscv.mopr31.i32(i32 %a)
+  ret i32 %tmp
+}
+
+declare i32 @llvm.riscv.moprr0.i32(i32 %a, i32 %b)
+
+define i32 @moprr0_32(i32 %a, i32 %b) nounwind {
+; RV32ZIMOP-LABEL: moprr0_32:
+; RV32ZIMOP:       # %bb.0:
+; RV32ZIMOP-NEXT:    mop.rr.0 a0, a0, a1
+; RV32ZIMOP-NEXT:    ret
+  %tmp = call i32 @llvm.riscv.moprr0.i32(i32 %a, i32 %b)
+  ret i32 %tmp
+}
+
+declare i32 @llvm.riscv.moprr7.i32(i32 %a, i32 %b)
+
+define i32 @moprr7_32(i32 %a, i32 %b) nounwind {
+; RV32ZIMOP-LABEL: moprr7_32:
+; RV32ZIMOP:       # %bb.0:
+; RV32ZIMOP-NEXT:    mop.rr.7 a0, a0, a1
+; RV32ZIMOP-NEXT:    ret
+  %tmp = call i32 @llvm.riscv.moprr7.i32(i32 %a, i32 %b)
+  ret i32 %tmp
+}
+
diff --git a/llvm/test/CodeGen/RISCV/rv64zimop-intrinsic.ll b/llvm/test/CodeGen/RISCV/rv64zimop-intrinsic.ll
new file mode 100644
index 00000000000000..209aad89cbc29e
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/rv64zimop-intrinsic.ll
@@ -0,0 +1,97 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc -mtriple=riscv64 -mattr=+experimental-zimop -verify-machineinstrs < %s \
+; RUN:   | FileCheck %s -check-prefix=RV64ZIMOP
+
+declare i64 @llvm.riscv.mopr0.i64(i64 %a)
+
+define i64 @mopr0_64(i64 %a) nounwind {
+; RV64ZIMOP-LABEL: mopr0_64:
+; RV64ZIMOP:       # %bb.0:
+; RV64ZIMOP-NEXT:    mop.r.0 a0, a0
+; RV64ZIMOP-NEXT:    ret
+  %tmp = call i64 @llvm.riscv.mopr0.i64(i64 %a)
+  ret i64 %tmp
+}
+
+declare i64 @llvm.riscv.mopr31.i64(i64 %a)
+
+define i64 @mopr31_64(i64 %a) nounwind {
+; RV64ZIMOP-LABEL: mopr31_64:
+; RV64ZIMOP:       # %bb.0:
+; RV64ZIMOP-NEXT:    mop.r.31 a0, a0
+; RV64ZIMOP-NEXT:    ret
+  %tmp = call i64 @llvm.riscv.mopr31.i64(i64 %a)
+  ret i64 %tmp
+}
+
+declare i64 @llvm.riscv.moprr0.i64(i64 %a, i64 %b)
+
+define i64 @moprr0_64(i64 %a, i64 %b) nounwind {
+; RV64ZIMOP-LABEL: moprr0_64:
+; RV64ZIMOP:       # %bb.0:
+; RV64ZIMOP-NEXT:    mop.rr.0 a0, a0, a1
+; RV64ZIMOP-NEXT:    ret
+  %tmp = call i64 @llvm.riscv.moprr0.i64(i64 %a, i64 %b)
+  ret i64 %tmp
+}
+
+declare i64 @llvm.riscv.moprr7.i64(i64 %a, i64 %b)
+
+define i64 @moprr7_64(i64 %a, i64 %b) nounwind {
+; RV64ZIMOP-LABEL: moprr7_64:
+; RV64ZIMOP:       # %bb.0:
+; RV64ZIMOP-NEXT:    mop.rr.7 a0, a0, a1
+; RV64ZIMOP-NEXT:    ret
+  %tmp = call i64 @llvm.riscv.moprr7.i64(i64 %a, i64 %b)
+  ret i64 %tmp
+}
+
+
+declare i32 @llvm.riscv.mopr0.i32(i32 %a)
+
+define signext i32 @mopr0_32(i32 signext %a) nounwind {
+; RV64ZIMOP-LABEL: mopr0_32:
+; RV64ZIMOP:       # %bb.0:
+; RV64ZIMOP-NEXT:    mop.r.0 a0, a0
+; RV64ZIMOP-NEXT:    sext.w a0, a0
+; RV64ZIMOP-NEXT:    ret
+  %tmp = call i32 @llvm.riscv.mopr0.i32(i32 %a)
+  ret i32 %tmp
+}
+
+declare i32 @llvm.riscv.mopr31.i32(i32 %a)
+
+define signext i32 @mopr31_32(i32 signext %a) nounwind {
+; RV64ZIMOP-LABEL: mopr31_32:
+; RV64ZIMOP:       # %bb.0:
+; RV64ZIMOP-NEXT:    mop.r.31 a0, a0
+; RV64ZIMOP-NEXT:    sext.w a0, a0
+; RV64ZIMOP-NEXT:    ret
+  %tmp = call i32 @llvm.riscv.mopr31.i32(i32 %a)
+  ret i32 %tmp
+}
+
+declare i32 @llvm.riscv.moprr0.i32(i32 %a, i32 %b)
+
+define signext i32 @moprr0_32(i32 signext %a, i32 signext %b) nounwind {
+; RV64ZIMOP-LABEL: moprr0_32:
+; RV64ZIMOP:       # %bb.0:
+; RV64ZIMOP-NEXT:    mop.rr.0 a0, a0, a1
+; RV64ZIMOP-NEXT:    sext.w a0, a0
+; RV64ZIMOP-NEXT:    ret
+  %tmp = call i32 @llvm.riscv.moprr0.i32(i32 %a, i32 %b)
+  ret i32 %tmp
+}
+
+declare i32 @llvm.riscv.moprr7.i32(i32 %a, i32 %b)
+
+define signext i32 @moprr7_32(i32 signext %a, i32 signext %b) nounwind {
+; RV64ZIMOP-LABEL: moprr7_32:
+; RV64ZIMOP:       # %bb.0:
+; RV64ZIMOP-NEXT:    mop.rr.7 a0, a0, a1
+; RV64ZIMOP-NEXT:    sext.w a0, a0
+; RV64ZIMOP-NEXT:    ret
+  %tmp = call i32 @llvm.riscv.moprr7.i32(i32 %a, i32 %b)
+  ret i32 %tmp
+}
+

@JivanH
Contributor Author

JivanH commented Jan 5, 2024

Please review
@topperc
@wangpc-pp
@dtcxzyw

@dtcxzyw dtcxzyw requested review from wangpc-pp and topperc and removed request for wangpc-pp January 5, 2024 13:18
[IntrNoMem, IntrSpeculatable]>;

// Zimop
foreach i = 0...31 in {
Collaborator


This is indented 3 spaces

Contributor Author


Removed

case Intrinsic::riscv_##NAME: { \
if (RV64LegalI32 && Subtarget.is64Bit() && \
Op.getValueType() == MVT::i32) { \
SDValue NewOp0 = \
Collaborator


I'm not sure the MOP intrinsics should have 32-bit forms on RV64. Without knowing what real operation the maybe-op becomes, it's not clear that ANY_EXTEND is the right behavior. But that's a problem for the intrinsic spec, so I guess I'll raise it over there.

class MOPGPRIntrinsics
: DefaultAttrsIntrinsic<[llvm_any_ty],
[LLVMMatchType<0>],
[IntrNoMem, IntrSpeculatable]>;
Collaborator


These properties make these intrinsics useless for implementing any of the defined overrides of may-be-operations like SSPUSH or SSPOPCHK. But that's how the intrinsic spec is written, so this is fine.

@wangpc-pp
Contributor

wangpc-pp commented Jan 8, 2024

Just a thought: can we have one intrinsic each for mop.r and mop.rr, with the mop number as a constant integer argument? If so, we can reduce some cases in legalization/lowering. And of course, we would need to change the ISel logic.

@topperc
Collaborator

topperc commented Jan 8, 2024

Just a thought: can we just have only one intrinsic for mop.r and mop.rr? The number becomes a constant integer argument of these two intrinsics. If so, we can reduce some cases in legalization/lowering. And of course, we should change the ISel logic.

How would we represent the non-existent second source register for mop.r?

@wangpc-pp
Contributor

Just a thought: can we just have only one intrinsic for mop.r and mop.rr? The number becomes a constant integer argument of these two intrinsics. If so, we can reduce some cases in legalization/lowering. And of course, we should change the ISel logic.

How would we represent the non-existent second source register for mop.r?

My wording was not clear. I meant one intrinsic for mop.r and another for mop.rr, so two intrinsics in total. :-)

@topperc
Collaborator

topperc commented Jan 8, 2024

Just a thought: can we just have only one intrinsic for mop.r and mop.rr? The number becomes a constant integer argument of these two intrinsics. If so, we can reduce some cases in legalization/lowering. And of course, we should change the ISel logic.

How would we represent the non-existent second source register for mop.r?

My wording was not clear. I meant one intrinsic for mop.r and another for mop.rr, so two intrinsics in total. :-)

Thanks. I agree.
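For comparison, the design proposed above would collapse the forty per-number intrinsics into two, taking the mop number as a constant operand. A hypothetical IR shape (the intrinsic names, operand order, and the constant-argument requirement here are illustrative sketches of the proposal, not part of this patch):

```llvm
; one intrinsic per format; the final operand is the mop number
; and must be a compile-time constant (ImmArg)
declare i64 @llvm.riscv.mopr.i64(i64, i64)        ; mop.r.N  rd, rs1
declare i64 @llvm.riscv.moprr.i64(i64, i64, i64)  ; mop.rr.N rd, rs1, rs2

define i64 @example(i64 %a, i64 %b) {
  %r = call i64 @llvm.riscv.mopr.i64(i64 %a, i64 5)          ; mop.r.5
  %s = call i64 @llvm.riscv.moprr.i64(i64 %r, i64 %b, i64 2) ; mop.rr.2
  ret i64 %s
}
```

With this shape, ISel would match the constant operand in a single tblgen pattern family instead of emitting one macro case per intrinsic.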

@topperc
Collaborator

topperc commented Jan 8, 2024

This implements ISel support for mopr[0-31] and moprr[0-8] instructions for 32 and 64 bits

Should that be moprr[0-7]?

class MOPGPRIntrinsics
: DefaultAttrsIntrinsic<[llvm_any_ty],
[LLVMMatchType<0>, LLVMMatchType<0>],
[IntrNoMem, IntrSpeculatable]>;
Collaborator


Need ImmArg<ArgIndex<1>>

class MOPGPRGPRIntrinsics
: DefaultAttrsIntrinsic<[llvm_any_ty],
[LLVMMatchType<0>, LLVMMatchType<0>, LLVMMatchType<0>],
[IntrNoMem, IntrSpeculatable]>;
Collaborator


Need ImmArg<ArgIndex<2>>

} \
return DAG.getNode(OPCODE, DL, XLenVT, Op.getOperand(1)); \
}
RISCV_MOPR_64_CASE(0, RISCVISD::MOPR0)
Collaborator


Can we add RISCVISD::MOPRR and RISCVISD::MOPR and do the immediate matching in tblgen patterns.

Collaborator


Are you going to make this change?

Contributor


I tried to change it as you asked, but unfortunately I had some trouble with it. I am not very familiar with tblgen syntax and could not get the corresponding instruction selected based on the third operand. An example would be very helpful.

Collaborator


I took this patch and modified it here topperc@401f1e6

Contributor Author


I guess @ln8-8 tried to append the third argument to the instruction name to avoid defining 8/32 separate patterns for the mopr/moprr instructions.
It was something like this:

def : Pat<(XLenVT (riscv_mopr GPR:$rs1, uimm5:$imm)),
            (!cast<Instruction>("MOPR"#!repr($imm->getSExtValue())) GPR:$rs1)>;

Anyway, Thank you.

@JivanH
Contributor Author

JivanH commented Jan 9, 2024

This implements ISel support for mopr[0-31] and moprr[0-8] instructions for 32 and 64 bits

Should that be moprr[0-7]?

Oh, yes. I've edited the comment.

Collaborator

@topperc topperc left a comment


LGTM

Contributor

@wangpc-pp wangpc-pp left a comment


LGTM with nits.

llvm/lib/Target/RISCV/RISCVInstrInfoZimop.td (review comments resolved)
llvm/include/llvm/IR/IntrinsicsRISCV.td (review comments resolved)
@topperc
Collaborator

topperc commented Jan 25, 2024

Do you need someone to commit this for you?

@JivanH
Contributor Author

JivanH commented Jan 25, 2024

Do you need someone to commit this for you?

Yes, that would be nice. We don't have write access.

@topperc topperc merged commit 0461448 into llvm:main Jan 29, 2024
3 of 4 checks passed
@JivanH JivanH deleted the zimop_ISel branch January 30, 2024 09:14