[RISCV][GISEL] Add support for lowerFormalArguments that contain scalable vector types #70882

Merged: 10 commits into llvm:main on Nov 14, 2023

Conversation

@michaelmaitland (Contributor) commented on Nov 1, 2023

Scalable vector types from LLVM IR can be lowered to scalable vector
types in MIR according to the RISCVAssignFn.

This patch is stacked on #70881

@llvmbot (Collaborator) commented on Nov 1, 2023

@llvm/pr-subscribers-backend-risc-v

@llvm/pr-subscribers-llvm-globalisel

Author: Michael Maitland (michaelmaitland)

Changes

Scalable vector types from LLVM IR can be lowered to scalable vector
types in MIR according to the RISCV::CC_RISCV function.

This patch is stacked on #70881


Patch is 26.97 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/70882.diff

6 Files Affected:

  • (modified) llvm/include/llvm/CodeGen/TargetRegisterInfo.h (+3-3)
  • (modified) llvm/lib/CodeGen/LowLevelType.cpp (+1-1)
  • (modified) llvm/lib/CodeGen/MachineVerifier.cpp (+17-5)
  • (modified) llvm/lib/CodeGen/TargetRegisterInfo.cpp (+9-10)
  • (modified) llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp (+7)
  • (added) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-args.ll (+693)
diff --git a/llvm/include/llvm/CodeGen/TargetRegisterInfo.h b/llvm/include/llvm/CodeGen/TargetRegisterInfo.h
index 5bf27e40eee8909..3f64bf972daf21e 100644
--- a/llvm/include/llvm/CodeGen/TargetRegisterInfo.h
+++ b/llvm/include/llvm/CodeGen/TargetRegisterInfo.h
@@ -278,8 +278,8 @@ class TargetRegisterInfo : public MCRegisterInfo {
   // DenseMapInfo<unsigned> uses -1u and -2u.
 
   /// Return the size in bits of a register from class RC.
-  unsigned getRegSizeInBits(const TargetRegisterClass &RC) const {
-    return getRegClassInfo(RC).RegSize;
+  TypeSize getRegSizeInBits(const TargetRegisterClass &RC) const {
+    return TypeSize::Fixed(getRegClassInfo(RC).RegSize);
   }
 
   /// Return the size in bytes of the stack slot allocated to hold a spilled
@@ -853,7 +853,7 @@ class TargetRegisterInfo : public MCRegisterInfo {
     const TargetRegisterClass *RC) const = 0;
 
   /// Returns size in bits of a phys/virtual/generic register.
-  unsigned getRegSizeInBits(Register Reg, const MachineRegisterInfo &MRI) const;
+  TypeSize getRegSizeInBits(Register Reg, const MachineRegisterInfo &MRI) const;
 
   /// Get the weight in units of pressure for this register unit.
   virtual unsigned getRegUnitWeight(unsigned RegUnit) const = 0;
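For context on this signature change: llvm::TypeSize pairs a known-minimum bit count with a scalable flag, so a scalable register size (vscale x N bits) can be propagated where a plain unsigned could not. Below is a minimal self-contained sketch of that semantic, using a hypothetical stand-in type rather than the real llvm::TypeSize:

```cpp
// Hypothetical stand-in for llvm::TypeSize, for illustration only.
#include <cassert>
#include <cstdint>
#include <iostream>

struct SketchTypeSize {
  uint64_t MinBits; // known-minimum size in bits
  bool Scalable;    // true => actual size is vscale * MinBits, vscale >= 1

  static SketchTypeSize getFixed(uint64_t B) { return {B, false}; }
  static SketchTypeSize getScalable(uint64_t B) { return {B, true}; }

  // Only non-scalable sizes have a compile-time fixed value.
  uint64_t getFixedValue() const {
    assert(!Scalable && "scalable size has no fixed value");
    return MinBits;
  }
};

int main() {
  SketchTypeSize GPR = SketchTypeSize::getFixed(64);   // e.g. an RV64 GPR
  SketchTypeSize VR = SketchTypeSize::getScalable(64); // e.g. an RVV LMUL=1 reg

  std::cout << GPR.getFixedValue() << " bits\n"; // fine: exactly 64 bits
  // VR.getFixedValue() would assert: its size is only known as a multiple
  // of vscale, which is exactly why the return type above becomes TypeSize.
  std::cout << "VR min " << VR.MinBits
            << (VR.Scalable ? " bits, scalable\n" : " bits\n");
}
```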
diff --git a/llvm/lib/CodeGen/LowLevelType.cpp b/llvm/lib/CodeGen/LowLevelType.cpp
index 24c30b756737b20..cd85bf606989f9e 100644
--- a/llvm/lib/CodeGen/LowLevelType.cpp
+++ b/llvm/lib/CodeGen/LowLevelType.cpp
@@ -17,7 +17,7 @@ using namespace llvm;
 
 LLT::LLT(MVT VT) {
   if (VT.isVector()) {
-    bool asVector = VT.getVectorMinNumElements() > 1;
+    bool asVector = VT.getVectorMinNumElements() > 1 || VT.isScalableVector();
     init(/*IsPointer=*/false, asVector, /*IsScalar=*/!asVector,
          VT.getVectorElementCount(), VT.getVectorElementType().getSizeInBits(),
          /*AddressSpace=*/0);
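The condition above keeps <vscale x 1 x ...> types as vector LLTs; previously a one-element vector was canonicalized to a scalar, which is wrong when the element count is scalable. A small sketch of the predicate, with a hand-rolled type standing in for the real MVT queries:

```cpp
#include <iostream>

// Hand-rolled stand-in for the two MVT queries used above.
struct VecVT {
  unsigned MinNumElements;
  bool Scalable;
};

// Mirrors the patched condition: a type stays a vector if it has more than
// one element, or if its element count is scalable (so <vscale x 1 x i8>
// must not collapse to a plain scalar s8).
bool buildAsVector(const VecVT &VT) {
  return VT.MinNumElements > 1 || VT.Scalable;
}

int main() {
  std::cout << buildAsVector({1, false}) << '\n'; // 0: <1 x i8> becomes s8
  std::cout << buildAsVector({1, true}) << '\n';  // 1: <vscale x 1 x i8> stays a vector
  std::cout << buildAsVector({4, false}) << '\n'; // 1: <4 x i8> is a vector
}
```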
diff --git a/llvm/lib/CodeGen/MachineVerifier.cpp b/llvm/lib/CodeGen/MachineVerifier.cpp
index dadaf60fa09da04..9837a93d8339974 100644
--- a/llvm/lib/CodeGen/MachineVerifier.cpp
+++ b/llvm/lib/CodeGen/MachineVerifier.cpp
@@ -1937,8 +1937,9 @@ void MachineVerifier::visitMachineInstrBefore(const MachineInstr *MI) {
 
     // If we have only one valid type, this is likely a copy between a virtual
     // and physical register.
-    unsigned SrcSize = 0;
-    unsigned DstSize = 0;
+    TypeSize SrcSize = TRI->getRegSizeInBits(SrcReg, *MRI);
+    TypeSize DstSize = TRI->getRegSizeInBits(DstReg, *MRI);
+
     if (SrcReg.isPhysical() && DstTy.isValid()) {
       const TargetRegisterClass *SrcRC =
           TRI->getMinimalPhysRegClassLLT(SrcReg, DstTy);
@@ -1946,7 +1947,7 @@ void MachineVerifier::visitMachineInstrBefore(const MachineInstr *MI) {
         SrcSize = TRI->getRegSizeInBits(*SrcRC);
     }
 
-    if (SrcSize == 0)
+    if (SrcSize.isZero())
       SrcSize = TRI->getRegSizeInBits(SrcReg, *MRI);
 
     if (DstReg.isPhysical() && SrcTy.isValid()) {
@@ -1956,10 +1957,21 @@ void MachineVerifier::visitMachineInstrBefore(const MachineInstr *MI) {
         DstSize = TRI->getRegSizeInBits(*DstRC);
     }
 
-    if (DstSize == 0)
+    if (DstSize.isZero())
       DstSize = TRI->getRegSizeInBits(DstReg, *MRI);
 
-    if (SrcSize != 0 && DstSize != 0 && SrcSize != DstSize) {
+    // If the Dst is scalable and the Src is fixed, then the Dst can only hold
+    // the Src if the minimum size Dst can hold is at least as big as Src.
+    if (DstSize.isScalable() && !SrcSize.isScalable() &&
+        DstSize.getKnownMinValue() <= SrcSize.getFixedValue())
+      break;
+    // If the Src is scalable and the Dst is fixed, then the Dst can only
+    // hold the Src if the Src is known to fit in the Dst.
+    if (SrcSize.isScalable() && !DstSize.isScalable() &&
+        TypeSize::isKnownLE(DstSize, SrcSize))
+      break;
+
+    if (SrcSize.isNonZero() && DstSize.isNonZero() && SrcSize != DstSize) {
       if (!DstOp.getSubReg() && !SrcOp.getSubReg()) {
         report("Copy Instruction is illegal with mismatching sizes", MI);
         errs() << "Def Size = " << DstSize << ", Src Size = " << SrcSize
diff --git a/llvm/lib/CodeGen/TargetRegisterInfo.cpp b/llvm/lib/CodeGen/TargetRegisterInfo.cpp
index 1bb35f40facfd0f..c50b1cf9422717a 100644
--- a/llvm/lib/CodeGen/TargetRegisterInfo.cpp
+++ b/llvm/lib/CodeGen/TargetRegisterInfo.cpp
@@ -499,7 +499,7 @@ bool TargetRegisterInfo::regmaskSubsetEqual(const uint32_t *mask0,
   return true;
 }
 
-unsigned
+TypeSize
 TargetRegisterInfo::getRegSizeInBits(Register Reg,
                                      const MachineRegisterInfo &MRI) const {
   const TargetRegisterClass *RC{};
@@ -508,16 +508,15 @@ TargetRegisterInfo::getRegSizeInBits(Register Reg,
     // Instead, we need to access a register class that contains Reg and
     // get the size of that register class.
     RC = getMinimalPhysRegClass(Reg);
-  } else {
-    LLT Ty = MRI.getType(Reg);
-    unsigned RegSize = Ty.isValid() ? Ty.getSizeInBits() : 0;
-    // If Reg is not a generic register, query the register class to
-    // get its size.
-    if (RegSize)
-      return RegSize;
-    // Since Reg is not a generic register, it must have a register class.
-    RC = MRI.getRegClass(Reg);
+    assert(RC && "Unable to deduce the register class");
+    return getRegSizeInBits(*RC);
   }
+  LLT Ty = MRI.getType(Reg);
+  if (Ty.isValid())
+    return Ty.getSizeInBits();
+
+  // Since Reg is not a generic register, it may have a register class.
+  RC = MRI.getRegClass(Reg);
   assert(RC && "Unable to deduce the register class");
   return getRegSizeInBits(*RC);
 }
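The restructuring makes the lookup order explicit: physical registers resolve through their minimal register class (always a fixed size), generic virtual registers prefer their LLT (the only path that can produce a scalable size), and any remaining virtual register falls back to its register class. A compact model of that order, with hypothetical stand-in types rather than the LLVM API:

```cpp
#include <cstdint>
#include <iostream>
#include <optional>

struct Size { uint64_t MinBits; bool Scalable; };

struct RegModel {
  bool Physical;
  std::optional<Size> LLTSize; // present for generic vregs with a valid type
  Size ClassSize;              // size of the (minimal) register class
};

// Hypothetical model of the patched control flow, not the LLVM API.
Size regSizeInBits(const RegModel &R) {
  if (R.Physical)
    return R.ClassSize; // fixed: TypeSize::Fixed(...) in the real code
  if (R.LLTSize)
    return *R.LLTSize;  // may be scalable, e.g. <vscale x 4 x s32>
  return R.ClassSize;   // non-generic vreg: ask its register class
}

int main() {
  RegModel PhysGPR{true, std::nullopt, {64, false}};
  RegModel GenericVec{false, Size{128, true}, {0, false}};
  std::cout << regSizeInBits(PhysGPR).MinBits << '\n';     // 64
  std::cout << regSizeInBits(GenericVec).Scalable << '\n'; // 1 (scalable)
}
```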
diff --git a/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp b/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp
index 1806fcf4b5a1664..1fb9c1136addd03 100644
--- a/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp
+++ b/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp
@@ -14,6 +14,7 @@
 
 #include "RISCVCallLowering.h"
 #include "RISCVISelLowering.h"
+#include "RISCVMachineFunctionInfo.h"
 #include "RISCVSubtarget.h"
 #include "llvm/CodeGen/Analysis.h"
 #include "llvm/CodeGen/GlobalISel/MachineIRBuilder.h"
@@ -181,6 +182,9 @@ struct RISCVIncomingValueAssigner : public CallLowering::IncomingValueAssigner {
     const DataLayout &DL = MF.getDataLayout();
     const RISCVSubtarget &Subtarget = MF.getSubtarget<RISCVSubtarget>();
 
+    if (LocVT.isScalableVector())
+      MF.getInfo<RISCVMachineFunctionInfo>()->setIsVectorCall();
+
     return RISCVAssignFn(DL, Subtarget.getTargetABI(), ValNo, ValVT, LocVT,
                          LocInfo, Flags, State, /*IsFixed=*/true, IsRet,
                          Info.Ty, *Subtarget.getTargetLowering(),
@@ -303,6 +307,9 @@ static bool isSupportedArgumentType(Type *T, const RISCVSubtarget &Subtarget) {
     return true;
   if (T->isPointerTy())
     return true;
+  // TODO: Support fixed vector types.
+  if (T->isVectorTy() && T->isScalableTy() && Subtarget.hasVInstructions())
+    return true;
   return false;
 }
 
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-args.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-args.ll
new file mode 100644
index 000000000000000..8dfe00f678622fe
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-args.ll
@@ -0,0 +1,693 @@
+; NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+; RUN: llc -mtriple=riscv32 -mattr=+v -global-isel -stop-after=irtranslator \
+; RUN:   -verify-machineinstrs < %s | FileCheck -check-prefix=RV32 %s
+; RUN: llc -mtriple=riscv64 -mattr=+v -global-isel -stop-after=irtranslator \
+; RUN:   -verify-machineinstrs < %s | FileCheck -check-prefix=RV64 %s
+
+; ==========================================================================
+; ============================= Scalable Types =============================
+; ==========================================================================
+
+define void @test_args_nxv1i8(<vscale x 1 x i8> %a) {
+  ; RV32-LABEL: name: test_args_nxv1i8
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 1 x s8>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv1i8
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 1 x s8>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv2i8(<vscale x 2 x i8> %a) {
+  ; RV32-LABEL: name: test_args_nxv2i8
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 2 x s8>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv2i8
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 2 x s8>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv4i8(<vscale x 4 x i8> %a) {
+  ; RV32-LABEL: name: test_args_nxv4i8
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 4 x s8>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv4i8
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 4 x s8>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv8i8(<vscale x 8 x i8> %a) {
+  ; RV32-LABEL: name: test_args_nxv8i8
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 8 x s8>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv8i8
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 8 x s8>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv16i8(<vscale x 16 x i8> %a) {
+  ; RV32-LABEL: name: test_args_nxv16i8
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m2
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 16 x s8>) = COPY $v8m2
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv16i8
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m2
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 16 x s8>) = COPY $v8m2
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv32i8(<vscale x 32 x i8> %a) {
+  ; RV32-LABEL: name: test_args_nxv32i8
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m4
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 32 x s8>) = COPY $v8m4
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv32i8
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m4
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 32 x s8>) = COPY $v8m4
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv64i8(<vscale x 64 x i8> %a) {
+  ; RV32-LABEL: name: test_args_nxv64i8
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 64 x s8>) = COPY $v8m8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv64i8
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 64 x s8>) = COPY $v8m8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv1i16(<vscale x 1 x i16> %a) {
+  ; RV32-LABEL: name: test_args_nxv1i16
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 1 x s16>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv1i16
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 1 x s16>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv2i16(<vscale x 2 x i16> %a) {
+  ; RV32-LABEL: name: test_args_nxv2i16
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 2 x s16>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv2i16
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 2 x s16>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv4i16(<vscale x 4 x i16> %a) {
+  ; RV32-LABEL: name: test_args_nxv4i16
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 4 x s16>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv4i16
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 4 x s16>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv8i16(<vscale x 8 x i16> %a) {
+  ; RV32-LABEL: name: test_args_nxv8i16
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m2
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 8 x s16>) = COPY $v8m2
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv8i16
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m2
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 8 x s16>) = COPY $v8m2
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv16i16(<vscale x 16 x i16> %a) {
+  ; RV32-LABEL: name: test_args_nxv16i16
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m4
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 16 x s16>) = COPY $v8m4
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv16i16
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m4
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 16 x s16>) = COPY $v8m4
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv32i16(<vscale x 32 x i16> %a) {
+  ; RV32-LABEL: name: test_args_nxv32i16
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 32 x s16>) = COPY $v8m8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv32i16
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 32 x s16>) = COPY $v8m8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv1i32(<vscale x 1 x i32> %a) {
+  ; RV32-LABEL: name: test_args_nxv1i32
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 1 x s32>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv1i32
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 1 x s32>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv2i32(<vscale x 2 x i32> %a) {
+  ; RV32-LABEL: name: test_args_nxv2i32
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 2 x s32>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv2i32
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 2 x s32>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv4i32(<vscale x 4 x i32> %a) {
+  ; RV32-LABEL: name: test_args_nxv4i32
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m2
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 4 x s32>) = COPY $v8m2
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv4i32
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m2
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 4 x s32>) = COPY $v8m2
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv8i32(<vscale x 8 x i32> %a) {
+  ; RV32-LABEL: name: test_args_nxv8i32
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m4
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 8 x s32>) = COPY $v8m4
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv8i32
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m4
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 8 x s32>) = COPY $v8m4
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv16i32(<vscale x 16 x i32> %a) {
+  ; RV32-LABEL: name: test_args_nxv16i32
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 16 x s32>) = COPY $v8m8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv16i32
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 16 x s32>) = COPY $v8m8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv1i64(<vscale x 1 x i64> %a) {
+  ; RV32-LABEL: name: test_args_nxv1i64
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 1 x s64>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv1i64
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 1 x s64>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv2i64(<vscale x 2 x i64> %a) {
+  ; RV32-LABEL: name: test_args_nxv2i64
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m2
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 2 x s64>) = COPY $v8m2
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv2i64
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m2
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 2 x s64>) = COPY $v8m2
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv4i64(<vscale x 4 x i64> %a) {
+  ; RV32-LABEL: name: test_args_nxv4i64
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m4
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 4 x s64>) = COPY $v8m4
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv4i64
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m4
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 4 x s64>) = COPY $v8m4
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv8i64(<vscale x 8 x i64> %a) {
+  ; RV32-LABEL: name: test_args_nxv8i64
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8m8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 8 x s64>) = COPY $v8m8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv8i64
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8m8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 8 x s64>) = COPY $v8m8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv64i1(<vscale x 64 x i1> %a) {
+  ; RV32-LABEL: name: test_args_nxv64i1
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 64 x s1>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv64i1
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 64 x s1>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv32i1(<vscale x 32 x i1> %a) {
+  ; RV32-LABEL: name: test_args_nxv32i1
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+  ; RV32-NEXT: {{  $}}
+  ; RV32-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 32 x s1>) = COPY $v8
+  ; RV32-NEXT:   PseudoRET
+  ;
+  ; RV64-LABEL: name: test_args_nxv32i1
+  ; RV64: bb.1.entry:
+  ; RV64-NEXT:   liveins: $v8
+  ; RV64-NEXT: {{  $}}
+  ; RV64-NEXT:   [[COPY:%[0-9]+]]:_(<vscale x 32 x s1>) = COPY $v8
+  ; RV64-NEXT:   PseudoRET
+entry:
+  ret void
+}
+
+define void @test_args_nxv16i1(<vscale x 16 x i1> %a) {
+  ; RV32-LABEL: name: test_args_nxv16i1
+  ; RV32: bb.1.entry:
+  ; RV32-NEXT:   liveins: $v8
+ ...
[truncated]

github-actions bot commented on Nov 1, 2023

✅ With the latest revision this PR passed the C/C++ code formatter.

Resolved review threads (marked outdated):

  • llvm/lib/CodeGen/GlobalISel/CallLowering.cpp
  • llvm/lib/CodeGen/GlobalISel/MachineIRBuilder.cpp
  • llvm/lib/CodeGen/MachineVerifier.cpp
@michaelmaitland (Contributor, Author) commented:

ping

@topperc (Collaborator) left a comment

LGTM

@michaelmaitland michaelmaitland merged commit a7bbcc4 into llvm:main Nov 14, 2023
3 checks passed
zahiraam pushed a commit to zahiraam/llvm-project that referenced this pull request Nov 20, 2023
…able vector types (llvm#70882)

Scalable vector types from LLVM IR can be lowered to scalable vector
types in MIR according to the RISCVAssignFn.
types in MIR according to the RISCVAssignFn.