@Lukacma (Contributor) commented Oct 29, 2025

This is a follow-up patch to #157680, which allows SIMD FPCVT instructions to be generated from l/llround and l/llrint nodes.
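
As a rough illustration (a sketch, not taken from the patch; the function name and the pre-patch "before" sequence, including register numbers, are assumptions), the point of the SIMD FPCVT forms is to keep a rounded result in the SIMD&FP register file when it is immediately reused as a floating-point value, avoiding a round trip through a GPR:

  define double @lround_i64_f32_sketch(float %x) {
    %v = call i64 @llvm.lround.i64.f32(float %x)
    %d = bitcast i64 %v to double   ; result is consumed as an FP value
    ret double %d
  }

  ; With +fprcvt (matches lround_i64_f32_simd in the new test below):
  ;   fcvtas d0, s0
  ; Assumed lowering without it (GPR round trip):
  ;   fcvtas x8, s0
  ;   fmov   d0, x8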

@llvmbot (Member) commented Oct 29, 2025

@llvm/pr-subscribers-backend-aarch64

@llvm/pr-subscribers-llvm-globalisel

Author: None (Lukacma)

Changes

This is a follow-up patch to #157680, which allows SIMD FPCVT instructions to be generated from l/llround and l/llrint nodes.


Patch is 23.23 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/165546.diff

6 Files Affected:

  • (modified) llvm/lib/Target/AArch64/AArch64InstrInfo.td (+73)
  • (modified) llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp (+5-13)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir (+2-2)
  • (added) llvm/test/CodeGen/AArch64/arm64-cvt-simd-round-rint.ll (+428)
  • (modified) llvm/test/CodeGen/AArch64/vector-lrint.ll (+12-6)
diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.td b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
index b9e299ef37454..f765c6e037176 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
@@ -6799,6 +6799,79 @@ defm : FPToIntegerPats<fp_to_uint, fp_to_uint_sat, fp_to_uint_sat_gi, ftrunc, "F
 defm : FPToIntegerPats<fp_to_sint, fp_to_sint_sat, fp_to_sint_sat_gi, fround, "FCVTAS">;
 defm : FPToIntegerPats<fp_to_uint, fp_to_uint_sat, fp_to_uint_sat_gi, fround, "FCVTAU">;
 
+// For global-isel we can use register classes to determine
+// which FCVT instruction to use.
+let Predicates = [HasFPRCVT] in {
+def : Pat<(i64 (any_lround f32:$Rn)),
+          (FCVTASDSr f32:$Rn)>;
+def : Pat<(i64 (any_llround f32:$Rn)),
+          (FCVTASDSr f32:$Rn)>;
+}
+def : Pat<(i64 (any_lround f64:$Rn)),
+          (FCVTASv1i64 f64:$Rn)>;
+def : Pat<(i64 (any_llround f64:$Rn)),
+          (FCVTASv1i64 f64:$Rn)>;
+
+let Predicates = [HasFPRCVT] in {
+  def : Pat<(f32 (bitconvert (i32 (any_lround f16:$Rn)))),
+            (FCVTASSHr f16:$Rn)>;
+  def : Pat<(f64 (bitconvert (i64 (any_lround f16:$Rn)))),
+            (FCVTASDHr f16:$Rn)>;
+  def : Pat<(f64 (bitconvert (i64 (any_llround f16:$Rn)))),
+            (FCVTASDHr f16:$Rn)>;
+  def : Pat<(f64 (bitconvert (i64 (any_lround f32:$Rn)))),
+            (FCVTASDSr f32:$Rn)>;
+  def : Pat<(f32 (bitconvert (i32 (any_lround f64:$Rn)))),
+            (FCVTASSDr f64:$Rn)>;
+  def : Pat<(f64 (bitconvert (i64 (any_llround f32:$Rn)))),
+            (FCVTASDSr f32:$Rn)>;
+}
+def : Pat<(f32 (bitconvert (i32 (any_lround f32:$Rn)))),
+          (FCVTASv1i32 f32:$Rn)>;
+def : Pat<(f64 (bitconvert (i64 (any_lround f64:$Rn)))),
+          (FCVTASv1i64 f64:$Rn)>;
+def : Pat<(f64 (bitconvert (i64 (any_llround f64:$Rn)))),
+          (FCVTASv1i64 f64:$Rn)>;
+
+// For global-isel we can use register classes to determine
+// which FCVT instruction to use.
+let Predicates = [HasFPRCVT] in {
+def : Pat<(i64 (any_lrint f16:$Rn)),
+          (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+def : Pat<(i64 (any_llrint f16:$Rn)),
+          (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+def : Pat<(i64 (any_lrint f32:$Rn)),
+          (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+def : Pat<(i64 (any_llrint f32:$Rn)),
+          (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+}
+def : Pat<(i64 (any_lrint f64:$Rn)),
+          (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+def : Pat<(i64 (any_llrint f64:$Rn)),
+          (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+
+let Predicates = [HasFPRCVT] in {
+  def : Pat<(f32 (bitconvert (i32 (any_lrint f16:$Rn)))),
+            (FCVTZSSHr (FRINTXHr f16:$Rn))>;
+  def : Pat<(f64 (bitconvert (i64 (any_lrint f16:$Rn)))),
+            (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+  def : Pat<(f64 (bitconvert (i64 (any_llrint f16:$Rn)))),
+            (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+  def : Pat<(f64 (bitconvert (i64 (any_lrint f32:$Rn)))),
+            (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+  def : Pat<(f32 (bitconvert (i32 (any_lrint f64:$Rn)))),
+            (FCVTZSSDr (FRINTXDr f64:$Rn))>;
+  def : Pat<(f64 (bitconvert (i64 (any_llrint f32:$Rn)))),
+            (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+}
+def : Pat<(f32 (bitconvert (i32 (any_lrint f32:$Rn)))),
+          (FCVTZSv1i32 (FRINTXSr f32:$Rn))>;
+def : Pat<(f64 (bitconvert (i64 (any_lrint f64:$Rn)))),
+          (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+def : Pat<(f64 (bitconvert (i64 (any_llrint f64:$Rn)))),
+          (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+
+
 // f16 -> s16 conversions
 let Predicates = [HasFullFP16] in {
   def : Pat<(i16(fp_to_sint_sat_gi f16:$Rn)), (FCVTZSv1f16 f16:$Rn)>;
diff --git a/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp b/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp
index 6d2d70511e894..8bd982898b8d6 100644
--- a/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp
+++ b/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp
@@ -858,7 +858,11 @@ AArch64RegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
   case TargetOpcode::G_FPTOSI_SAT:
   case TargetOpcode::G_FPTOUI_SAT:
   case TargetOpcode::G_FPTOSI:
-  case TargetOpcode::G_FPTOUI: {
+  case TargetOpcode::G_FPTOUI:
+  case TargetOpcode::G_INTRINSIC_LRINT:
+  case TargetOpcode::G_INTRINSIC_LLRINT:
+  case TargetOpcode::G_LROUND:
+  case TargetOpcode::G_LLROUND: {
     LLT DstType = MRI.getType(MI.getOperand(0).getReg());
     if (DstType.isVector())
       break;
@@ -879,12 +883,6 @@ AArch64RegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
       OpRegBankIdx = {PMI_FirstGPR, PMI_FirstFPR};
     break;
   }
-  case TargetOpcode::G_INTRINSIC_LRINT:
-  case TargetOpcode::G_INTRINSIC_LLRINT:
-    if (MRI.getType(MI.getOperand(0).getReg()).isVector())
-      break;
-    OpRegBankIdx = {PMI_FirstGPR, PMI_FirstFPR};
-    break;
   case TargetOpcode::G_FCMP: {
     // If the result is a vector, it must use a FPR.
     AArch64GenRegisterBankInfo::PartialMappingIdx Idx0 =
@@ -1224,12 +1222,6 @@ AArch64RegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
     }
     break;
   }
-  case TargetOpcode::G_LROUND:
-  case TargetOpcode::G_LLROUND: {
-    // Source is always floating point and destination is always integer.
-    OpRegBankIdx = {PMI_FirstGPR, PMI_FirstFPR};
-    break;
-  }
   }
 
   // Finally construct the computed mapping.
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir
index 420c7cfb07b74..16100f01017a6 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir
@@ -14,7 +14,7 @@ body:             |
     ; CHECK: liveins: $d0
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: %fpr:fpr(s64) = COPY $d0
-    ; CHECK-NEXT: %llround:gpr(s64) = G_LLROUND %fpr(s64)
+    ; CHECK-NEXT: %llround:fpr(s64) = G_LLROUND %fpr(s64)
     ; CHECK-NEXT: $d0 = COPY %llround(s64)
     ; CHECK-NEXT: RET_ReallyLR implicit $s0
     %fpr:_(s64) = COPY $d0
@@ -35,7 +35,7 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: %gpr:gpr(s64) = COPY $x0
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:fpr(s64) = COPY %gpr(s64)
-    ; CHECK-NEXT: %llround:gpr(s64) = G_LLROUND [[COPY]](s64)
+    ; CHECK-NEXT: %llround:fpr(s64) = G_LLROUND [[COPY]](s64)
     ; CHECK-NEXT: $d0 = COPY %llround(s64)
     ; CHECK-NEXT: RET_ReallyLR implicit $s0
     %gpr:_(s64) = COPY $x0
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir
index 775c6ca773c68..5cb93f7c4646d 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir
@@ -14,7 +14,7 @@ body:             |
     ; CHECK: liveins: $d0
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: %fpr:fpr(s64) = COPY $d0
-    ; CHECK-NEXT: %lround:gpr(s64) = G_LROUND %fpr(s64)
+    ; CHECK-NEXT: %lround:fpr(s64) = G_LROUND %fpr(s64)
     ; CHECK-NEXT: $d0 = COPY %lround(s64)
     ; CHECK-NEXT: RET_ReallyLR implicit $s0
     %fpr:_(s64) = COPY $d0
@@ -35,7 +35,7 @@ body:             |
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: %gpr:gpr(s64) = COPY $x0
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:fpr(s64) = COPY %gpr(s64)
-    ; CHECK-NEXT: %lround:gpr(s64) = G_LROUND [[COPY]](s64)
+    ; CHECK-NEXT: %lround:fpr(s64) = G_LROUND [[COPY]](s64)
     ; CHECK-NEXT: $d0 = COPY %lround(s64)
     ; CHECK-NEXT: RET_ReallyLR implicit $s0
     %gpr:_(s64) = COPY $x0
diff --git a/llvm/test/CodeGen/AArch64/arm64-cvt-simd-round-rint.ll b/llvm/test/CodeGen/AArch64/arm64-cvt-simd-round-rint.ll
new file mode 100644
index 0000000000000..000ff64131ccf
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/arm64-cvt-simd-round-rint.ll
@@ -0,0 +1,428 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc < %s -mtriple aarch64-unknown-unknown -mattr=+fprcvt,+fullfp16 | FileCheck %s --check-prefixes=CHECK,CHECK-SD
+; RUN: llc < %s -mtriple aarch64-unknown-unknown -global-isel -global-isel-abort=2 -mattr=+fprcvt,+fullfp16 2>&1 | FileCheck %s --check-prefixes=CHECK,CHECK-GI
+
+;  CHECK-GI: warning: Instruction selection used fallback path for lround_i32_f16_simd
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f16_simd
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f64_simd
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f32_simd
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f16_simd
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f16_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f16_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f32_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f64_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f32_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f64_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f16_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f32_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f64_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f16_simd
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f64_simd
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f32_simd
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f16_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i64_f16_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i64_f32_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f64_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f32_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i64_f64_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for llrint_i64_f16_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for llrint_i64_f32_simd_exp
+;  CHECK-GI-NEXT: warning: Instruction selection used fallback path for llrint_i64_f64_simd_exp
+
+;
+; (L/LL)Round
+;
+
+define float @lround_i32_f16_simd(half %x)  {
+; CHECK-LABEL: lround_i32_f16_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas s0, h0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.lround.i32.f16(half %x)
+  %sum = bitcast i32 %val to float
+  ret float %sum
+}
+
+define double @lround_i64_f16_simd(half %x)  {
+; CHECK-LABEL: lround_i64_f16_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, h0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.lround.i64.f16(half %x)
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @lround_i64_f32_simd(float %x)  {
+; CHECK-LABEL: lround_i64_f32_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, s0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.lround.i64.f32(float %x)
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define float @lround_i32_f64_simd(double %x)  {
+; CHECK-LABEL: lround_i32_f64_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas s0, d0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.lround.i32.f64(double %x)
+  %bc  = bitcast i32 %val to float
+  ret float %bc
+}
+
+define float @lround_i32_f32_simd(float %x)  {
+; CHECK-LABEL: lround_i32_f32_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas s0, s0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.lround.i32.f32(float %x)
+  %bc  = bitcast i32 %val to float
+  ret float %bc
+}
+
+define double @lround_i64_f64_simd(double %x)  {
+; CHECK-LABEL: lround_i64_f64_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, d0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.lround.i64.f64(double %x)
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llround_i64_f16_simd(half %x)  {
+; CHECK-LABEL: llround_i64_f16_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, h0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.llround.i64.f16(half %x)
+  %sum = bitcast i64 %val to double
+  ret double %sum
+}
+
+define double @llround_i64_f32_simd(float %x)  {
+; CHECK-LABEL: llround_i64_f32_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, s0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.llround.i64.f32(float %x)
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llround_i64_f64_simd(double %x)  {
+; CHECK-LABEL: llround_i64_f64_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, d0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.llround.i64.f64(double %x)
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+
+;
+; (L/LL)Round experimental
+;
+
+define float @lround_i32_f16_simd_exp(half %x)  {
+; CHECK-LABEL: lround_i32_f16_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas s0, h0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.experimental.constrained.lround.i32.f16(half %x, metadata !"fpexcept.strict")
+  %sum = bitcast i32 %val to float
+  ret float %sum
+}
+
+define double @lround_i64_f16_simd_exp(half %x)  {
+; CHECK-LABEL: lround_i64_f16_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, h0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.experimental.constrained.lround.i64.f16(half %x, metadata !"fpexcept.strict")
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @lround_i64_f32_simd_exp(float %x)  {
+; CHECK-LABEL: lround_i64_f32_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, s0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.experimental.constrained.lround.i64.f32(float %x, metadata !"fpexcept.strict")
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define float @lround_i32_f64_simd_exp(double %x)  {
+; CHECK-LABEL: lround_i32_f64_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas s0, d0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.experimental.constrained.lround.i32.f64(double %x, metadata !"fpexcept.strict")
+  %bc  = bitcast i32 %val to float
+  ret float %bc
+}
+
+define float @lround_i32_f32_simd_exp(float %x)  {
+; CHECK-LABEL: lround_i32_f32_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas s0, s0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.experimental.constrained.lround.i32.f32(float %x, metadata !"fpexcept.strict")
+  %bc  = bitcast i32 %val to float
+  ret float %bc
+}
+
+define double @lround_i64_f64_simd_exp(double %x)  {
+; CHECK-LABEL: lround_i64_f64_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, d0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.experimental.constrained.lround.i64.f64(double %x, metadata !"fpexcept.strict")
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llround_i64_f16_simd_exp(half %x)  {
+; CHECK-LABEL: llround_i64_f16_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, h0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.experimental.constrained.llround.i64.f16(half %x, metadata !"fpexcept.strict")
+  %sum = bitcast i64 %val to double
+  ret double %sum
+}
+
+define double @llround_i64_f32_simd_exp(float %x)  {
+; CHECK-LABEL: llround_i64_f32_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, s0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.experimental.constrained.llround.i64.f32(float %x, metadata !"fpexcept.strict")
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llround_i64_f64_simd_exp(double %x)  {
+; CHECK-LABEL: llround_i64_f64_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    fcvtas d0, d0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.experimental.constrained.llround.i64.f64(double %x, metadata !"fpexcept.strict")
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+;
+; (L/LL)Rint
+;
+
+define float @lrint_i32_f16_simd(half %x)  {
+; CHECK-LABEL: lrint_i32_f16_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx h0, h0
+; CHECK-NEXT:    fcvtzs s0, h0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.lrint.i32.f16(half %x)
+  %sum = bitcast i32 %val to float
+  ret float %sum
+}
+
+define double @lrint_i64_f16_simd(half %x)  {
+; CHECK-LABEL: lrint_i64_f16_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx h0, h0
+; CHECK-NEXT:    fcvtzs d0, h0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.lrint.i64.f16(half %x)
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @lrint_i64_f32_simd(float %x)  {
+; CHECK-LABEL: lrint_i64_f32_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx s0, s0
+; CHECK-NEXT:    fcvtzs d0, s0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.lrint.i64.f32(float %x)
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define float @lrint_i32_f64_simd(double %x)  {
+; CHECK-LABEL: lrint_i32_f64_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx d0, d0
+; CHECK-NEXT:    fcvtzs s0, d0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.lrint.i32.f64(double %x)
+  %bc  = bitcast i32 %val to float
+  ret float %bc
+}
+
+define float @lrint_i32_f32_simd(float %x)  {
+; CHECK-LABEL: lrint_i32_f32_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx s0, s0
+; CHECK-NEXT:    fcvtzs s0, s0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.lrint.i32.f32(float %x)
+  %bc  = bitcast i32 %val to float
+  ret float %bc
+}
+
+define double @lrint_i64_f64_simd(double %x)  {
+; CHECK-LABEL: lrint_i64_f64_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx d0, d0
+; CHECK-NEXT:    fcvtzs d0, d0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.lrint.i64.f64(double %x)
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llrint_i64_f16_simd(half %x)  {
+; CHECK-LABEL: llrint_i64_f16_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx h0, h0
+; CHECK-NEXT:    fcvtzs d0, h0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.llrint.i64.f16(half %x)
+  %sum = bitcast i64 %val to double
+  ret double %sum
+}
+
+define double @llrint_i64_f32_simd(float %x)  {
+; CHECK-LABEL: llrint_i64_f32_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx s0, s0
+; CHECK-NEXT:    fcvtzs d0, s0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.llrint.i64.f32(float %x)
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llrint_i64_f64_simd(double %x)  {
+; CHECK-LABEL: llrint_i64_f64_simd:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx d0, d0
+; CHECK-NEXT:    fcvtzs d0, d0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.llrint.i64.f64(double %x)
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+;
+; (L/LL)Rint experimental
+;
+
+define float @lrint_i32_f16_simd_exp(half %x)  {
+; CHECK-LABEL: lrint_i32_f16_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx h0, h0
+; CHECK-NEXT:    fcvtzs s0, h0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.experimental.constrained.lrint.i32.f16(half %x, metadata !"round.tonearest", metadata !"fpexcept.strict")
+  %sum = bitcast i32 %val to float
+  ret float %sum
+}
+
+define double @lrint_i64_f16_simd_exp(half %x)  {
+; CHECK-LABEL: lrint_i64_f16_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx h0, h0
+; CHECK-NEXT:    fcvtzs d0, h0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.experimental.constrained.lrint.i64.f16(half %x, metadata !"round.tonearest", metadata !"fpexcept.strict")
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @lrint_i64_f32_simd_exp(float %x)  {
+; CHECK-LABEL: lrint_i64_f32_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx s0, s0
+; CHECK-NEXT:    fcvtzs d0, s0
+; CHECK-NEXT:    ret
+  %val = call i64 @llvm.experimental.constrained.lrint.i64.f32(float %x, metadata !"round.tonearest", metadata !"fpexcept.strict")
+  %bc  = bitcast i64 %val to double
+  ret double %bc
+}
+
+define float @lrint_i32_f64_simd_exp(double %x)  {
+; CHECK-LABEL: lrint_i32_f64_simd_exp:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    frintx d0, d0
+; CHECK-NEXT:    fcvtzs s0, d0
+; CHECK-NEXT:    ret
+  %val = call i32 @llvm.experimental.constrained.lrint.i32.f64(double %x, ...
[truncated]

@CarolineConcatto (Contributor) left a comment

Thank you Marian for the patch.
I am still puzzled by the GlobalISel patterns. Can you add examples for all the GlobalISel patterns too?
It looks to me that all the tests are exercising only SelectionDAG, not your GlobalISel changes. Is that correct, or am I mistaken?

// For global-isel we can use register classes to determine
// which FCVT instruction to use.
let Predicates = [HasFPRCVT] in {
def : Pat<(i64 (any_lround f32:$Rn)),
Contributor

Should we not add patterns for GlobalISel too? It seems to me that all the tests are always falling back to SelectionDAG.

@arsenm (Contributor) left a comment

What is the strictfp status on AArch64 with SelectionDAG? I thought it was mostly not implemented? I would expect a GlobalISel patch for this not to touch so many patterns, and not to combine changes to the strictfp and non-strictfp cases.

; (L/LL)Round experimental
;

define float @lround_i32_f16_simd_exp(half %x) {
Contributor

I don't think the strictfp and non-strictfp cases should coexist in the same test file

Comment on lines +861 to +865
case TargetOpcode::G_FPTOUI:
case TargetOpcode::G_INTRINSIC_LRINT:
case TargetOpcode::G_INTRINSIC_LLRINT:
case TargetOpcode::G_LROUND:
case TargetOpcode::G_LLROUND: {
Contributor

All of the changes seem to be for the non-strictfp case

; RUN: llc < %s -mtriple aarch64-unknown-unknown -global-isel -global-isel-abort=2 -mattr=+fprcvt,+fullfp16 2>&1 | FileCheck %s --check-prefixes=CHECK,CHECK-GI

; CHECK-GI: warning: Instruction selection used fallback path for lround_i32_f16_simd
; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f16_simd
Collaborator

Do we need better legalization for G_LROUND? I don't know if anyone has looked at that one yet in GISel.

Contributor Author

Yes. There is a TODO comment about enabling this for f16 types, and the other types should probably work as well, but they are not marked as legal in the code:

  // TODO: Libcall support for s128.
  // TODO: s16 should be legal with full FP16 support.
  getActionDefinitionsBuilder({G_LROUND, G_LLROUND})
      .legalFor({{s64, s32}, {s64, s64}});

But I think this is beyond the scope of this work and should be handled separately.
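
As a concrete instance (a minimal sketch; the function name is hypothetical), an f16 source is one of the cases behind the fallback warnings in the new test (i32 results are the other), since G_LROUND/G_LLROUND are only marked legal for the {s64, s32} and {s64, s64} pairs:

  define i64 @lround_f16_sketch(half %x) {
    ; s16 source: not covered by legalFor above, so GlobalISel falls
    ; back to SelectionDAG under -global-isel-abort=2
    %v = call i64 @llvm.lround.i64.f16(half %x)
    ret i64 %v
  }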

@Lukacma (Contributor Author) commented Nov 3, 2025

Thanks for the review @arsenm. I have no clue about the status, but based on the tests I think it works well enough for the purposes of this patch, as both the strict and non-strict IR nodes produce the same assembly in SDAG. Sorry if the name is confusing, but this patch adds patterns for both SDAG and GlobalISel; that's why so many patterns are added. Admittedly, strictfp doesn't work in GlobalISel, so the patterns added purely for GlobalISel could be restricted to just the non-strict case, but I don't see any harm in keeping them general. It will save work when strictfp starts working.

> not combine changes to strictfp and non-strictfp cases

I am not sure it would make sense to split the patch into non-strict and strict parts, since a single pattern works for both cases, but I can split the test file as you suggested.
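
To make the single-pattern point concrete, here is a minimal sketch (function names are hypothetical; the expected assembly follows the lround_i64_f32_simd and lround_i64_f32_simd_exp tests above): the any_lround-based patterns match both the plain and the constrained intrinsic, so with +fprcvt both functions should lower to the same "fcvtas d0, s0":

  define double @nonstrict_sketch(float %x) {
    %v = call i64 @llvm.lround.i64.f32(float %x)
    %d = bitcast i64 %v to double
    ret double %d
  }

  ; Constrained intrinsics require the strictfp attribute on the
  ; enclosing function.
  define double @strict_sketch(float %x) strictfp {
    %v = call i64 @llvm.experimental.constrained.lround.i64.f32(float %x, metadata !"fpexcept.strict")
    %d = bitcast i64 %v to double
    ret double %d
  }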
