
Conversation

@spavloff (Collaborator)

Not-a-Number (NaN) is a special floating-point datum that does not represent any floating-point value; its purpose is to represent runtime errors detected during floating-point operations. This makes constant folding of instructions that produce NaN inappropriate.

NaN is produced by operations that are typically considered erroneous, such as `0.0 / 0.0` or `sqrt(-1.0)`. It is unlikely that producing NaN is the intended behavior; more probably it is an error that the frontend could not detect but that is exposed by deep transformations such as LTO. Constant-folding such instructions is not useful.

An instruction that produces NaN can be located in a debugger, because it can raise a floating-point exception when traps are enabled. If such an instruction is replaced by a constant, the NaN silently propagates through subsequent expressions, making it harder to find the original faulty instruction. Constant folding is undesirable in this case.

Constant folding can also be undesirable when the execution environment creates NaNs with non-trivial payloads.

This change prevents constant folding of instructions that produce NaN, as well as of those that propagate NaN from their operands to the result.

Users may still use NaNs for non-computational purposes, such as sentinels. In such cases the NaN is not intended to be used as an operand in calculations, only in checks. Users can obtain a NaN value with expressions like:

static const double NaN = 0.0 / 0.0;

Such expressions can be evaluated at compile time by the frontend, which can also emit warnings if it detects NaN used as an operand in calculations. In such cases the produced IR can be kept free of NaN misuse.

@llvmbot added labels llvm:instcombine, llvm:ir, llvm:analysis, llvm:transforms on Nov 11, 2025
@llvmbot (Member)

llvmbot commented Nov 11, 2025

@llvm/pr-subscribers-backend-risc-v
@llvm/pr-subscribers-llvm-analysis

@llvm/pr-subscribers-llvm-ir

Author: Serge Pavlov (spavloff)

Changes

(PR description repeated above.)
Patch is 28.32 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/167475.diff

8 Files Affected:

  • (modified) llvm/lib/Analysis/ConstantFolding.cpp (+116-87)
  • (modified) llvm/lib/IR/ConstantFold.cpp (+2)
  • (modified) llvm/test/Transforms/EarlyCSE/atan.ll (+3-2)
  • (modified) llvm/test/Transforms/InstCombine/frexp.ll (+6-3)
  • (modified) llvm/test/Transforms/InstSimplify/constfold-constrained.ll (+14-9)
  • (modified) llvm/test/Transforms/InstSimplify/fptoi-sat.ll (+32-16)
  • (modified) llvm/test/Transforms/InstSimplify/strictfp-fadd.ll (+1-1)
  • (modified) llvm/test/Transforms/SLPVectorizer/X86/split-node-full-match.ll (+6-4)
diff --git a/llvm/lib/Analysis/ConstantFolding.cpp b/llvm/lib/Analysis/ConstantFolding.cpp
index da32542cf7870..4bdca4266308a 100755
--- a/llvm/lib/Analysis/ConstantFolding.cpp
+++ b/llvm/lib/Analysis/ConstantFolding.cpp
@@ -2315,6 +2315,11 @@ static bool mayFoldConstrained(ConstrainedFPIntrinsic *CI,
   if (ORM == RoundingMode::Dynamic)
     return false;
 
+  // If NaN is produced, do not fold such call. In runtime it can be trapped
+  // and properly handled.
+  if (St == APFloat::opStatus::opInvalidOp)
+    return false;
+
   // If FP exceptions are ignored, fold the call, even if such exception is
   // raised.
   if (EB && *EB != fp::ExceptionBehavior::ebStrict)
@@ -2441,17 +2446,22 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
   }
 
   if (auto *Op = dyn_cast<ConstantFP>(Operands[0])) {
+    APFloat U = Op->getValueAPF();
+
     if (IntrinsicID == Intrinsic::convert_to_fp16) {
-      APFloat Val(Op->getValueAPF());
+      APFloat Val(U);
+      if (Val.isNaN())
+        return nullptr;
 
       bool lost = false;
-      Val.convert(APFloat::IEEEhalf(), APFloat::rmNearestTiesToEven, &lost);
+      APFloat::opStatus Status =
+          Val.convert(APFloat::IEEEhalf(), APFloat::rmNearestTiesToEven, &lost);
+      if (Status == APFloat::opInvalidOp)
+        return nullptr;
 
       return ConstantInt::get(Ty->getContext(), Val.bitcastToAPInt());
     }
 
-    APFloat U = Op->getValueAPF();
-
     if (IntrinsicID == Intrinsic::wasm_trunc_signed ||
         IntrinsicID == Intrinsic::wasm_trunc_unsigned) {
       bool Signed = IntrinsicID == Intrinsic::wasm_trunc_signed;
@@ -2473,6 +2483,8 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
 
     if (IntrinsicID == Intrinsic::fptoui_sat ||
         IntrinsicID == Intrinsic::fptosi_sat) {
+      if (U.isNaN())
+        return nullptr;
       // convertToInteger() already has the desired saturation semantics.
       APSInt Int(Ty->getIntegerBitWidth(),
                  IntrinsicID == Intrinsic::fptoui_sat);
@@ -2502,38 +2514,6 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
         !Ty->isIntegerTy())
       return nullptr;
 
-    // Use internal versions of these intrinsics.
-
-    if (IntrinsicID == Intrinsic::nearbyint || IntrinsicID == Intrinsic::rint) {
-      U.roundToIntegral(APFloat::rmNearestTiesToEven);
-      return ConstantFP::get(Ty->getContext(), U);
-    }
-
-    if (IntrinsicID == Intrinsic::round) {
-      U.roundToIntegral(APFloat::rmNearestTiesToAway);
-      return ConstantFP::get(Ty->getContext(), U);
-    }
-
-    if (IntrinsicID == Intrinsic::roundeven) {
-      U.roundToIntegral(APFloat::rmNearestTiesToEven);
-      return ConstantFP::get(Ty->getContext(), U);
-    }
-
-    if (IntrinsicID == Intrinsic::ceil) {
-      U.roundToIntegral(APFloat::rmTowardPositive);
-      return ConstantFP::get(Ty->getContext(), U);
-    }
-
-    if (IntrinsicID == Intrinsic::floor) {
-      U.roundToIntegral(APFloat::rmTowardNegative);
-      return ConstantFP::get(Ty->getContext(), U);
-    }
-
-    if (IntrinsicID == Intrinsic::trunc) {
-      U.roundToIntegral(APFloat::rmTowardZero);
-      return ConstantFP::get(Ty->getContext(), U);
-    }
-
     if (IntrinsicID == Intrinsic::fabs) {
       U.clearSign();
       return ConstantFP::get(Ty->getContext(), U);
@@ -2552,53 +2532,6 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
       return ConstantFP::get(Ty->getContext(), minimum(FractU, AlmostOne));
     }
 
-    // Rounding operations (floor, trunc, ceil, round and nearbyint) do not
-    // raise FP exceptions, unless the argument is signaling NaN.
-
-    std::optional<APFloat::roundingMode> RM;
-    switch (IntrinsicID) {
-    default:
-      break;
-    case Intrinsic::experimental_constrained_nearbyint:
-    case Intrinsic::experimental_constrained_rint: {
-      auto CI = cast<ConstrainedFPIntrinsic>(Call);
-      RM = CI->getRoundingMode();
-      if (!RM || *RM == RoundingMode::Dynamic)
-        return nullptr;
-      break;
-    }
-    case Intrinsic::experimental_constrained_round:
-      RM = APFloat::rmNearestTiesToAway;
-      break;
-    case Intrinsic::experimental_constrained_ceil:
-      RM = APFloat::rmTowardPositive;
-      break;
-    case Intrinsic::experimental_constrained_floor:
-      RM = APFloat::rmTowardNegative;
-      break;
-    case Intrinsic::experimental_constrained_trunc:
-      RM = APFloat::rmTowardZero;
-      break;
-    }
-    if (RM) {
-      auto CI = cast<ConstrainedFPIntrinsic>(Call);
-      if (U.isFinite()) {
-        APFloat::opStatus St = U.roundToIntegral(*RM);
-        if (IntrinsicID == Intrinsic::experimental_constrained_rint &&
-            St == APFloat::opInexact) {
-          std::optional<fp::ExceptionBehavior> EB = CI->getExceptionBehavior();
-          if (EB == fp::ebStrict)
-            return nullptr;
-        }
-      } else if (U.isSignaling()) {
-        std::optional<fp::ExceptionBehavior> EB = CI->getExceptionBehavior();
-        if (EB && *EB != fp::ebIgnore)
-          return nullptr;
-        U = APFloat::getQNaN(U.getSemantics());
-      }
-      return ConstantFP::get(Ty->getContext(), U);
-    }
-
     // NVVM float/double to signed/unsigned int32/int64 conversions:
     switch (IntrinsicID) {
     // f2i
@@ -2686,6 +2619,84 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
     }
     }
 
+    // NaN is not a value, it represents an error, don't constfold it.
+    if (U.isNaN())
+      return nullptr;
+
+    // Use internal versions of these intrinsics.
+
+    if (IntrinsicID == Intrinsic::nearbyint || IntrinsicID == Intrinsic::rint) {
+      U.roundToIntegral(APFloat::rmNearestTiesToEven);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    if (IntrinsicID == Intrinsic::round) {
+      U.roundToIntegral(APFloat::rmNearestTiesToAway);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    if (IntrinsicID == Intrinsic::roundeven) {
+      U.roundToIntegral(APFloat::rmNearestTiesToEven);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    if (IntrinsicID == Intrinsic::ceil) {
+      U.roundToIntegral(APFloat::rmTowardPositive);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    if (IntrinsicID == Intrinsic::floor) {
+      U.roundToIntegral(APFloat::rmTowardNegative);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    if (IntrinsicID == Intrinsic::trunc) {
+      U.roundToIntegral(APFloat::rmTowardZero);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    // Rounding operations (floor, trunc, ceil, round and nearbyint) do not
+    // raise FP exceptions, unless the argument is signaling NaN.
+
+    std::optional<APFloat::roundingMode> RM;
+    switch (IntrinsicID) {
+    default:
+      break;
+    case Intrinsic::experimental_constrained_nearbyint:
+    case Intrinsic::experimental_constrained_rint: {
+      auto CI = cast<ConstrainedFPIntrinsic>(Call);
+      RM = CI->getRoundingMode();
+      if (!RM || *RM == RoundingMode::Dynamic)
+        return nullptr;
+      break;
+    }
+    case Intrinsic::experimental_constrained_round:
+      RM = APFloat::rmNearestTiesToAway;
+      break;
+    case Intrinsic::experimental_constrained_ceil:
+      RM = APFloat::rmTowardPositive;
+      break;
+    case Intrinsic::experimental_constrained_floor:
+      RM = APFloat::rmTowardNegative;
+      break;
+    case Intrinsic::experimental_constrained_trunc:
+      RM = APFloat::rmTowardZero;
+      break;
+    }
+    if (RM) {
+      auto CI = cast<ConstrainedFPIntrinsic>(Call);
+      if (U.isFinite()) {
+        APFloat::opStatus St = U.roundToIntegral(*RM);
+        if (IntrinsicID == Intrinsic::experimental_constrained_rint &&
+            St == APFloat::opInexact) {
+          std::optional<fp::ExceptionBehavior> EB = CI->getExceptionBehavior();
+          if (EB == fp::ebStrict)
+            return nullptr;
+        }
+      }
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
     /// We only fold functions with finite arguments. Folding NaN and inf is
     /// likely to be aborted with an exception anyway, and some host libms
     /// have known errors raising exceptions.
@@ -3182,6 +3193,8 @@ static Constant *ConstantFoldLibCall2(StringRef Name, Type *Ty,
 
   const APFloat &Op1V = Op1->getValueAPF();
   const APFloat &Op2V = Op2->getValueAPF();
+  if (Op1V.isNaN() || Op2V.isNaN())
+    return nullptr;
 
   switch (Func) {
   default:
@@ -3196,16 +3209,16 @@ static Constant *ConstantFoldLibCall2(StringRef Name, Type *Ty,
   case LibFunc_fmod:
   case LibFunc_fmodf:
     if (TLI->has(Func)) {
-      APFloat V = Op1->getValueAPF();
-      if (APFloat::opStatus::opOK == V.mod(Op2->getValueAPF()))
+      APFloat V = Op1V;
+      if (APFloat::opStatus::opOK == V.mod(Op2V))
         return ConstantFP::get(Ty->getContext(), V);
     }
     break;
   case LibFunc_remainder:
   case LibFunc_remainderf:
     if (TLI->has(Func)) {
-      APFloat V = Op1->getValueAPF();
-      if (APFloat::opStatus::opOK == V.remainder(Op2->getValueAPF()))
+      APFloat V = Op1V;
+      if (APFloat::opStatus::opOK == V.remainder(Op2V))
         return ConstantFP::get(Ty->getContext(), V);
     }
     break;
@@ -3299,6 +3312,8 @@ static Constant *ConstantFoldIntrinsicCall2(Intrinsic::ID IntrinsicID, Type *Ty,
 
       if (const auto *ConstrIntr =
               dyn_cast_if_present<ConstrainedFPIntrinsic>(Call)) {
+        if (Op1V.isNaN() || Op2V.isNaN())
+          return nullptr;
         RoundingMode RM = getEvaluationRoundingMode(ConstrIntr);
         APFloat Res = Op1V;
         APFloat::opStatus St;
@@ -3519,6 +3534,8 @@ static Constant *ConstantFoldIntrinsicCall2(Intrinsic::ID IntrinsicID, Type *Ty,
       default:
         break;
       case Intrinsic::pow:
+        if (Op1V.isNaN() || Op2V.isNaN())
+          return nullptr;
         return ConstantFoldBinaryFP(pow, Op1V, Op2V, Ty);
       case Intrinsic::amdgcn_fmul_legacy:
         // The legacy behaviour is that multiplying +/- 0.0 by anything, even
@@ -3531,6 +3548,8 @@ static Constant *ConstantFoldIntrinsicCall2(Intrinsic::ID IntrinsicID, Type *Ty,
     } else if (auto *Op2C = dyn_cast<ConstantInt>(Operands[1])) {
       switch (IntrinsicID) {
       case Intrinsic::ldexp: {
+        if (Op1V.isNaN())
+          return nullptr;
         return ConstantFP::get(
             Ty->getContext(),
             scalbn(Op1V, Op2C->getSExtValue(), APFloat::rmNearestTiesToEven));
@@ -3551,6 +3570,8 @@ static Constant *ConstantFoldIntrinsicCall2(Intrinsic::ID IntrinsicID, Type *Ty,
         return ConstantInt::get(Ty, Result);
       }
       case Intrinsic::powi: {
+        if (Op1V.isNaN())
+          return nullptr;
         int Exp = static_cast<int>(Op2C->getSExtValue());
         switch (Ty->getTypeID()) {
         case Type::HalfTyID:
@@ -3893,6 +3914,9 @@ static Constant *ConstantFoldScalarCall3(StringRef Name,
         const APFloat &C3 = Op3->getValueAPF();
 
         if (const auto *ConstrIntr = dyn_cast<ConstrainedFPIntrinsic>(Call)) {
+          if (C1.isNaN() || C2.isNaN() || C3.isNaN())
+            return nullptr;
+
           RoundingMode RM = getEvaluationRoundingMode(ConstrIntr);
           APFloat Res = C1;
           APFloat::opStatus St;
@@ -3924,6 +3948,8 @@ static Constant *ConstantFoldScalarCall3(StringRef Name,
         }
         case Intrinsic::fma:
         case Intrinsic::fmuladd: {
+          if (C1.isNaN() || C2.isNaN() || C3.isNaN())
+            return nullptr;
           APFloat V = C1;
           V.fusedMultiplyAdd(C2, C3, APFloat::rmNearestTiesToEven);
           return ConstantFP::get(Ty->getContext(), V);
@@ -4333,6 +4359,9 @@ ConstantFoldScalarFrexpCall(Constant *Op, Type *IntTy) {
     return {};
 
   const APFloat &U = ConstFP->getValueAPF();
+  if (U.isNaN())
+    return {};
+
   int FrexpExp;
   APFloat FrexpMant = frexp(U, FrexpExp, APFloat::rmNearestTiesToEven);
   Constant *Result0 = ConstantFP::get(ConstFP->getType(), FrexpMant);
diff --git a/llvm/lib/IR/ConstantFold.cpp b/llvm/lib/IR/ConstantFold.cpp
index 6a9ef2efa321f..8783029b120fa 100644
--- a/llvm/lib/IR/ConstantFold.cpp
+++ b/llvm/lib/IR/ConstantFold.cpp
@@ -845,6 +845,8 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
     if (ConstantFP *CFP2 = dyn_cast<ConstantFP>(C2)) {
       const APFloat &C1V = CFP1->getValueAPF();
       const APFloat &C2V = CFP2->getValueAPF();
+      if (C1V.isNaN() || C2V.isNaN())
+        return nullptr;
       APFloat C3V = C1V;  // copy for modification
       switch (Opcode) {
       default:
diff --git a/llvm/test/Transforms/EarlyCSE/atan.ll b/llvm/test/Transforms/EarlyCSE/atan.ll
index 2b7206c0a6aab..18a7bb5ebff90 100644
--- a/llvm/test/Transforms/EarlyCSE/atan.ll
+++ b/llvm/test/Transforms/EarlyCSE/atan.ll
@@ -136,8 +136,9 @@ define float @callatan2_flush_to_zero() {
 
 define float @callatan2_NaN() {
 ; CHECK-LABEL: @callatan2_NaN(
-; CHECK-NEXT:    ret float 0x7FF8000000000000
-;
+; CHECK-NEXT:    [[CALL:%.*]] = call float @atan2f(float 0x7FF8000000000000, float 0x7FF8000000000000)
+; CHECK-NEXT:    ret float [[CALL]]
+
   %call = call float @atan2f(float 0x7FF8000000000000, float 0x7FF8000000000000)
   ret float %call
 }
diff --git a/llvm/test/Transforms/InstCombine/frexp.ll b/llvm/test/Transforms/InstCombine/frexp.ll
index 6541f0d77a093..6feff85b5d8ba 100644
--- a/llvm/test/Transforms/InstCombine/frexp.ll
+++ b/llvm/test/Transforms/InstCombine/frexp.ll
@@ -199,7 +199,8 @@ define { float, i32 } @frexp_neginf() {
 
 define { float, i32 } @frexp_qnan() {
 ; CHECK-LABEL: define { float, i32 } @frexp_qnan() {
-; CHECK-NEXT:    ret { float, i32 } { float 0x7FF8000000000000, i32 0 }
+; CHECK-NEXT:    [[RET:%.*]] = call { float, i32 } @llvm.frexp.f32.i32(float 0x7FF8000000000000)
+; CHECK-NEXT:    ret { float, i32 } [[RET]]
 ;
   %ret = call { float, i32 } @llvm.frexp.f32.i32(float 0x7FF8000000000000)
   ret { float, i32 } %ret
@@ -207,7 +208,8 @@ define { float, i32 } @frexp_qnan() {
 
 define { float, i32 } @frexp_snan() {
 ; CHECK-LABEL: define { float, i32 } @frexp_snan() {
-; CHECK-NEXT:    ret { float, i32 } { float 0x7FF8000020000000, i32 0 }
+; CHECK-NEXT:    [[RET:%.*]] = call { float, i32 } @llvm.frexp.f32.i32(float 0x7FF0000020000000)
+; CHECK-NEXT:    ret { float, i32 } [[RET]]
 ;
   %ret = call { float, i32 } @llvm.frexp.f32.i32(float bitcast (i32 2139095041 to float))
   ret { float, i32 } %ret
@@ -263,7 +265,8 @@ define { <2 x float>, <2 x i32> } @frexp_splat_4() {
 
 define { <2 x float>, <2 x i32> } @frexp_splat_qnan() {
 ; CHECK-LABEL: define { <2 x float>, <2 x i32> } @frexp_splat_qnan() {
-; CHECK-NEXT:    ret { <2 x float>, <2 x i32> } { <2 x float> splat (float 0x7FF8000000000000), <2 x i32> zeroinitializer }
+; CHECK-NEXT:    [[RET:%.*]] = call { <2 x float>, <2 x i32> } @llvm.frexp.v2f32.v2i32(<2 x float> splat (float 0x7FF8000000000000))
+; CHECK-NEXT:    ret { <2 x float>, <2 x i32> } [[RET]]
 ;
   %ret = call { <2 x float>, <2 x i32> } @llvm.frexp.v2f32.v2i32(<2 x float> <float 0x7FF8000000000000, float 0x7FF8000000000000>)
   ret { <2 x float>, <2 x i32> } %ret
diff --git a/llvm/test/Transforms/InstSimplify/constfold-constrained.ll b/llvm/test/Transforms/InstSimplify/constfold-constrained.ll
index a9ef7f6a765d1..342acf872ae57 100644
--- a/llvm/test/Transforms/InstSimplify/constfold-constrained.ll
+++ b/llvm/test/Transforms/InstSimplify/constfold-constrained.ll
@@ -164,23 +164,24 @@ entry:
   ret double %result
 }
 
-; Verify that trunc(SNAN) is folded to QNAN if the exception behavior mode is 'ignore'.
+; Verify that trunc(SNAN) is not folded if the exception behavior mode is 'ignore'.
 define double @nonfinite_02() #0 {
 ; CHECK-LABEL: @nonfinite_02(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    ret double 0x7FF8000000000000
+; CHECK-NEXT:    [[RESULT:%.*]] = call double @llvm.experimental.constrained.trunc.f64(double 0x7FF4000000000000, metadata !"fpexcept.ignore") #[[ATTR0]]
+; CHECK-NEXT:    ret double [[RESULT]]
 ;
 entry:
   %result = call double @llvm.experimental.constrained.trunc.f64(double 0x7ff4000000000000, metadata !"fpexcept.ignore") #0
   ret double %result
 }
 
-; Verify that trunc(QNAN) is folded even if the exception behavior mode is not 'ignore'.
+; Verify that trunc(QNAN) is not folded if the exception behavior mode is not 'ignore'.
 define double @nonfinite_03() #0 {
 ; CHECK-LABEL: @nonfinite_03(
 ; CHECK-NEXT:  entry:
 ; CHECK-NEXT:    [[RESULT:%.*]] = call double @llvm.experimental.constrained.trunc.f64(double 0x7FF8000000000000, metadata !"fpexcept.strict") #[[ATTR0]]
-; CHECK-NEXT:    ret double 0x7FF8000000000000
+; CHECK-NEXT:    ret double [[RESULT]]
 ;
 entry:
   %result = call double @llvm.experimental.constrained.trunc.f64(double 0x7ff8000000000000, metadata !"fpexcept.strict") #0
@@ -451,7 +452,8 @@ entry:
 define i1 @cmp_eq_03() #0 {
 ; CHECK-LABEL: @cmp_eq_03(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    ret i1 false
+; CHECK-NEXT:    [[RESULT:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f64(double 2.000000e+00, double 0x7FF8000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #[[ATTR0]]
+; CHECK-NEXT:    ret i1 [[RESULT]]
 ;
 entry:
   %result = call i1 @llvm.experimental.constrained.fcmp.f64(double 2.0, double 0x7ff8000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #0
@@ -461,7 +463,8 @@ entry:
 define i1 @cmp_eq_04() #0 {
 ; CHECK-LABEL: @cmp_eq_04(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    ret i1 false
+; CHECK-NEXT:    [[RESULT:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f64(double 2.000000e+00, double 0x7FF4000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #[[ATTR0]]
+; CHECK-NEXT:    ret i1 [[RESULT]]
 ;
 entry:
   %result = call i1 @llvm.experimental.constrained.fcmp.f64(double 2.0, double 0x7ff4000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #0
@@ -471,7 +474,8 @@ entry:
 define i1 @cmp_eq_05() #0 {
 ; CHECK-LABEL: @cmp_eq_05(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    ret i1 false
+; CHECK-NEXT:    [[RESULT:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f64(double 2.000000e+00, double 0x7FF8000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #[[ATTR0]]
+; CHECK-NEXT:    ret i1 [[RESULT]]
 ;
 entry:
   %result = call i1 @llvm.experimental.constrained.fcmps.f64(double 2.0, double 0x7ff8000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #0
@@ -481,7 +485,8 @@ entry:
 define i1 @cmp_eq_06() #0 {
 ; CHECK-LABEL: @cmp_eq_06(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    ret i1 false
+; CHECK-NEXT:    [[RESULT:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f64(double 2.000000e+00, double 0x7FF4000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #[[ATTR0]]
+; CHECK-NEXT:    ret i1 [[RESULT]]
 ;
 entry:
   %result = call i1 @llvm.experimental.constrained.fcmps.f64(double 2.0, double 0x7ff4000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #0
@@ -516,7 +521,7 @@ define i1 @cmp_eq_nan_03() #0 {
 ; CHECK-LABEL: @cmp_eq_nan_03(
 ; CHECK-NEXT:  entry:
 ; CHECK-NEXT:    [[RESULT:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f64(double 0x7FF8000000000000, double 1.000000e+00, metadata !"oeq", metadata !"fpexcept.strict") #[[ATTR0]]
-; CHECK-NEXT:    ret i1 false
+; CHECK-NEXT:    ret i1 [[RESULT]]
 ;
 entry:
   %result = call i1 @llvm.experimental.constrained.fcmp.f64(double 0x7ff8000000000000, double 1.0, metadata !"oeq", metadata !"fpexcept.strict") #0
diff --git a/llvm/test/Transforms/InstSimplify/fptoi-sat.ll b/llvm/test/Transforms/InstSimplify/fptoi-sat.ll
index ef15b1ee7a33a..3f7b982baf935 100644
--- a/llvm/test/Transforms/InstSimplify/fptoi-sat.ll
+++ b/llvm/test/Transforms/InstSimplify/fptoi-sat.ll
@@ -139,7 +139,8 @@ define i32 @fptosi_f64_to_i32_neg_inf() {
 
 define i32 @fptosi_f64_to_i32_nan1() {
 ; CHECK-LABEL: @fptosi_f64_to_i32_nan1(
-; CHECK-NEXT:    ret i32 0
+; CHECK-NEXT:    [[R:%.*]] = call i32 @llvm.fptosi.sat.i32.f64(double 0x7FF8000000000000)
+; CHECK-NEXT:    ret i32 [[R]]
 ;
   %r = call i32 @llvm.fptosi.sat.i32.f64(double 0x7ff8000000000000)
   ret i32 %r
@@ -147,7 +148,8 @@ define i32 @fptosi_f64_to_i32_nan1() {
 
 define i32 @fptosi_f64_to_i32_nan2() {
 ; CHECK-LABEL: @fptosi_f64_to_i32_nan2(
-; CHECK-NEXT:    ret i32 0
+; CHECK-NEXT:    [[R:%.*]] = call i32 @llvm.fptosi.sat.i32.f64(double 0x7FF4000000000000)
+; CHECK-NEXT:    ret i32 [[R]]
 ;
   %r = call i32 @llvm.fptosi.sat.i32.f64(double 0x7ff4000000000000)
   ret i32 %r
@@ -155,7 +157,8 @@ defi...
[truncated]

@llvmbot (Member)

llvmbot commented Nov 11, 2025

@llvm/pr-subscribers-llvm-transforms

(Same PR description and truncated patch as in the previous comment.)
-            return nullptr;
-        }
-      } else if (U.isSignaling()) {
-        std::optional<fp::ExceptionBehavior> EB = CI->getExceptionBehavior();
-        if (EB && *EB != fp::ebIgnore)
-          return nullptr;
-        U = APFloat::getQNaN(U.getSemantics());
-      }
-      return ConstantFP::get(Ty->getContext(), U);
-    }
-
     // NVVM float/double to signed/unsigned int32/int64 conversions:
     switch (IntrinsicID) {
     // f2i
@@ -2686,6 +2619,84 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
     }
     }
 
+    // NaN is not a value, it represents an error, don't constfold it.
+    if (U.isNaN())
+      return nullptr;
+
+    // Use internal versions of these intrinsics.
+
+    if (IntrinsicID == Intrinsic::nearbyint || IntrinsicID == Intrinsic::rint) {
+      U.roundToIntegral(APFloat::rmNearestTiesToEven);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    if (IntrinsicID == Intrinsic::round) {
+      U.roundToIntegral(APFloat::rmNearestTiesToAway);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    if (IntrinsicID == Intrinsic::roundeven) {
+      U.roundToIntegral(APFloat::rmNearestTiesToEven);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    if (IntrinsicID == Intrinsic::ceil) {
+      U.roundToIntegral(APFloat::rmTowardPositive);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    if (IntrinsicID == Intrinsic::floor) {
+      U.roundToIntegral(APFloat::rmTowardNegative);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    if (IntrinsicID == Intrinsic::trunc) {
+      U.roundToIntegral(APFloat::rmTowardZero);
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
+    // Rounding operations (floor, trunc, ceil, round and nearbyint) do not
+    // raise FP exceptions, unless the argument is signaling NaN.
+
+    std::optional<APFloat::roundingMode> RM;
+    switch (IntrinsicID) {
+    default:
+      break;
+    case Intrinsic::experimental_constrained_nearbyint:
+    case Intrinsic::experimental_constrained_rint: {
+      auto CI = cast<ConstrainedFPIntrinsic>(Call);
+      RM = CI->getRoundingMode();
+      if (!RM || *RM == RoundingMode::Dynamic)
+        return nullptr;
+      break;
+    }
+    case Intrinsic::experimental_constrained_round:
+      RM = APFloat::rmNearestTiesToAway;
+      break;
+    case Intrinsic::experimental_constrained_ceil:
+      RM = APFloat::rmTowardPositive;
+      break;
+    case Intrinsic::experimental_constrained_floor:
+      RM = APFloat::rmTowardNegative;
+      break;
+    case Intrinsic::experimental_constrained_trunc:
+      RM = APFloat::rmTowardZero;
+      break;
+    }
+    if (RM) {
+      auto CI = cast<ConstrainedFPIntrinsic>(Call);
+      if (U.isFinite()) {
+        APFloat::opStatus St = U.roundToIntegral(*RM);
+        if (IntrinsicID == Intrinsic::experimental_constrained_rint &&
+            St == APFloat::opInexact) {
+          std::optional<fp::ExceptionBehavior> EB = CI->getExceptionBehavior();
+          if (EB == fp::ebStrict)
+            return nullptr;
+        }
+      }
+      return ConstantFP::get(Ty->getContext(), U);
+    }
+
     /// We only fold functions with finite arguments. Folding NaN and inf is
     /// likely to be aborted with an exception anyway, and some host libms
     /// have known errors raising exceptions.
@@ -3182,6 +3193,8 @@ static Constant *ConstantFoldLibCall2(StringRef Name, Type *Ty,
 
   const APFloat &Op1V = Op1->getValueAPF();
   const APFloat &Op2V = Op2->getValueAPF();
+  if (Op1V.isNaN() || Op2V.isNaN())
+    return nullptr;
 
   switch (Func) {
   default:
@@ -3196,16 +3209,16 @@ static Constant *ConstantFoldLibCall2(StringRef Name, Type *Ty,
   case LibFunc_fmod:
   case LibFunc_fmodf:
     if (TLI->has(Func)) {
-      APFloat V = Op1->getValueAPF();
-      if (APFloat::opStatus::opOK == V.mod(Op2->getValueAPF()))
+      APFloat V = Op1V;
+      if (APFloat::opStatus::opOK == V.mod(Op2V))
         return ConstantFP::get(Ty->getContext(), V);
     }
     break;
   case LibFunc_remainder:
   case LibFunc_remainderf:
     if (TLI->has(Func)) {
-      APFloat V = Op1->getValueAPF();
-      if (APFloat::opStatus::opOK == V.remainder(Op2->getValueAPF()))
+      APFloat V = Op1V;
+      if (APFloat::opStatus::opOK == V.remainder(Op2V))
         return ConstantFP::get(Ty->getContext(), V);
     }
     break;
@@ -3299,6 +3312,8 @@ static Constant *ConstantFoldIntrinsicCall2(Intrinsic::ID IntrinsicID, Type *Ty,
 
       if (const auto *ConstrIntr =
               dyn_cast_if_present<ConstrainedFPIntrinsic>(Call)) {
+        if (Op1V.isNaN() || Op2V.isNaN())
+          return nullptr;
         RoundingMode RM = getEvaluationRoundingMode(ConstrIntr);
         APFloat Res = Op1V;
         APFloat::opStatus St;
@@ -3519,6 +3534,8 @@ static Constant *ConstantFoldIntrinsicCall2(Intrinsic::ID IntrinsicID, Type *Ty,
       default:
         break;
       case Intrinsic::pow:
+        if (Op1V.isNaN() || Op2V.isNaN())
+          return nullptr;
         return ConstantFoldBinaryFP(pow, Op1V, Op2V, Ty);
       case Intrinsic::amdgcn_fmul_legacy:
         // The legacy behaviour is that multiplying +/- 0.0 by anything, even
@@ -3531,6 +3548,8 @@ static Constant *ConstantFoldIntrinsicCall2(Intrinsic::ID IntrinsicID, Type *Ty,
     } else if (auto *Op2C = dyn_cast<ConstantInt>(Operands[1])) {
       switch (IntrinsicID) {
       case Intrinsic::ldexp: {
+        if (Op1V.isNaN())
+          return nullptr;
         return ConstantFP::get(
             Ty->getContext(),
             scalbn(Op1V, Op2C->getSExtValue(), APFloat::rmNearestTiesToEven));
@@ -3551,6 +3570,8 @@ static Constant *ConstantFoldIntrinsicCall2(Intrinsic::ID IntrinsicID, Type *Ty,
         return ConstantInt::get(Ty, Result);
       }
       case Intrinsic::powi: {
+        if (Op1V.isNaN())
+          return nullptr;
         int Exp = static_cast<int>(Op2C->getSExtValue());
         switch (Ty->getTypeID()) {
         case Type::HalfTyID:
@@ -3893,6 +3914,9 @@ static Constant *ConstantFoldScalarCall3(StringRef Name,
         const APFloat &C3 = Op3->getValueAPF();
 
         if (const auto *ConstrIntr = dyn_cast<ConstrainedFPIntrinsic>(Call)) {
+          if (C1.isNaN() || C2.isNaN() || C3.isNaN())
+            return nullptr;
+
           RoundingMode RM = getEvaluationRoundingMode(ConstrIntr);
           APFloat Res = C1;
           APFloat::opStatus St;
@@ -3924,6 +3948,8 @@ static Constant *ConstantFoldScalarCall3(StringRef Name,
         }
         case Intrinsic::fma:
         case Intrinsic::fmuladd: {
+          if (C1.isNaN() || C2.isNaN() || C3.isNaN())
+            return nullptr;
           APFloat V = C1;
           V.fusedMultiplyAdd(C2, C3, APFloat::rmNearestTiesToEven);
           return ConstantFP::get(Ty->getContext(), V);
@@ -4333,6 +4359,9 @@ ConstantFoldScalarFrexpCall(Constant *Op, Type *IntTy) {
     return {};
 
   const APFloat &U = ConstFP->getValueAPF();
+  if (U.isNaN())
+    return {};
+
   int FrexpExp;
   APFloat FrexpMant = frexp(U, FrexpExp, APFloat::rmNearestTiesToEven);
   Constant *Result0 = ConstantFP::get(ConstFP->getType(), FrexpMant);
diff --git a/llvm/lib/IR/ConstantFold.cpp b/llvm/lib/IR/ConstantFold.cpp
index 6a9ef2efa321f..8783029b120fa 100644
--- a/llvm/lib/IR/ConstantFold.cpp
+++ b/llvm/lib/IR/ConstantFold.cpp
@@ -845,6 +845,8 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
     if (ConstantFP *CFP2 = dyn_cast<ConstantFP>(C2)) {
       const APFloat &C1V = CFP1->getValueAPF();
       const APFloat &C2V = CFP2->getValueAPF();
+      if (C1V.isNaN() || C2V.isNaN())
+        return nullptr;
       APFloat C3V = C1V;  // copy for modification
       switch (Opcode) {
       default:
diff --git a/llvm/test/Transforms/EarlyCSE/atan.ll b/llvm/test/Transforms/EarlyCSE/atan.ll
index 2b7206c0a6aab..18a7bb5ebff90 100644
--- a/llvm/test/Transforms/EarlyCSE/atan.ll
+++ b/llvm/test/Transforms/EarlyCSE/atan.ll
@@ -136,8 +136,9 @@ define float @callatan2_flush_to_zero() {
 
 define float @callatan2_NaN() {
 ; CHECK-LABEL: @callatan2_NaN(
-; CHECK-NEXT:    ret float 0x7FF8000000000000
-;
+; CHECK-NEXT:    [[CALL:%.*]] = call float @atan2f(float 0x7FF8000000000000, float 0x7FF8000000000000)
+; CHECK-NEXT:    ret float [[CALL]]
+
   %call = call float @atan2f(float 0x7FF8000000000000, float 0x7FF8000000000000)
   ret float %call
 }
diff --git a/llvm/test/Transforms/InstCombine/frexp.ll b/llvm/test/Transforms/InstCombine/frexp.ll
index 6541f0d77a093..6feff85b5d8ba 100644
--- a/llvm/test/Transforms/InstCombine/frexp.ll
+++ b/llvm/test/Transforms/InstCombine/frexp.ll
@@ -199,7 +199,8 @@ define { float, i32 } @frexp_neginf() {
 
 define { float, i32 } @frexp_qnan() {
 ; CHECK-LABEL: define { float, i32 } @frexp_qnan() {
-; CHECK-NEXT:    ret { float, i32 } { float 0x7FF8000000000000, i32 0 }
+; CHECK-NEXT:    [[RET:%.*]] = call { float, i32 } @llvm.frexp.f32.i32(float 0x7FF8000000000000)
+; CHECK-NEXT:    ret { float, i32 } [[RET]]
 ;
   %ret = call { float, i32 } @llvm.frexp.f32.i32(float 0x7FF8000000000000)
   ret { float, i32 } %ret
@@ -207,7 +208,8 @@ define { float, i32 } @frexp_qnan() {
 
 define { float, i32 } @frexp_snan() {
 ; CHECK-LABEL: define { float, i32 } @frexp_snan() {
-; CHECK-NEXT:    ret { float, i32 } { float 0x7FF8000020000000, i32 0 }
+; CHECK-NEXT:    [[RET:%.*]] = call { float, i32 } @llvm.frexp.f32.i32(float 0x7FF0000020000000)
+; CHECK-NEXT:    ret { float, i32 } [[RET]]
 ;
   %ret = call { float, i32 } @llvm.frexp.f32.i32(float bitcast (i32 2139095041 to float))
   ret { float, i32 } %ret
@@ -263,7 +265,8 @@ define { <2 x float>, <2 x i32> } @frexp_splat_4() {
 
 define { <2 x float>, <2 x i32> } @frexp_splat_qnan() {
 ; CHECK-LABEL: define { <2 x float>, <2 x i32> } @frexp_splat_qnan() {
-; CHECK-NEXT:    ret { <2 x float>, <2 x i32> } { <2 x float> splat (float 0x7FF8000000000000), <2 x i32> zeroinitializer }
+; CHECK-NEXT:    [[RET:%.*]] = call { <2 x float>, <2 x i32> } @llvm.frexp.v2f32.v2i32(<2 x float> splat (float 0x7FF8000000000000))
+; CHECK-NEXT:    ret { <2 x float>, <2 x i32> } [[RET]]
 ;
   %ret = call { <2 x float>, <2 x i32> } @llvm.frexp.v2f32.v2i32(<2 x float> <float 0x7FF8000000000000, float 0x7FF8000000000000>)
   ret { <2 x float>, <2 x i32> } %ret
diff --git a/llvm/test/Transforms/InstSimplify/constfold-constrained.ll b/llvm/test/Transforms/InstSimplify/constfold-constrained.ll
index a9ef7f6a765d1..342acf872ae57 100644
--- a/llvm/test/Transforms/InstSimplify/constfold-constrained.ll
+++ b/llvm/test/Transforms/InstSimplify/constfold-constrained.ll
@@ -164,23 +164,24 @@ entry:
   ret double %result
 }
 
-; Verify that trunc(SNAN) is folded to QNAN if the exception behavior mode is 'ignore'.
+; Verify that trunc(SNAN) is not folded if the exception behavior mode is 'ignore'.
 define double @nonfinite_02() #0 {
 ; CHECK-LABEL: @nonfinite_02(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    ret double 0x7FF8000000000000
+; CHECK-NEXT:    [[RESULT:%.*]] = call double @llvm.experimental.constrained.trunc.f64(double 0x7FF4000000000000, metadata !"fpexcept.ignore") #[[ATTR0]]
+; CHECK-NEXT:    ret double [[RESULT]]
 ;
 entry:
   %result = call double @llvm.experimental.constrained.trunc.f64(double 0x7ff4000000000000, metadata !"fpexcept.ignore") #0
   ret double %result
 }
 
-; Verify that trunc(QNAN) is folded even if the exception behavior mode is not 'ignore'.
+; Verify that trunc(QNAN) is not folded if the exception behavior mode is not 'ignore'.
 define double @nonfinite_03() #0 {
 ; CHECK-LABEL: @nonfinite_03(
 ; CHECK-NEXT:  entry:
 ; CHECK-NEXT:    [[RESULT:%.*]] = call double @llvm.experimental.constrained.trunc.f64(double 0x7FF8000000000000, metadata !"fpexcept.strict") #[[ATTR0]]
-; CHECK-NEXT:    ret double 0x7FF8000000000000
+; CHECK-NEXT:    ret double [[RESULT]]
 ;
 entry:
   %result = call double @llvm.experimental.constrained.trunc.f64(double 0x7ff8000000000000, metadata !"fpexcept.strict") #0
@@ -451,7 +452,8 @@ entry:
 define i1 @cmp_eq_03() #0 {
 ; CHECK-LABEL: @cmp_eq_03(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    ret i1 false
+; CHECK-NEXT:    [[RESULT:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f64(double 2.000000e+00, double 0x7FF8000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #[[ATTR0]]
+; CHECK-NEXT:    ret i1 [[RESULT]]
 ;
 entry:
   %result = call i1 @llvm.experimental.constrained.fcmp.f64(double 2.0, double 0x7ff8000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #0
@@ -461,7 +463,8 @@ entry:
 define i1 @cmp_eq_04() #0 {
 ; CHECK-LABEL: @cmp_eq_04(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    ret i1 false
+; CHECK-NEXT:    [[RESULT:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f64(double 2.000000e+00, double 0x7FF4000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #[[ATTR0]]
+; CHECK-NEXT:    ret i1 [[RESULT]]
 ;
 entry:
   %result = call i1 @llvm.experimental.constrained.fcmp.f64(double 2.0, double 0x7ff4000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #0
@@ -471,7 +474,8 @@ entry:
 define i1 @cmp_eq_05() #0 {
 ; CHECK-LABEL: @cmp_eq_05(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    ret i1 false
+; CHECK-NEXT:    [[RESULT:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f64(double 2.000000e+00, double 0x7FF8000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #[[ATTR0]]
+; CHECK-NEXT:    ret i1 [[RESULT]]
 ;
 entry:
   %result = call i1 @llvm.experimental.constrained.fcmps.f64(double 2.0, double 0x7ff8000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #0
@@ -481,7 +485,8 @@ entry:
 define i1 @cmp_eq_06() #0 {
 ; CHECK-LABEL: @cmp_eq_06(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    ret i1 false
+; CHECK-NEXT:    [[RESULT:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f64(double 2.000000e+00, double 0x7FF4000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #[[ATTR0]]
+; CHECK-NEXT:    ret i1 [[RESULT]]
 ;
 entry:
   %result = call i1 @llvm.experimental.constrained.fcmps.f64(double 2.0, double 0x7ff4000000000000, metadata !"oeq", metadata !"fpexcept.ignore") #0
@@ -516,7 +521,7 @@ define i1 @cmp_eq_nan_03() #0 {
 ; CHECK-LABEL: @cmp_eq_nan_03(
 ; CHECK-NEXT:  entry:
 ; CHECK-NEXT:    [[RESULT:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f64(double 0x7FF8000000000000, double 1.000000e+00, metadata !"oeq", metadata !"fpexcept.strict") #[[ATTR0]]
-; CHECK-NEXT:    ret i1 false
+; CHECK-NEXT:    ret i1 [[RESULT]]
 ;
 entry:
   %result = call i1 @llvm.experimental.constrained.fcmp.f64(double 0x7ff8000000000000, double 1.0, metadata !"oeq", metadata !"fpexcept.strict") #0
diff --git a/llvm/test/Transforms/InstSimplify/fptoi-sat.ll b/llvm/test/Transforms/InstSimplify/fptoi-sat.ll
index ef15b1ee7a33a..3f7b982baf935 100644
--- a/llvm/test/Transforms/InstSimplify/fptoi-sat.ll
+++ b/llvm/test/Transforms/InstSimplify/fptoi-sat.ll
@@ -139,7 +139,8 @@ define i32 @fptosi_f64_to_i32_neg_inf() {
 
 define i32 @fptosi_f64_to_i32_nan1() {
 ; CHECK-LABEL: @fptosi_f64_to_i32_nan1(
-; CHECK-NEXT:    ret i32 0
+; CHECK-NEXT:    [[R:%.*]] = call i32 @llvm.fptosi.sat.i32.f64(double 0x7FF8000000000000)
+; CHECK-NEXT:    ret i32 [[R]]
 ;
   %r = call i32 @llvm.fptosi.sat.i32.f64(double 0x7ff8000000000000)
   ret i32 %r
@@ -147,7 +148,8 @@ define i32 @fptosi_f64_to_i32_nan1() {
 
 define i32 @fptosi_f64_to_i32_nan2() {
 ; CHECK-LABEL: @fptosi_f64_to_i32_nan2(
-; CHECK-NEXT:    ret i32 0
+; CHECK-NEXT:    [[R:%.*]] = call i32 @llvm.fptosi.sat.i32.f64(double 0x7FF4000000000000)
+; CHECK-NEXT:    ret i32 [[R]]
 ;
   %r = call i32 @llvm.fptosi.sat.i32.f64(double 0x7ff4000000000000)
   ret i32 %r
@@ -155,7 +157,8 @@ defi...
[truncated]

@arsenm
Contributor

arsenm commented Nov 11, 2025

This should only be blocked in strictfp cases that could observe exceptions

@spavloff
Collaborator Author

The motivation for this patch is that such folding is not useful and causes inconvenience. Are there any benefits to performing this folding?


Labels

backend:RISC-V, llvm:analysis (value tracking, cost tables and constant folding), llvm:instcombine (InstCombine, InstSimplify and AggressiveInstCombine passes), llvm:ir, llvm:transforms


3 participants