
FAQ: Why am I getting "NaN detected" in my optimization?


What does it mean?

NaNs are typically produced by division by zero or by functions evaluated outside their domain (e.g. the square root of a negative number, the arcsine of a number greater than 1, the logarithm of a negative number, ...).
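For instance, evaluating a CasADi expression outside its domain quietly yields a NaN (a quick sketch):

from casadi import *

x = MX.sym('x')
f = Function('f', [x], [sqrt(x)])
print(f(-1))  # prints nan: square root of a negative number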

At iteration 0

A typical error message is "Error evaluating Jacobian of equality constraints at user provided starting point". The following is a very common cause:

Suppose you have decision variables x,y and a constraint that [x y] should lie on the outside of a circle: sqrt(x^2+y^2)>=3.

This is not innocent. Consider the partial derivative of this constraint with respect to x: x/sqrt(x^2+y^2). When a solver, such as IPOPT, requests CasADi to evaluate the constraint Jacobian (nlp_jac_g), this Jacobian will contain a NaN at x=0, y=0. That situation occurs very easily, since the default initial guess for the decision variables is zero.

The fix is easy: initialize the decision variables at a point where all expressions are well-defined.
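A minimal sketch of both the failure and the fix (the objective here is purely illustrative):

from casadi import *

opti = Opti()

x = opti.variable()
y = opti.variable()

opti.minimize((x-2)**2 + (y-2)**2)  # illustrative objective
opti.subject_to(sqrt(x**2 + y**2) >= 3)

# With the default initial guess x=0, y=0, the Jacobian entry
# x/sqrt(x^2+y^2) evaluates to 0/0 = NaN, and IPOPT fails at iteration 0.
# Starting at a point where the Jacobian is well-defined avoids this:
opti.set_initial(x, 3)
opti.set_initial(y, 0)

opti.solver("ipopt")
sol = opti.solve()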

The same problem might occur with such a square root in the objective (nlp_grad_f).

See also: video

At subsequent iterations

Sometimes, the initial guess is fine, but NaN warnings pop up during the iterations. This is usually harmless: it means the solver planned a step that ends up outside of the validity domain. The solver will simply backtrack (line search) and take a shorter step that stays inside.

Remember, during the iterations, optimization solvers need not respect constraints or bounds. E.g. a constraint sqrt(p)<=2 may still lead to a NaN at an intermediate iterate with negative p, even when a constraint p>=1 is present.

This becomes a problem when the line-search backtracking maxes out. Perhaps a poor initial guess made the solver end up in a very weird spot of the search space; try a more thoughtful initial guess.

Maybe a reformulation can help (e.g. getting rid of a square root, as shown below), or some extra constraints.
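For instance, the circle constraint from before can be squared; this sketch reuses the variables from the earlier snippet:

# Instead of sqrt(x**2 + y**2) >= 3, impose the equivalent squared form:
opti.subject_to(x**2 + y**2 >= 9)
# The Jacobian entries 2*x and 2*y are polynomial and defined everywhere,
# including at the default initial guess x=0, y=0.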

After solving

It may occur that the solver converges nicely, only to stop with an error like "solver:nlp_grad failed: NaN detected for output grad_gamma_p, at (row 6, col 1). Failed to calculate multipliers."

This failure occurs in a post-processing step that calculates the multipliers w.r.t. the parameters. Within nlp_grad, gamma is defined as $\gamma = \lambda_f f(x,p) + \lambda_g^T g(x,p)$.

Output grad_gamma_p computes $\nabla_p \gamma$. The error indicates that this quantity is undefined for the chosen value of $p$.

Here is a minimal example that reproduces the error:

from casadi import *

opti = Opti()

x = opti.variable()
p = opti.parameter()

opti.set_value(p, 0)
opti.minimize(x**2)

# The derivative of this constraint w.r.t. p contains 1/(2*sqrt(p)),
# which is undefined at p=0, so computing lam_p fails after convergence.
opti.subject_to(sin(x-sqrt(p))<=0)
opti.solver("ipopt")

sol = opti.solve()
opti.debug.casadi_solver.get_function('nlp_grad').disp(True)

A quick fix is to disable the computation of lam_p by passing an option:

opti.solver("ipopt", {"calc_lam_p": False})

Debugging

If you suspect sqrt(x^2+y^2) to be a problem, you could monitor it: replace it by sqrt(printme(x^2+y^2,0)) (MATLAB) or sqrt((x**2+y**2).printme(0)) (Python) in the symbolic construction of the optimization problem. When the numerical evaluation starts, you will see the floating-point results printed, prefixed with "0". The integer 0 serves as an identifier to distinguish between multiple instances of printme.
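A minimal sketch of printme in isolation (the function and variable names are illustrative):

from casadi import *

x = MX.sym('x')
y = MX.sym('y')

# Tag the radicand with identifier 0: every numerical evaluation
# prints its value before the square root is taken.
r = sqrt((x**2 + y**2).printme(0))

f = Function('f', [x, y], [r])
f(3, 4)  # prints the monitored value 25, tagged with 0; the result is 5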

The error message typically tells you the row and column of the constraint Jacobian. Rows correspond to constraints (scalarized) and columns to variables (scalarized). If you were using Opti, you may use x_describe/g_describe to find the line in your code where the offending variable and constraint are defined.
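For example (a sketch; the indices are illustrative and correspond to the row and column reported in the error message):

print(opti.debug.g_describe(5))  # where was (scalarized) constraint 5 declared?
print(opti.debug.x_describe(0))  # where was (scalarized) variable 0 declared?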
