
Changes v3.2.0 to v3.3.0

Joel Andersson edited this page Nov 15, 2017 · 17 revisions

New and improved features

Support for finite differences

CasADi is now able to calculate derivatives using finite-difference approximations. To enable this feature, set the "enable_fd" option to true for a function object. If the function object has built-in derivative support, you can disable that support by setting the options "enable_forward", "enable_reverse" and "enable_jacobian" to false.

The default algorithm is a central difference scheme with automatic step-size selection based on estimates of truncation errors and roundoff errors. You can change this to a (cheaper, but less accurate) one-sided scheme by setting "fd_method" to "forward" or "backward". There is also an experimental discontinuity-avoiding scheme (suitable if the function is differentiated near nonsmooth points) that can be enabled by setting "fd_method" to "smoothing".
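The trade-off between the schemes can be illustrated in plain Python. This is only a sketch of the underlying idea with a fixed step size, not CasADi's implementation (which selects the step size automatically):

```python
import math

def forward_diff(f, x, h=1e-6):
    # One-sided (forward) scheme: one extra evaluation, O(h) truncation error
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-6):
    # Central scheme: two extra evaluations, O(h^2) truncation error
    return (f(x + h) - f(x - h)) / (2 * h)

# Derivative of sin at 1.0 is cos(1.0)
err_fwd = abs(forward_diff(math.sin, 1.0) - math.cos(1.0))
err_ctr = abs(central_diff(math.sin, 1.0) - math.cos(1.0))
assert err_ctr < err_fwd  # central is more accurate at the same step size
```

At the same step size, the central scheme is markedly more accurate, at the cost of one extra function evaluation per perturbed direction.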

New linear solvers with support for C code generation

Two sparse direct linear solvers have been added to CasADi's runtime core: one based on an up-looking QR factorization, calculated using Householder reflections, and one sparse direct LDL method (the square-root-free variant of Cholesky). These solvers are available for both SX and MX: for MX as the linear solver plugins "qr" and "ldl", and for SX as the methods "SX::qr_sparse" and "SX::ldl". They also support C code generation (with the exception of LDL in MX).
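The square-root-free idea behind LDL can be sketched in a few lines. This is a dense, plain-Python illustration of the textbook factorization A = L·D·Lᵀ (L unit lower triangular, D diagonal, no square roots needed), not CasADi's sparse implementation:

```python
def ldl(A):
    # Dense LDL^T factorization of a symmetric matrix A (list of lists)
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = A[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * D[k]
                                     for k in range(j))) / D[j]
    return L, D

def ldl_solve(L, D, b):
    # Solve A x = b given A = L D L^T
    n = len(b)
    y = list(b)
    for i in range(n):              # forward substitution: L y = b
        y[i] -= sum(L[i][k] * y[k] for k in range(i))
    for i in range(n):              # diagonal solve: D z = y
        y[i] /= D[i]
    for i in reversed(range(n)):    # back substitution: L^T x = z
        y[i] -= sum(L[k][i] * y[k] for k in range(i + 1, n))
    return y

A = [[4.0, 2.0], [2.0, 3.0]]
x = ldl_solve(*ldl(A), [2.0, 1.0])  # x = [0.5, 0.0]
```

Because the factorization uses only field operations (no square roots), the same algorithm works unchanged on symbolic SX/MX expressions, which is what makes it attractive for code generation.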

Faster symbolic processing of MX graphs

A speed bottleneck related to the topological sorting of large MX graphs has been identified and resolved. The complexity of the sorting algorithm is now linear in all cases.

Other improvements

  • A\y and y'/A now work in Matlab/Octave
  • Matrix power works
  • First major release with Opti
  • The shell compiler now works on Windows, allowing JIT compilation using Visual Studio
  • Added introspection methods instruction_* that work for SX/MX Functions. See the accessing_mx_algorithm example for how to walk an MX graph.
  • Experimental feature to export SX/MX functions to pure-Matlab code.
  • DM::rand creates a matrix with random numbers. DM::rng controls the seeding of the random number generator.

Distribution/build system

  • Python interface no longer searches for/links to Python libraries (on Linux, OSX)
  • Python interface no longer depends on Numpy at compile-time; CasADi works for any numpy version now
  • Python binaries and wheels have come a step closer to true manylinux. CasADi should now run on CentOS 5.

API changes

Refactored printing of function objects

The default printout of Function instances is now shorter and consistent across different Function derived classes (SX/MX functions, NLP solvers, integrators, etc.). The new syntax is:

from casadi import *
x = SX.sym('x')
y = SX.sym('y',2)
f = Function('f', [x,y],[sin(x)+y], ['x', 'y'], ['r'])
print(f) # f:(x,y[2])->(r[2]) SXFunction
f.disp() # Equivalent syntax (MATLAB style)
f.disp(True) # Print algorithm

I.e. you get a list of inputs and outputs, with dimensions if non-scalar, and the name of the internal class (here SXFunction). You can also get the printout as a string: str(f) or f.str(). If you want to print the algorithm, pass an optional argument "True", i.e. f.str(True) or f.disp(True).

Changes to the codegen C API

The C API has seen continued improvements, in particular regarding the handling of external functions with memory allocation. See the user guide for the latest API.

Other changes

  • inv() is now more efficient for large SX/DM matrices, and can be evaluated for MX (using csparse by default). The old variant is still available for SX/MX as inv_minor, and for MX as inv_node.
  • Linear solver-related defaults are now set to csparse as opposed to symbolicqr
  • In Matlab, when the CasADi result is a vector<bool>, this gets mapped to a logical matrix. E.g. which_depends is affected by this change.
  • The sum-of-squares operator is now called sumsqr instead of sum_square.
  • The API of the Linsol class has changed.