Grab a binary from the table (for MATLAB, use the newest compatible version below):
- Matlab: R2016a or later, R2015a or later, R2014b or later, R2013a or R2013b
- Octave: 4.4.1 (32bit / 64bit), 4.4.0 (32bit / 64bit)
- Python: Py27 (32bit* / 64bit*), Py35 (32bit* / 64bit*), Py36 (32bit* / 64bit*), Py37 (32bit* / 64bit*)
(*) Check your Python console to see whether you need 32bit or 64bit; the bitness is printed at startup.
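If nothing is printed at startup, the bitness can also be queried programmatically; a minimal sketch using only the standard library:

```python
import struct
import platform

# Pointer size in bits: 32 on a 32-bit interpreter, 64 on a 64-bit one
bits = struct.calcsize("P") * 8
print(f"This Python interpreter is {bits}-bit")

# platform.architecture() reports the same information as a string
print(platform.architecture()[0])  # e.g. '64bit'
```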
Unzip in your home directory and adapt the path:
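For Python, adapting the path can be done from within the interpreter; a minimal sketch, where the unzip location is hypothetical and should be adjusted to wherever you extracted the archive:

```python
import sys
from pathlib import Path

# Hypothetical unzip location -- adjust to where you extracted the archive
casadi_root = Path.home() / "casadi-py27-v3.4.5"
sys.path.insert(0, str(casadi_root))

# After this, `import casadi` should pick up the unzipped package
```

Alternatively, set the `PYTHONPATH` environment variable to the same directory before starting Python.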
Get started with the example pack.
Getting the error "CasADi is not running from its package context." in Python? Check that you have
`casadi-py27-v3.4.5/casadi/casadi.py`. If you have
`casadi-py27-v3.4.5/casadi.py` instead, that's not good; add an extra `casadi` folder level.
Credit where credit is due: Proper attribution of linear solver routines, reimplementation of code generation for linear solvers #2158, #2134
CasADi 3.3 introduced support for two sparse direct linear solvers, based on sparse direct QR factorization and sparse direct LDL factorization, respectively. In the release notes and in the code, it was not made sufficiently clear that parts of these routines could be considered derivative works of CSparse and LDL, respectively, both under copyright of Tim Davis. In the current release, routines derived from CSparse and LDL are clearly marked as such and are to be considered derivative work under LGPL. All these routines reside inside the
Since CasADi, CSparse and LDL all have the same open-source license (LGPL), this will not introduce any additional restrictions for users.
Since C code generated from CasADi is not LGPL (allowing CasADi users to use the generated code freely), all CSparse- and LDL-derived routines have been removed or replaced in CasADi's C runtime. This means that code generation for CasADi's 'qr' and 'ldl' is now possible without any additional license restrictions. A number of bugs have also been resolved.
Parametric sensitivity for NLP solvers #724
CasADi 3.4 introduces differentiability for NLP solver instances in CasADi. Derivatives can be calculated efficiently with either forward or reverse mode algorithmic differentiation. We will detail this functionality in future publications; in the meantime, feel free to reach out to Joel if you have questions about it. The implementation is based on applying derivative propagation rules, via the implicit function theorem, to the nonlinear KKT system. It is part of the NLP solver base class and should in principle work with any NLP solver, although the factorization and solution of the KKT system (based on the sparse QR above) is likely to be a speed bottleneck in applications. The derivative calculations also require accurate Lagrange multipliers, in particular with the correct signs for all multipliers. Functions that calculate parametric sensitivities for a particular system can be generated as C code.
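The implicit-function-theorem idea can be illustrated on a tiny equality-constrained problem. This is an illustrative sketch, not CasADi's actual implementation: for minimize (x1 - p)^2 + x2^2 subject to x1 + x2 = 1, differentiating the KKT conditions with respect to p yields a linear system in the sensitivities (dx/dp, dlambda/dp) whose matrix is exactly the KKT matrix.

```python
# Illustrative sketch (not CasADi API): parametric sensitivities via the
# implicit function theorem on the KKT system of
#   minimize (x1 - p)^2 + x2^2   subject to   x1 + x2 = 1
#
# KKT conditions: 2*(x1 - p) + lam = 0
#                 2*x2       + lam = 0
#                 x1 + x2 - 1      = 0
# Differentiating w.r.t. p:  K @ d/dp [x1, x2, lam] = [2, 0, 0]

def solve3(A, b):
    """Tiny dense Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# KKT matrix: [Hessian, Jacobian^T; Jacobian, 0]
K = [[2.0, 0.0, 1.0],
     [0.0, 2.0, 1.0],
     [1.0, 1.0, 0.0]]
rhs = [2.0, 0.0, 0.0]          # -d(KKT)/dp moved to the right-hand side
dx1, dx2, dlam = solve3(K, rhs)
print(dx1, dx2, dlam)          # analytic sensitivities: 0.5, -0.5, 1.0
```

The same factorization of the KKT matrix can be reused for every parameter direction, which is what makes the approach cheap once the NLP has been solved.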
A primal-dual active set method for quadratic programming
The parametric sensitivity analysis for NLP solvers, detailed above, is only as good as the multipliers you provide to it. Multipliers from an interior point method such as IPOPT are usually not accurate enough to be used for the parametric sensitivity analysis, which in particular relies on knowledge of the active set. For this reason, we have started work on a primal-dual active set method for quadratic programming. The method relies on the same factorization of the linearized KKT system as the parametric sensitivity analysis and will support C code generation. The solver is available as the "activeset" plugin in CasADi. The method is still work-in-progress and in particular performs poorly if the Hessian matrix is not strictly positive definite.
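The flavor of an active-set method can be conveyed with a heavily simplified loop for a strictly convex box-constrained QP. This is a sketch only; CasADi's "activeset" plugin handles general QPs and is implemented differently:

```python
# Sketch only: a simplified active-set loop for a strictly convex
# box-constrained QP (NOT CasADi's "activeset" plugin):
#   minimize 0.5*x'Hx + g'x   subject to   lb <= x <= ub

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting (sufficient for a sketch)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def qp_active_set(H, g, lb, ub, max_iter=20):
    n = len(g)
    active = [None] * n            # None, 'lb' or 'ub' per variable
    for _ in range(max_iter):
        free = [i for i in range(n) if active[i] is None]
        x = [lb[i] if active[i] == 'lb' else ub[i] if active[i] == 'ub' else 0.0
             for i in range(n)]
        if free:
            # Solve the reduced system for the free variables
            A = [[H[i][j] for j in free] for i in free]
            b = [-g[i] - sum(H[i][j] * x[j] for j in range(n) if j not in free)
                 for i in free]
            for i, v in zip(free, solve_dense(A, b)):
                x[i] = v
        grad = [sum(H[i][j] * x[j] for j in range(n)) + g[i] for i in range(n)]
        changed = False
        for i in free:             # clamp free variables that left the box
            if x[i] < lb[i]:
                active[i], changed = 'lb', True
            elif x[i] > ub[i]:
                active[i], changed = 'ub', True
        if not changed:
            for i in range(n):     # release constraints with wrong-sign multipliers
                if active[i] == 'lb' and grad[i] < 0:
                    active[i], changed = None, True
                elif active[i] == 'ub' and grad[i] > 0:
                    active[i], changed = None, True
        if not changed:
            return x
    raise RuntimeError("did not converge")

H = [[2.0, 0.0], [0.0, 2.0]]
g = [-8.0, 1.0]
x = qp_active_set(H, g, [0.0, 0.0], [2.0, 2.0])
print(x)   # unconstrained optimum [4, -0.5] is clipped to [2.0, 0.0]
```

On termination the active set is known exactly, and the multiplier signs are clean, which is precisely the information the parametric sensitivity analysis above needs.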
Changes in Opti
- `describe` methods in Matlab now follow the index-1 based convention.
- Added `show_infeasibilities` to help debug infeasible problems.
Changes in existing functionality
- Some CasADi operations failed when the product of rows and columns of a matrix was larger than `2^31-1`. This limit has been raised to `2^63-1` by changing CasADi integer types to 64-bit. The change is hidden for Python/Octave/Matlab users, but C++ users may be affected.
- Fixed various bottlenecks in large scale MX Function initialization
- Non-zero location reports for NaN/Inf now follow index-1 based convention in Matlab interface.
- SX Functions can be serialized/pickled/saved now.
- Added for-loop equivalents to the users guide.
- New backend for parallel maps: "thread" target, shipped in the binaries.
- Uniform 'success' flag in
- Added an `evalf` function to numerically evaluate an SX/MX matrix that does not depend on any symbols.
- Added `cumsum` (follows the Matlab convention).
- Added a rootfinder plugin ('fast_newton') that can be code-generated.
- Added binary search for the Linear/BSpline interpolants; used by default for grid dimensions >= 100.
- Binaries now come with a large set of plugins enabled
- Binaries ship with "thread" parallelization
- Binaries are hosted on Github instead of Sourceforge
- Default build mode is `Release` once again (as was always intended).
- CasADi passes with
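The binary-search grid lookup mentioned in the interpolant bullet above can be sketched in a few lines. This is an illustration of the technique, not CasADi's implementation:

```python
from bisect import bisect_right

def linear_interp(grid, values, t):
    """Piecewise-linear interpolation with O(log n) interval lookup.

    Illustrates the binary-search lookup; a linear scan over the grid
    would cost O(n) per query, which matters for large grids.
    """
    # Locate the interval [grid[k], grid[k+1]] containing t
    k = bisect_right(grid, t) - 1
    k = max(0, min(k, len(grid) - 2))   # clamp to extrapolate at the ends
    w = (t - grid[k]) / (grid[k + 1] - grid[k])
    return (1.0 - w) * values[k] + w * values[k + 1]

grid = [0.0, 1.0, 2.0, 4.0]
values = [0.0, 1.0, 4.0, 16.0]           # samples of f(t) = t^2
print(linear_interp(grid, values, 3.0))  # midpoint of [2, 4] -> 10.0
```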