Changes v3.2.0 to v3.3.0
CasADi is now able to calculate derivatives using finite difference approximations. To enable this feature, set the `enable_fd` option to `true` for a function object. If the function object has built-in derivative support, you can disable it by setting the options `enable_forward`, `enable_reverse` and `enable_jacobian` to `false`.

The default algorithm is a central difference scheme with automatic step-size selection based on estimates of truncation errors and roundoff errors. You can change this to a cheaper, but less accurate, one-sided scheme by setting `fd_method` to `forward` or `backward`. There is also an experimental discontinuity-avoiding scheme (suitable if the function is differentiated near nonsmooth points), which can be enabled by setting `fd_method` to `smoothing`.
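CasADi's actual implementation is in C++ with automatic step-size selection. Purely to illustrate the trade-off between the central and one-sided schemes, here is a minimal pure-Python sketch with fixed step sizes (which the real implementation chooses adaptively):

```python
import math

def central_diff(f, x, h=1e-6):
    # Central difference: truncation error O(h^2), two extra evaluations per point.
    return (f(x + h) - f(x - h)) / (2 * h)

def forward_diff(f, x, h=1e-8):
    # One-sided scheme: truncation error O(h), cheaper when f(x) is already known.
    return (f(x + h) - f(x)) / h

# Differentiate sin at x = 1; the exact derivative is cos(1).
print(abs(central_diff(math.sin, 1.0) - math.cos(1.0)))
print(abs(forward_diff(math.sin, 1.0) - math.cos(1.0)))
```

The central scheme's error shrinks quadratically with the step size, which is why it is the default despite the extra function evaluation.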
Two sparse direct linear solvers have been added to CasADi's runtime core: one based on an up-looking QR factorization, calculated using Householder reflections, and one sparse direct LDL method (a square-root-free variant of Cholesky). These solvers are available for both SX and MX: for MX as the linear solver plugins `"qr"` and `"ldl"`, and for SX as the methods `SX::qr_sparse` and `SX::ldl`. They also support C code generation (with the exception of LDL in MX).
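The new solvers themselves are sparse and part of CasADi's C runtime. Purely to illustrate what the square-root-free LDL factorization computes (A = L·D·Lᵀ with unit lower-triangular L and diagonal D, avoiding the square roots of Cholesky), here is a small dense Python sketch, not CasADi's implementation:

```python
def ldl_factor(A):
    # Factor symmetric A as A = L * D * L^T; L is unit lower triangular,
    # D is diagonal, and no square roots are taken (unlike Cholesky).
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = A[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * D[k]
                                     for k in range(j))) / D[j]
    return L, D

def ldl_solve(L, D, b):
    # Solve A x = b: forward solve L y = b, diagonal solve D z = y,
    # then back solve L^T x = z.
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    z = [y[i] / D[i] for i in range(n)]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = z[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))
    return x

A = [[4.0, 2.0], [2.0, 3.0]]
L, D = ldl_factor(A)
print(ldl_solve(L, D, [2.0, 5.0]))  # the solution of A x = [2, 5]
```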
A speed bottleneck related to the topological sorting of large MX graphs has been identified and resolved. The complexity of the sorting algorithm is now linear in all cases.
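A linear-time topological sort can be sketched with Kahn's algorithm; this is a generic illustration of the complexity claim, not CasADi's actual sorting code:

```python
from collections import deque

def topological_sort(nodes, edges):
    # Kahn's algorithm: O(V + E), since every node and edge is visited once.
    succ = {n: [] for n in nodes}
    indeg = {n: 0 for n in nodes}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("expression graph contains a cycle")
    return order

# Tiny DAG resembling an expression graph: x and y feed x*y, which feeds sin(x*y).
print(topological_sort(['x', 'y', 'x*y', 'sin'],
                       [('x', 'x*y'), ('y', 'x*y'), ('x*y', 'sin')]))
```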
- `A\y` and `y'/A` now work in MATLAB/Octave
- Matrix power works
- First major release with Opti
- The `shell` compiler now works on Windows, allowing JIT compilation using Visual Studio
- Added introspection methods `instruction_*` that work for SX/MX Functions. See the `accessing_mx_algorithm` example to see how you can walk an MX graph.
- Experimental feature to export SX/MX functions to pure-MATLAB code.
- `DM::rand` creates a matrix with random numbers. `DM::rng` controls the seeding of the random number generator.
- Python interface no longer searches for/links to Python libraries (on Linux, OSX)
- Python interface no longer depends on Numpy at compile-time; CasADi works for any numpy version now
- Python binaries and wheels have come a step closer to true `manylinux`. CasADi should now run on CentOS 5.
The default printout of Function instances is now shorter and consistent across different Function derived classes (SX/MX functions, NLP solvers, integrators, etc.). The new syntax is:
```python
from casadi import *
x = SX.sym('x')
y = SX.sym('y', 2)
f = Function('f', [x, y], [sin(x) + y], ['x', 'y'], ['r'])
print(f)      # f:(x,y[2])->(r[2]) SXFunction
f.disp()      # Equivalent syntax (MATLAB style)
f.disp(True)  # Print algorithm
```
That is, you get a list of inputs and outputs, with dimensions if non-scalar, and the name of the internal class (here SXFunction).
You can also get the name as a string: `str(f)` or `f.str()`. If you want to print the algorithm, pass the optional argument `True`, i.e. `f.str(True)` or `f.disp(True)`.
The C API has seen continued improvements, in particular regarding the handling of external functions with memory allocation. See the user guide for the latest API.
- `inv()` is now more efficient for large `SX`/`DM` matrices, and is evaluatable for `MX` (`csparse` by default). The old variant is still available for `SX` as `inv_minor`, and for `MX` as `inv_node`.
- Linear solver-related defaults are now set to `csparse` as opposed to `symbolicqr`.
- In MATLAB, when the CasADi result is a `vector<bool>`, this gets mapped to a logical matrix. E.g. `which_depends` is affected by this change.
- The sum-of-squares operator is now called `sumsqr` instead of `sum_square`.
- The API of the `Linsol` class has changed.