TMB uses the following BLAS kernels when calculating the function value and derivatives:
If your model spends a significant amount of time in these BLAS operations, you may benefit from an optimized BLAS library, e.g. MKL or OpenBLAS on the CPU, or nvblas on the GPU. For a good result it is critical that:
- All required BLAS kernels are part of the library (possibly not currently the case for nvblas).
- The library does not add significant overhead for small matrices (OpenBLAS has had problems with this; it is unclear whether that is still the case).
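The small-matrix overhead concern can be probed from any program linked against the BLAS in question. Below is a minimal sketch in Python/NumPy (an assumption of this example, not part of TMB itself): NumPy dispatches `@` on float64 matrices to the linked BLAS `dgemm` kernel, so timing many tiny products against one large product shows whether per-call overhead dominates for small matrices.

```python
import time
import numpy as np

# Report which BLAS implementation NumPy is linked against
# (OpenBLAS, MKL, reference BLAS, ...).
np.show_config()

rng = np.random.default_rng(0)

# Many small products: per-call BLAS overhead dominates here.
small = rng.standard_normal((20000, 4, 4))
t0 = time.perf_counter()
acc = np.zeros((4, 4))
for a in small:
    acc += a @ a          # each '@' is one tiny dgemm call
t_small = time.perf_counter() - t0

# One large product: the kernel's raw throughput dominates here.
big = rng.standard_normal((500, 500))
t0 = time.perf_counter()
prod = big @ big          # one large dgemm call
t_large = time.perf_counter() - t0

print(f"20000 small (4x4) products: {t_small:.4f} s")
print(f"one large (500x500) product: {t_large:.4f} s")
```

If the small-product loop is disproportionately slow under one BLAS compared to another, that library is adding per-call overhead that will also hurt a model whose derivative calculations involve many small matrices.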