- Fixed: Bug fix in forward AD implementation of `DM` (fixes #16, thank you @mrakgr)
- Improved: Performance improvement by removing several more `Array.Parallel.map` operations, working better with OpenBLAS multithreading
- Added: Operations involving incompatible dimensions of `DM` will now throw exceptions to warn the user (a short sketch of this appears after this list)
- Improved: Performance improvement thanks to faster `Array2D.copy` operations (thank you Don Syme @dsyme)
- Improved: Significantly faster matrix transposition using the extended BLAS operations `cblas_?omatcopy` provided by OpenBLAS
- Improved: Performance improvement by disabling the parts of the OpenBLAS backend that used `System.Threading.Tasks`, which were interfering with OpenBLAS multithreading. Pending further tests.
- Changed: Updated the Win64 binaries of OpenBLAS to version 0.2.15 (27-10-2015), which includes bug fixes and optimizations. See the OpenBLAS change log for details.
- Fixed: Bug fixes in the reverse AD operation `Sub_D_DM` (fixes #8, thank you @mrakgr)
- Fixed: Bug fix in the benchmarking module that caused incorrect reporting of the AD overhead factor
- Improved: Documentation updates
- Improved: Overall performance improvements with parallelization and memory reshaping in the OpenBLAS backend
- Fixed: Bug fixes in reverse AD
- Fixed: Bug fixes in
- Added: New operation `primalDeep` for obtaining the deepest primal value in nested AD values (illustrated in a sketch after this list)
- Fixed: Bug fix in
- Added: Support for visualizations with configurable Unicode/ASCII palette and contrast
- Added: Fast reshape operations
- Fixed: Bug fixes for reverse AD
- Added: New methods
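The dimension-checking behaviour mentioned above can be pictured with a small F# sketch. This is illustrative only: it assumes the `DiffSharp.AD.Float64` module and a `toDM` constructor that builds a matrix from a row-major list of rows, as shown in the DiffSharp documentation; the exact exception type thrown on a mismatch is not specified here.

```fsharp
open DiffSharp.AD.Float64

// Two matrices with incompatible dimensions: 2x2 and 1x3.
let a = toDM [[1.; 2.]; [3.; 4.]]
let b = toDM [[1.; 2.; 3.]]

try
    // Adding mismatched shapes now raises an exception
    // instead of silently producing a wrong result.
    let c = a + b
    printfn "%A" c
with ex ->
    printfn "Incompatible dimensions: %s" ex.Message
```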
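Similarly, a rough sketch of `primalDeep` with nested AD. It assumes that both `primal` and `primalDeep` are exposed as functions on `D` values; whether `primalDeep` is a function or a property is an assumption here.

```fsharp
open DiffSharp.AD.Float64

// A nested derivative: the inner diff runs inside the function being
// differentiated by the outer diff, so intermediate values carry
// several levels of AD wrapping.
let v = diff (fun x -> x * diff (fun y -> x * y) (D 3.)) (D 2.)

// primal strips one level of wrapping; primalDeep is assumed to strip
// all levels, returning the innermost primal value.
let p  = primal v
let pd = primalDeep v
printfn "%A %A" p pd
```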
Version 0.7.0 is a reimplementation of the library with support for linear algebra primitives, BLAS/LAPACK, 32- and 64-bit precision, and different CPU/GPU backends.
- Changed: Namespaces have been reorganized and simplified. This is a breaking change. There is now just one AD implementation, under `DiffSharp.AD` (with `DiffSharp.AD.Float32` and `DiffSharp.AD.Float64` variants, see below). This internally makes use of forward or reverse AD as needed.
- Added: Support for 32-bit (single precision) and 64-bit (double precision) floating point operations. All modules have `Float32` and `Float64` versions providing the same functionality with the specified precision. 32-bit floating point operations are significantly faster (as much as twice as fast) on many current systems. (A short precision sketch appears after this list.)
- Added: DiffSharp now uses the OpenBLAS library by default for linear algebra operations. The AD operations with the types `D` for scalars, `DV` for vectors, and `DM` for matrices use the underlying linear algebra backend for highly optimized native BLAS and LAPACK operations. For non-BLAS operations (such as Hadamard products and matrix transpose), parallel implementations in managed code are used. All operations with the `D`, `DV`, and `DM` types support forward and reverse nested AD up to any level. This also paves the way for GPU backends (CUDA/CuBLAS), which will be introduced in following releases. Please see the documentation and API reference for information about how to use the `D`, `DV`, and `DM` types (a short usage sketch appears after this list). (Deprecated: The FsAlg generic linear algebra library and the `Vector<'T>` and `Matrix<'T>` types are no longer used.)
- Fixed: Reverse mode AD has been reimplemented in a tail-recursive way for better performance and to prevent the StackOverflow exceptions encountered in previous versions.
- Changed: The library now uses F# 4.0 (FSharp.Core 4.4.0.0).
- Changed: The library is now 64-bit only, meaning that users should set "x64" as the platform target for all build configurations.
- Fixed: Overall bug fixes.
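To illustrate the single AD namespace and the per-precision modules described above, here is a minimal sketch. It assumes the `D` constructor and the `diff'` (value-and-derivative) operator from the DiffSharp documentation; it is not taken verbatim from this release.

```fsharp
// Switch precision by changing only the opened module:
// DiffSharp.AD.Float32 uses single precision, DiffSharp.AD.Float64 double.
open DiffSharp.AD.Float64

// A scalar-to-scalar function written against the D type.
let f (x: D) = sin (x * x)

// diff' returns the original value together with the derivative,
// using forward or reverse AD internally as needed.
let y, dy = diff' f (D 1.5)
printfn "f(1.5) = %A  f'(1.5) = %A" y dy
```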
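And a short sketch of the `D`/`DV`/`DM` types with the vector and matrix differentiation operators. Again this is illustrative and assumes the `toDV`/`toDM` constructors and the `grad`/`jacobian` operators from the DiffSharp documentation.

```fsharp
open DiffSharp.AD.Float64

// Vector-to-scalar function; grad computes its gradient (reverse AD internally).
let f (v: DV) = sin v.[0] * exp v.[1]
let g = grad f (toDV [0.5; 1.2])

// Vector-to-vector function; jacobian returns the full Jacobian as a DM.
let h (v: DV) = toDV [v.[0] * v.[1]; v.[0] + v.[1]]
let j = jacobian h (toDV [2.; 3.])

printfn "gradient: %A" g
printfn "Jacobian: %A" j
```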