Releases: DiffSharp/DiffSharp
0.7.4
- Improved: Overall performance improvements with parallelization and memory reshaping in the OpenBLAS backend
- Fixed: Bug fixes in reverse AD `Make_DM_ofDV` and `DV.Append`
- Fixed: Bug fixes in `DM` operations `map2Cols`, `map2Rows`, `mapi2Cols`, and `mapi2Rows`
- Added: New operation `primalDeep` for the deepest primal value in nested AD values
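The idea behind a "deepest primal" operation is that in nested AD, the primal of a forward-mode value may itself be an AD value, so towers of wrappers build up. As a language-agnostic illustration only (a hypothetical Python sketch with invented names, not DiffSharp's F# implementation):

```python
# Hypothetical sketch of nested forward-mode values: the primal of a
# Dual may itself be a Dual, so nesting produces towers of AD values.
class Dual:
    def __init__(self, primal, tangent):
        self.primal = primal   # a plain number, or another Dual when nested
        self.tangent = tangent

def primal_deep(x):
    # Keep unwrapping until a plain number remains -- the idea behind
    # DiffSharp's `primalDeep`.
    while isinstance(x, Dual):
        x = x.primal
    return x

nested = Dual(Dual(3.0, 1.0), Dual(1.0, 0.0))  # two levels of nesting
print(primal_deep(nested))  # → 3.0
```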
0.7.3
0.7.2
0.7.1
0.7.0
Version 0.7.0 is a reimplementation of the library with support for linear algebra primitives, BLAS/LAPACK, 32- and 64-bit precision, and different CPU/GPU backends.
- Changed: Namespaces have been reorganized and simplified. This is a breaking change. There is now just one AD implementation, under `DiffSharp.AD` (with `DiffSharp.AD.Float32` and `DiffSharp.AD.Float64` variants, see below), which internally makes use of forward or reverse AD as needed.
- Added: Support for 32-bit (single-precision) and 64-bit (double-precision) floating point operations. All modules have `Float32` and `Float64` versions providing the same functionality with the specified precision. 32-bit floating point operations are significantly faster (as much as twice as fast) on many current systems.
- Added: DiffSharp now uses the OpenBLAS library by default for linear algebra operations. The AD operations with the types `D` for scalars, `DV` for vectors, and `DM` for matrices use the underlying linear algebra backend for highly optimized native BLAS and LAPACK operations. For non-BLAS operations (such as Hadamard products and matrix transpose), parallel implementations in managed code are used. All operations with the `D`, `DV`, and `DM` types support forward and reverse nested AD up to any level. This also paves the way for GPU backends (CUDA/cuBLAS), which will be introduced in following releases. Please see the documentation and API reference for information about how to use the `D`, `DV`, and `DM` types. (Deprecated: The FsAlg generic linear algebra library and the `Vector<'T>` and `Matrix<'T>` types are no longer used.)
- Fixed: Reverse mode AD has been reimplemented in a tail-recursive way, improving performance and preventing the StackOverflow exceptions encountered in previous versions.
- Changed: The library now uses F# 4.0 (FSharp.Core 4.4.0.0).
- Changed: The library is now 64 bit only, meaning that users should set "x64" as the platform target for all build configurations.
- Fixed: Overall bug fixes.
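The tail-recursive reimplementation noted above matters because a naive reverse sweep walks the computation graph recursively and can exhaust the call stack on long computations. The following is a hedged sketch of the general technique in Python (DiffSharp itself is F#, and all names here are invented for illustration): record operations on a tape, then run the backward pass as an explicit loop so stack depth stays constant regardless of graph size.

```python
# Illustrative tape-based reverse-mode AD (invented names, Python rather
# than DiffSharp's F#): the backward sweep is a plain loop over the tape,
# so no recursion depth builds up however long the computation is.
values = []   # primal value of each node
tape = []     # (node_id, [(parent_id, local_partial), ...]) in eval order

def var(x):
    values.append(x)
    return len(values) - 1

def add(a, b):
    out = var(values[a] + values[b])
    tape.append((out, [(a, 1.0), (b, 1.0)]))
    return out

def mul(a, b):
    out = var(values[a] * values[b])
    tape.append((out, [(a, values[b]), (b, values[a])]))
    return out

def backward(out):
    adjoints = [0.0] * len(values)
    adjoints[out] = 1.0
    for node, parents in reversed(tape):   # explicit loop, not recursion
        for parent, local in parents:
            adjoints[parent] += adjoints[node] * local
    return adjoints

# f(x, y) = x*y + x  =>  df/dx = y + 1, df/dy = x
x, y = var(2.0), var(5.0)
g = backward(add(mul(x, y), x))
print(g[x], g[y])  # 6.0 2.0
```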
0.6.3
0.6.2
0.6.1
0.6.0
- Changed: DiffSharp is now released under the LGPL license, allowing use (as a dynamically linked library) in closed-source projects and open-source projects under non-GPL licenses
- Added: Nesting support. The modules `DiffSharp.AD`, `DiffSharp.AD.Forward`, and `DiffSharp.AD.Reverse` are now the main components of the library, providing support for nested AD operations.
- Changed: The library now uses the FsAlg linear algebra library for handling vector and matrix operations and interfaces
- Changed: All AD-enabled numeric types in the library are now called `D`
- Changed: The non-nested modules `DiffSharp.AD.Forward`, `DiffSharp.AD.Forward2`, `DiffSharp.AD.ForwardG`, `DiffSharp.AD.ForwardGH`, `DiffSharp.AD.ForwardN`, and `DiffSharp.AD.Reverse` are now called `DiffSharp.AD.Specialized.Forward1`, `DiffSharp.AD.Specialized.Forward2`, `DiffSharp.AD.Specialized.ForwardG`, `DiffSharp.AD.Specialized.ForwardGH`, `DiffSharp.AD.Specialized.ForwardN`, and `DiffSharp.AD.Specialized.Reverse1`
- Improved: The non-nested `DiffSharp.AD.Specialized.Reverse1` module has been reimplemented from scratch and no longer requires a stack
- Removed: The non-nested `DiffSharp.AD.ForwardReverse` module has been removed. This functionality is now handled by the nested modules.
- Improved: Major rewrite of documentation and examples to reflect the changed library structure
- Improved: Updated benchmarks
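Nesting support means that derivative-taking operations can themselves be differentiated, yielding higher-order derivatives. A minimal sketch of the idea in Python (names and structure are illustrative only, not DiffSharp's actual F# API):

```python
# Illustrative nested forward-mode AD: dual numbers whose components may
# themselves be duals, so `diff` can be applied to its own result.
class Dual:
    def __init__(self, p, t=0.0):
        self.p, self.t = p, t  # primal and tangent; either may be a Dual

    @staticmethod
    def lift(x):
        return x if isinstance(x, Dual) else Dual(x)

    def __add__(self, o):
        o = Dual.lift(o)
        return Dual(self.p + o.p, self.t + o.t)
    __radd__ = __add__

    def __mul__(self, o):
        o = Dual.lift(o)
        return Dual(self.p * o.p, self.t * o.p + self.p * o.t)
    __rmul__ = __mul__

def diff(f, x):
    # Derivative of f at x; because x itself may be a Dual, diff nests.
    return f(Dual(x, 1.0)).t

print(diff(lambda y: y * y * y, 3.0))                     # 27.0 = 3x^2
print(diff(lambda x: diff(lambda y: y * y * y, x), 3.0))  # 18.0 = 6x
```

Applying `diff` to a function that itself calls `diff` wraps duals inside duals, which is exactly the nesting that the `DiffSharp.AD` modules in this release make possible.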