@gbaydin released this Dec 25, 2015 · 76 commits to master since this release

  • Fixed: Bug in the forward AD implementation of Sigmoid and ReLU for D, DV, and DM (fixes #16, thank you @mrakgr)
  • Improved: Performance, by removing several more Parallel.For and Array.Parallel.map operations that interfered with OpenBLAS multithreading
  • Added: Operations involving incompatible dimensions of DV and DM now throw exceptions to warn the user
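A minimal sketch of the new dimension check, assuming the `DiffSharp.AD.Float64` module with the `toDV` and `DM.init` constructors of this release series (the exact exception type is not specified in the notes):

```fsharp
open DiffSharp.AD.Float64

let m = DM.init 2 2 (fun i j -> float (i + j))  // 2x2 matrix
let v = toDV [1.; 2.; 3.]                       // 3-element vector

// A 2x2 matrix cannot multiply a 3-vector; as of this release the
// mismatch raises an exception instead of failing silently.
try
    m * v |> printfn "%A"
with ex ->
    printfn "Caught dimension mismatch: %s" ex.Message
```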

@gbaydin released this Dec 15, 2015 · 88 commits to master since this release

  • Fixed: Bug in the LAPACK wrappers ssysv and dsysv in the OpenBLAS backend that caused incorrect solutions for linear systems described by a symmetric matrix (fixes #11, thank you @grek142)
  • Added: Unit tests covering the whole backend interface

@gbaydin released this Dec 6, 2015 · 111 commits to master since this release

  • Improved: Performance, thanks to faster Array2D.copy operations (thank you Don Syme @dsyme)
  • Improved: Significantly faster matrix transposition, using the extended BLAS operations cblas_?omatcopy provided by OpenBLAS
  • Improved: Performance, by disabling parts of the OpenBLAS backend that used System.Threading.Tasks and interfered with OpenBLAS multithreading. Pending further tests.
  • Updated: The Win64 binaries of OpenBLAS to version 0.2.15 (27-10-2015), which includes bug fixes and optimizations (see the OpenBLAS change log)
  • Fixed: Bugs in the reverse AD operations Sub_D_DV and Sub_D_DM (fixes #8, thank you @mrakgr)
  • Fixed: Bug in the benchmarking module that caused incorrect reporting of the overhead factor of the AD grad operation
  • Improved: Documentation updates

@gbaydin released this Oct 13, 2015 · 124 commits to master since this release

  • Improved: Overall performance, through parallelization and memory reshaping in the OpenBLAS backend
  • Fixed: Bugs in the reverse AD operations Make_DM_ofDV and DV.Append
  • Fixed: Bugs in the DM operations map2Cols, map2Rows, mapi2Cols, mapi2Rows
  • Added: New operation primalDeep, giving the deepest primal value of a nested AD value
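A hedged sketch of where primalDeep helps, assuming the `diff` operator and `D` constructor of the 0.7 API and that primalDeep is exposed as a module-level function (both are assumptions, not confirmed by the notes):

```fsharp
open DiffSharp.AD.Float64

// Nested differentiation produces values whose primal is itself an AD value
let d = diff (fun x -> x * diff (fun y -> x * y) (D 2.)) (D 3.)

// primalDeep strips every level of nesting to reach the innermost
// scalar, whereas the ordinary primal strips only one level.
printfn "%A" (primalDeep d)
```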

@gbaydin released this Oct 6, 2015 · 136 commits to master since this release

  • Fixed: Bug in DM.Min
  • Added: Mean, Variance, StandardDev, Normalize, and Standardize functions
  • Added: Support for visualizations with a configurable Unicode/ASCII palette and contrast
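A brief sketch of the new statistics functions, assuming they are exposed as static members on DV under the names listed above (the exact signatures and placement are assumptions):

```fsharp
open DiffSharp.AD.Float64

let v = toDV [1.; 2.; 3.; 4.]

// Named as in the release notes; assumed here to be DV static members
printfn "mean       = %A" (DV.Mean v)
printfn "variance   = %A" (DV.Variance v)
printfn "stddev     = %A" (DV.StandardDev v)
printfn "normalized = %A" (DV.Normalize v)
```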

@gbaydin released this Oct 4, 2015 · 143 commits to master since this release

  • Fixed: Bugs in the reverse AD operations Abs, Sign, Floor, Ceil, Round, DV.AddSubVector, Make_DM_ofDs, Mul_Out_V_V, and Mul_DVCons_D
  • Added: New methods DV.isEmpty and DM.isEmpty

@gbaydin released this Sep 29, 2015 · 154 commits to master since this release


Version 0.7.0 is a reimplementation of the library with support for linear algebra primitives, BLAS/LAPACK, 32- and 64-bit floating point precision, and different CPU/GPU backends.

  • Changed: Namespaces have been reorganized and simplified. This is a breaking change. There is now just one AD implementation, under DiffSharp.AD (with DiffSharp.AD.Float32 and DiffSharp.AD.Float64 variants, see below). This internally makes use of forward or reverse AD as needed.
  • Added: Support for 32 bit (single precision) and 64 bit (double precision) floating point operations. All modules have Float32 and Float64 versions providing the same functionality with the specified precision. 32 bit floating point operations are significantly faster (as much as twice as fast) on many current systems.
  • Added: DiffSharp now uses the OpenBLAS library by default for linear algebra operations. The AD operations with the types D for scalars, DV for vectors, and DM for matrices use the underlying linear algebra backend for highly optimized native BLAS and LAPACK operations. For non-BLAS operations (such as Hadamard products and matrix transpose), parallel implementations in managed code are used. All operations with the D, DV, and DM types support forward and reverse nested AD up to any level. This also paves the way for GPU backends (CUDA/CuBLAS) which will be introduced in following releases. Please see the documentation and API reference for information about how to use the D, DV, and DM types. (Deprecated: The FsAlg generic linear algebra library and the Vector<'T> and Matrix<'T> types are no longer used.)
  • Fixed: Reverse mode AD has been reimplemented in a tail-recursive way, improving performance and preventing the StackOverflow exceptions encountered in previous versions.
  • Changed: The library now uses F# 4.0 (FSharp.Core).
  • Changed: The library is now 64 bit only, meaning that users should set "x64" as the platform target for all build configurations.
  • Fixed: Overall bug fixes.
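The reorganized 0.7.0 API described above can be sketched as follows; a minimal example assuming the `grad` operator and `toDV` constructor of this release (names taken from the documentation referenced above, details otherwise assumed):

```fsharp
// Pick a precision by opening the matching module;
// DiffSharp.AD.Float32 offers the same API in single precision.
open DiffSharp.AD.Float64

// A scalar-valued function of a vector, written against the D/DV types
let f (x: DV) = sin (x.[0] * x.[1]) + exp x.[1]

// grad uses the appropriate AD mode internally, as noted above
let g = grad f (toDV [1.0; 2.0])
printfn "gradient = %A" g
```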

@gbaydin released this Jul 18, 2015 · 193 commits to master since this release

  • Fixed: Bug in the DiffSharp.AD subtraction operation between D and DF