Commits on Jan 16, 2016
  1. allow transformers 0.6

    committed
Commits on Jan 7, 2016
  1. 4.3.2 CHANGELOG

    committed
Commits on Nov 24, 2015
  1. Merge pull request #51 from expipiplus1/no-eq

    committed
    [WIP] Implement `NoEq` variants of functions in Halley
  2. @expipiplus1
  3. @expipiplus1

    Implement `NoEq` variants of functions in Halley

    expipiplus1 committed
    Bump version number to 4.3.2 because of new functionality
    
    These variants don't require an Eq instance on the type being
    operated on, so they return an infinite list of results instead.
    
    The existing functions have been modified to call the more general
    NoEq variant and truncate the resulting list with the new function
    takeWhileDifferent.
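A minimal sketch of what such a `takeWhileDifferent` helper could look like (a hypothetical reconstruction, not necessarily the package's actual definition): it consumes the list until two consecutive elements coincide, which truncates a convergent infinite iteration at its fixed point.

```haskell
-- Hypothetical sketch of takeWhileDifferent: truncate a (possibly
-- infinite) list at the first pair of equal consecutive elements,
-- keeping the first element of that pair.
takeWhileDifferent :: Eq a => [a] -> [a]
takeWhileDifferent (x1:x2:xs)
  | x1 == x2  = [x1]
  | otherwise = x1 : takeWhileDifferent (x2:xs)
takeWhileDifferent xs = xs
```

For example, `takeWhileDifferent (iterate (\x -> (x + 2/x) / 2) 1)` would yield the Newton iterates for sqrt 2 up to the first repeated value, even though `iterate` produces an infinite list.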
Commits on Nov 23, 2015
  1. Merge pull request #52 from expipiplus1/whitespace

    committed
    Strip trailing whitespace
Commits on Nov 22, 2015
  1. @expipiplus1

    Strip trailing whitespace

    expipiplus1 committed
Commits on Nov 10, 2015
  1. Use maxViewWithKey, don't crash

    committed
  2. version bump, tested-with

    committed
  3. Merge a nice obvious-in-retrospect idea from Björn to use the existing Index type rather than a list of derivatives

    committed
    Better asymptotics for the use cases where Sparse is important in the first place.
  4. 4.3 version bump and CHANGELOG

    committed
  5. Bikeshed Björn's patch

    committed
  6. @sydow

    Improve the performance of Sparse mode

    sydow committed with
    This is much more difficult to test than the change to Tower.hs. It seems clear that, as expected, a great speedup is achieved for high-order derivatives, in particular with repeated differentiation with respect to the same variable. Some preliminary evidence:
    
    First interaction
    =============
    With old Sparse.hs:
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> let ex1 = apply (exp . sin . head) [1]
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> :set +s
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 20 0) ex1
    -2.906349611135101e11
    (1.84 secs, 645,437,648 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 21 0) ex1
    -4.712478629744504e12
    (3.63 secs, 1,266,199,656 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> let ex2 = apply (\[x] -> sin x * sin (2*x) + cos x * cos (2*x)) [1]
    (0.01 secs, 2,572,416 bytes)
    (1.89 secs, 637,093,112 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 20 0) ex2
    0.5403025150299072
    (6.26 secs, 2,542,121,208 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> let ex3 = apply (\[x,y] -> sin (x-y) * sin (x+y) + cos (x-y) * cos (x+y)) [1,1]  -- cos (2*y)
    (0.01 secs, 3,093,240 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 20 1) ex3
    -436361.5852792564
    (6.14 secs, 2,542,116,600 bytes)
    
    With new Sparse.hs:
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> let ex1 = apply (exp . sin . head) [1]
    (0.06 secs, 17,904,640 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 20 0) ex1
    -2.9063496111351e11
    (0.02 secs, 13,042,128 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 21 0) ex1
    -4.712478629744501e12
    (0.01 secs, 5,192,664 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> let ex2 = apply (\[x] -> sin x * sin (2*x) + cos x * cos (2*x)) [1]
    (0.01 secs, 2,576,568 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 20 0) ex2
    0.5403022766113281
    (0.01 secs, 12,965,864 bytes)
    Prelude Numeric.AD.Internal.Sparse> let ex3 = apply (\[x,y] -> sin (x-y) * sin (x+y) + cos (x-y) * cos (x+y)) [1,1]  -- cos (2*y)
    (0.01 secs, 3,093,072 bytes)
    Prelude Numeric.AD.Internal.Sparse> partial (replicate 20 1) ex3
    -436361.58527925645
    (0.01 secs, 14,993,200 bytes)
    
    The second interaction shows that with several variables, when differentiating with respect to all or many of them, the advantage is less pronounced:
    
    Second interaction
    ===============
    With old Sparse.hs:
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> let f [v,w,x,y,z] = exp (sin (v+w+x+y+z))
    (0.00 secs, 2,062,152 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> let ex4 = apply f [1,1,1,1,1]
    (0.00 secs, 1,540,760 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 20 0) ex4
    8.437780982083302e8
    (3.95 secs, 1,549,107,656 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 10 0 ++ replicate 10 1) ex4
    8.437780982083302e8
    (7.68 secs, 2,808,649,064 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial [0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4] ex4
    8.437780982083302e8
    (16.87 secs, 6,005,888,760 bytes)
    
    With new Sparse.hs:
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> let f [v,w,x,y,z] = exp (sin (v+w+x+y+z))
    (0.01 secs, 2,060,544 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> let ex4 = apply f [1,1,1,1,1]
    (0.01 secs, 1,544,736 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 20 0) ex4
    8.437780982083299e8
    (0.02 secs, 22,233,288 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial (replicate 10 0 ++ replicate 10 1) ex4
    8.437780982083298e8
    (0.15 secs, 137,411,096 bytes)
    Prelude Numeric.AD Numeric.AD.Internal.Sparse> partial [0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4] ex4
    8.437780982083298e8
    (2.10 secs, 2,187,741,944 bytes)
    
    It is entirely understandable that the advantage is not as great in the last case, with derivatives of order 4 with respect to five different variables. Here the derivative in the new version is a sum of 5^5 terms.
    
    I find it difficult to write a comprehensive test suite for the new version. In particular, one would want to make sure that the new version is not inferior in any common situation.
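The Index-based representation adopted in the merge above can be illustrated roughly as follows (an illustrative sketch; the names `Index`, `addToIndex`, and `order` are assumptions, and the package's actual definitions may differ): instead of carrying a length-n list of derivative orders, a multi-index stores only a count per variable, so repeated differentiation with respect to the same variable stays compact.

```haskell
import qualified Data.IntMap.Strict as IntMap

-- Illustrative sketch (names are assumptions, not the package's API):
-- a multi-index maps a variable number to how many times we have
-- differentiated with respect to it.
type Index = IntMap.IntMap Int

emptyIndex :: Index
emptyIndex = IntMap.empty

-- Record one more differentiation with respect to variable i.
addToIndex :: Int -> Index -> Index
addToIndex i = IntMap.insertWith (+) i 1

-- Total order of the multi-index.
order :: Index -> Int
order = sum . IntMap.elems
```

Under this representation, differentiating 20 times with respect to variable 0 is a single map entry `(0, 20)` rather than a 20-element list, which is why the speedup above is most dramatic for repeated differentiation with respect to one variable.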
Commits on Nov 5, 2015
  1. @glguy
Commits on Nov 3, 2015
  1. Merge pull request #48 from TomMD/feature/constrainedConvex

    committed
    Add Constrained Convex Optimization
  2. @TomMD

    Add Constrained Convex Optimization

    TomMD committed
    This tactic and its terminology are based on Boyd's book, chapter 11.3
    (logarithmic barrier function). The hard-coded constants are unmotivated
    and could either be exposed, if someone finds a suitable API, or
    adjusted, if someone has a principled method for choosing these values.
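The logarithmic barrier idea referenced above can be sketched as follows (a minimal illustration of Boyd §11.3, not this package's actual implementation; all names here are assumptions): inequality constraints g_i(x) <= 0 are folded into the objective as -log(-g_i(x)) penalty terms, scaled by a parameter t that grows over outer iterations so the barrier's influence shrinks away from the constraint boundary.

```haskell
-- Minimal sketch of a logarithmic barrier objective (names assumed):
-- minimize  t * f x - sum_i log (-(g_i x)),
-- which is only defined where every g_i x < 0 (strict feasibility).
barrierObjective
  :: Floating a
  => a          -- barrier parameter t (increased over outer iterations)
  -> (a -> a)   -- objective f to minimize
  -> [a -> a]   -- inequality constraints, each required to satisfy g x <= 0
  -> a          -- strictly feasible point x
  -> a
barrierObjective t f gs x = t * f x - sum [log (negate (g x)) | g <- gs]
```

The unconstrained minimizer of this smooth objective approaches the constrained optimum as t grows, which is why such tactics expose (or hard-code) constants governing the schedule for t.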
Commits on Nov 2, 2015
  1. hiding (sum)

    committed
  2. @sydow
Commits on Sep 23, 2015
Commits on Aug 27, 2015
  1. Merge pull request #47 from yairchu/patch-1

    committed
    Remove stale references to removed module Numeric.AD.Types in README.markdown
  2. @yairchu
Commits on Aug 14, 2015
  1. fix 7.10 builds

    committed
Commits on Aug 9, 2015
Commits on Jul 10, 2015
  1. We've removed the TODO

    committed
  2. remove the old TODO file

    committed
  3. disable -j2. If we can't update because of an old -j2 on cabal 1.16, scrub everything and start over

    committed