Commit c2bc2b1 — updated and documented
ChrisRackauckas committed Feb 15, 2018 · 1 parent 86f03db

Showing 3 changed files with 168 additions and 6 deletions.
166 changes: 164 additions & 2 deletions README.md
@@ -8,6 +8,168 @@
[![codecov.io](http://codecov.io/github/ChrisRackauckas/DiffEqDiffTools.jl/coverage.svg?branch=master)](http://codecov.io/github/ChrisRackauckas/DiffEqDiffTools.jl?branch=master)
[![DiffEqDiffTools](http://pkg.julialang.org/badges/DiffEqDiffTools_0.6.svg)](http://pkg.julialang.org/?pkg=DiffEqDiffTools)

DiffEqDiffTools.jl is a component package in the DifferentialEquations ecosystem.
It holds the common tools for taking derivatives, Jacobians, etc. and utilizing
the traits from the ParameterizedFunctions when possible for increasing the
speed of calculations. Users interested in using this functionality should check
out [DifferentialEquations.jl](https://github.com/JuliaDiffEq/DifferentialEquations.jl/blob/master/src/DifferentialEquations.jl).

Note: This is currently a work in progress. Regardless, it will operate behind
the scenes. If you're interested in helping develop it, please contact
Chris Rackauckas.

## General Structure

The general structure of the library is as follows. You can call the differencing
functions directly and this will allocate a temporary cache to solve the problem
with. To make this non-allocating for repeat calls, you can call the cache
construction functions. Each cache construction function has two possibilities:
one version where you give it prototype arrays and it generates the cache
variables, and one fully non-allocating version where you give it the cache
variables. This is summarized as:

- Just want a quick derivative? Calculating once? Call the differencing function.
- Going to calculate the derivative multiple times but don't have cache arrays
around? Use the allocating cache and then pass this into the differencing
function (this will allocate only in the one cache construction).
- Have cache variables around from your own algorithm and want to re-use them
in the differencing functions? Use the non-allocating cache construction
and pass the cache to the differencing function.
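As a rough sketch of the three patterns above (a hedged example, not an authoritative one: `f` is a hypothetical elementwise function, and exact signatures are documented in the sections below):

```julia
using DiffEqDiffTools

f(x) = sin.(x)          # hypothetical elementwise function
x = rand(5)

# 1. One-off derivative: allocates a temporary cache internally.
df = finite_difference_derivative(f, x)

# 2. Repeated calls: build the cache once from prototype arrays
#    (allocates only here), then reuse it.
cache = DerivativeCache(x)
df2 = similar(x)
finite_difference_derivative!(df2, f, x, cache)

# 3. Fully non-allocating: hand your own work arrays to the cache constructor.
epsilon = similar(x)
cache2 = DerivativeCache(x, nothing, epsilon)
finite_difference_derivative!(df2, f, x, cache2)
```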

## Scalar Derivatives

```julia
finite_difference_derivative(f, x::T, fdtype::Type{T1}=Val{:central},
                             returntype::Type{T2}=eltype(x), f_x::Union{Void,T}=nothing)
```
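For instance (a sketch; `sin` stands in for any scalar function of interest):

```julia
using DiffEqDiffTools

# Central-difference approximation to d/dx sin(x) at x = 1.0 (≈ cos(1.0)).
d_central = finite_difference_derivative(sin, 1.0)

# Forward differencing instead: pass the fdtype as a Val type.
d_forward = finite_difference_derivative(sin, 1.0, Val{:forward})
```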

## Multi-Point Derivatives

### Differencing Calls

```julia
# Cache-less but non-allocating if `fx` and `epsilon` are supplied
# `fx` must be `f(x)`
finite_difference_derivative(
    f,
    x          :: AbstractArray{<:Number},
    fdtype     :: Type{T1} = Val{:central},
    returntype :: Type{T2} = eltype(x),      # return type of f
    fx         :: Union{Void,AbstractArray{<:Number}} = nothing,
    epsilon    :: Union{Void,AbstractArray{<:Real}} = nothing)

finite_difference_derivative!(
    df         :: AbstractArray{<:Number},
    f,
    x          :: AbstractArray{<:Number},
    fdtype     :: Type{T1} = Val{:central},
    returntype :: Type{T2} = eltype(x),
    fx         :: Union{Void,AbstractArray{<:Number}} = nothing,
    epsilon    :: Union{Void,AbstractArray{<:Real}} = nothing)

# Cached
finite_difference_derivative!(df::AbstractArray{<:Number}, f,
                              x::AbstractArray{<:Number},
                              cache::DerivativeCache{T1,T2,fdtype,returntype})
```
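A sketch of the cache-less-but-non-allocating form (hypothetical `f`; supplying `fx` and `epsilon` avoids internal allocation, and forward differencing is the case that actually consumes `fx`):

```julia
using DiffEqDiffTools

f(x) = exp.(x)
x = rand(4)
fx = f(x)                  # must be the current f(x); used by forward differencing
epsilon = similar(x)
df = similar(x)

finite_difference_derivative!(df, f, x, Val{:forward}, eltype(x), fx, epsilon)
```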

### Allocating and Non-Allocating Constructor

```julia
DerivativeCache(
    x          :: AbstractArray{<:Number},
    fx         :: Union{Void,AbstractArray{<:Number}} = nothing,
    epsilon    :: Union{Void,AbstractArray{<:Real}} = nothing,
    fdtype     :: Type{T1} = Val{:central},
    returntype :: Type{T2} = eltype(x))
```

This allocates either `fx` or `epsilon` if they are `nothing` and are needed.
`fx` is the current call of `f(x)`; it is required for forward differencing
but not otherwise.
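For example, the setup cost can be paid once and amortized over repeated calls (a sketch with a hypothetical `f`):

```julia
using DiffEqDiffTools

f(x) = x.^2
x = rand(100)
df = similar(x)

# Allocates fx/epsilon storage as needed for forward differencing, once.
cache = DerivativeCache(x, nothing, nothing, Val{:forward})

for iter in 1:1000
    # ... your algorithm updates x here ...
    finite_difference_derivative!(df, f, x, cache)  # no allocations per call
end
```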

## Gradients

### Differencing Calls

```julia
# Cache-less
finite_difference_gradient(f, x, fdtype::Type{T1}=Val{:central},
                           returntype::Type{T2}=eltype(x),
                           inplace::Type{Val{T3}}=Val{true})
finite_difference_gradient!(df, f, x, fdtype::Type{T1}=Val{:central},
                            returntype::Type{T2}=eltype(df),
                            inplace::Type{Val{T3}}=Val{true})

# Cached
finite_difference_gradient!(df::AbstractArray{<:Number}, f,
                            x::AbstractArray{<:Number},
                            cache::GradientCache)
```
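For a scalar-valued function of an array argument, the calls look roughly like this (a sketch under the default settings, not an authoritative example):

```julia
using DiffEqDiffTools

f(x) = sum(abs2, x)           # hypothetical scalar-valued function of a vector
x = rand(3)

g = finite_difference_gradient(f, x)   # allocating, one-off
finite_difference_gradient!(g, f, x)   # in-place output, still cache-less
```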

### Allocating Cache Constructor

```julia
GradientCache(
    df         :: Union{<:Number,AbstractArray{<:Number}},
    x          :: Union{<:Number,AbstractArray{<:Number}},
    fdtype     :: Type{T1} = Val{:central},
    returntype :: Type{T2} = eltype(df),
    inplace    :: Type{Val{T3}} = Val{true})
```
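For instance (sketch; `f` is hypothetical):

```julia
using DiffEqDiffTools

f(x) = sum(abs2, x)
x = rand(3)
g = similar(x)

cache = GradientCache(g, x, Val{:forward})   # allocates internal work arrays
finite_difference_gradient!(g, f, x, cache)  # subsequent calls reuse them
```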

### Non-Allocating Cache Constructor

```julia
GradientCache(
    fx         :: Union{Void,<:Number,AbstractArray{<:Number}} = nothing,
    c1         :: Union{Void,AbstractArray{<:Number}} = nothing,
    c2         :: Union{Void,AbstractArray{<:Number}} = nothing,
    fdtype     :: Type{T1} = Val{:central},
    returntype :: Type{T2} = eltype(fx),
    inplace    :: Type{Val{T3}} = Val{true})
```

Note that here `fx` is a cached function call of `f`. If you provide `fx`, it
will be used in the forward-differencing method to skip a function call. It is
your responsibility to update `cache.fx` before each call to
`finite_difference_gradient!`. A good use case: if your algorithm already keeps
a cache array for the output of `f(x)`, you can alias it into the differencing
algorithm here.

## Jacobians

### Differencing Calls

```julia
# Cache-less
finite_difference_jacobian(f, x::AbstractArray{<:Number},
                           fdtype     :: Type{T1} = Val{:central},
                           returntype :: Type{T2} = eltype(x),
                           inplace    :: Type{Val{T3}} = Val{true})

# Cached
finite_difference_jacobian(f, x, cache::JacobianCache)
finite_difference_jacobian!(J::AbstractMatrix{<:Number}, f,
                            x::AbstractArray{<:Number}, cache::JacobianCache)
```
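A sketch with an out-of-place vector function (note `Val{false}` for `inplace` here, since this hypothetical `f` returns a new array rather than writing into one):

```julia
using DiffEqDiffTools

f(x) = [x[1]^2 + x[2], 3 * x[2]]   # hypothetical R^2 -> R^2 map
x = rand(2)

J = finite_difference_jacobian(f, x, Val{:central}, eltype(x), Val{false})
```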

### Allocating Cache Constructor

```julia
JacobianCache(
    x,
    fdtype     :: Type{T1} = Val{:central},
    returntype :: Type{T2} = eltype(x),
    inplace    :: Type{Val{T3}} = Val{true})
```

This assumes the Jacobian is square.
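Usage might look like this (a sketch; `f!` is a hypothetical in-place function with the convention `f!(fx, x)`, matching the default `inplace = Val{true}`):

```julia
using DiffEqDiffTools

function f!(fx, x)                # hypothetical in-place map
    fx[1] = x[1]^2 + x[2]
    fx[2] = 3 * x[2]
end

x = rand(2)
cache = JacobianCache(x)          # square Jacobian assumed
J = finite_difference_jacobian(f!, x, cache)
```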

### Non-Allocating Cache Constructor

```julia
JacobianCache(
    x1,
    fx,
    fx1,
    fdtype     :: Type{T1} = Val{:central},
    returntype :: Type{T2} = eltype(fx),
    inplace    :: Type{Val{T3}} = Val{true})
```
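This variant covers non-square Jacobians and lets you supply buffers you already own; roughly (a sketch with a hypothetical in-place `f!`):

```julia
using DiffEqDiffTools

function f!(fx, x)                 # hypothetical R^3 -> R^2 in-place map
    fx[1] = x[1] + x[2]
    fx[2] = x[2] * x[3]
end

x   = rand(3)                      # input of length 3
fx  = zeros(2)                     # output of length 2 (non-square Jacobian)
x1  = similar(x)
fx1 = similar(fx)

cache = JacobianCache(x1, fx, fx1)
J = zeros(length(fx), length(x))
finite_difference_jacobian!(J, f!, x, cache)
```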
5 changes: 2 additions & 3 deletions src/gradients.jl
@@ -50,9 +50,9 @@ function GradientCache(
end

function GradientCache(
c1 :: Union{Void,AbstractArray{<:Number}},
c2 :: Union{Void,AbstractArray{<:Number}},
fx :: Union{Void,<:Number,AbstractArray{<:Number}} = nothing,
c1 :: Union{Void,AbstractArray{<:Number}} = nothing,
c2 :: Union{Void,AbstractArray{<:Number}} = nothing,
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(df),
inplace :: Type{Val{T3}} = Val{true}) where {T1,T2,T3}
@@ -249,7 +249,6 @@ function finite_difference_gradient!(df::StridedVector{<:Number}, f, x::StridedV
fx0 = f(x)
x[i] += epsilon
dfi = (f(x) - fx0) / epsilon
@show dfi
x[i] = x_old
end

3 changes: 2 additions & 1 deletion src/jacobians.jl
@@ -99,7 +99,8 @@ function finite_difference_jacobian(f,x,cache::JacobianCache)
J
end

function finite_difference_jacobian!(J::AbstractMatrix{<:Number}, f,x::AbstractArray{<:Number},
function finite_difference_jacobian!(J::AbstractMatrix{<:Number},
f,x::AbstractArray{<:Number},
cache::JacobianCache{T1,T2,T3,fdtype,returntype,inplace}) where {T1,T2,T3,fdtype,returntype,inplace}

m, n = size(J)
