
Commit

Merge 69dc835 into 28bab4e
jrevels committed Aug 12, 2015
2 parents 28bab4e + 69dc835 commit 4d6ca84
Showing 23 changed files with 1,298 additions and 863 deletions.
14 changes: 7 additions & 7 deletions README.md
@@ -1,12 +1,12 @@
- [![Build Status](https://travis-ci.org/JuliaDiff/ForwardDiff.jl.svg?branch=nduals-refactor)](https://travis-ci.org/JuliaDiff/ForwardDiff.jl) [![Coverage Status](https://coveralls.io/repos/JuliaDiff/ForwardDiff.jl/badge.svg?branch=nduals-refactor&service=github)](https://coveralls.io/github/JuliaDiff/ForwardDiff.jl?branch=nduals-refactor)
+ [![Build Status](https://travis-ci.org/JuliaDiff/ForwardDiff.jl.svg?branch=api-refactor)](https://travis-ci.org/JuliaDiff/ForwardDiff.jl) [![Coverage Status](https://coveralls.io/repos/JuliaDiff/ForwardDiff.jl/badge.svg?branch=api-refactor&service=github)](https://coveralls.io/github/JuliaDiff/ForwardDiff.jl?branch=api-refactor)

# ForwardDiff.jl

The `ForwardDiff` package provides a type-based implementation of forward mode automatic differentiation (FAD) in Julia. [The wikipedia page on automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is a useful resource for learning about the advantages of FAD techniques over other common differentiation methods (such as [finite differencing](https://en.wikipedia.org/wiki/Numerical_differentiation)).

## What can I do with this package?

- This package contains methods to efficiently take derivatives, Jacobians, and Hessians of native Julia functions (or any callable object, really). While performance varies depending on the functions you evaluate, this package generally outperforms non-AD methods in memory usage, speed, and accuracy.
+ This package contains methods to take derivatives, gradients, Jacobians, and Hessians of native Julia functions (or any callable object, really). While performance varies depending on the functions you evaluate, this package generally outperforms non-AD methods in memory usage, speed, and accuracy.

A third-order generalization of the Hessian is also implemented (see `tensor` below).

@@ -15,7 +15,7 @@
## Usage

---
- #### Derivative of `f: R → R` or `f: R → Rᵐ¹ × Rᵐ² × ⋯ × Rᵐⁱ`
+ #### Derivative of `f(x::Number) → Number` or `f(x::Number) → Array`
---

- **`derivative!(output::Array, f, x::Number)`**
@@ -31,7 +31,7 @@
Return the function `f'`. If `mutates=false`, then the returned function has the form `derivf(x) -> derivative(f, x)`. If `mutates=true`, then the returned function has the form `derivf!(output, x) -> derivative!(output, f, x)`.
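
For illustration, a minimal sketch of the calls above (`f`, `g`, and the commented values are examples, not part of this commit):

```julia
using ForwardDiff

f(x) = sin(x)^2
derivative(f, 1.0)        # ≈ 2 * sin(1.0) * cos(1.0)

g(x) = [x, x^2, x^3]      # scalar input, array output
out = zeros(3)
derivative!(out, g, 2.0)  # out ≈ [1.0, 4.0, 12.0]
```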

---
- #### Gradient of `f: Rⁿ → R`
+ #### Gradient of `f(x::Vector) → Number`
---

- **`gradient!(output::Vector, f, x::Vector)`**
@@ -47,7 +47,7 @@
Return the function `∇f`. If `mutates=false`, then the returned function has the form `gradf(x) -> gradient(f, x)`. If `mutates=true`, then the returned function has the form `gradf!(output, x) -> gradient!(output, f, x)`. By default, `mutates` is set to `false`. `ForwardDiff` must be used as a qualifier when calling `gradient` to avoid conflict with `Base.gradient`.
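
For illustration, a minimal sketch of the gradient calls (the function and values are examples only); note the `ForwardDiff` qualifier:

```julia
using ForwardDiff

f(x) = sum(x .* x)                        # x₁² + x₂² + x₃²
ForwardDiff.gradient(f, [1.0, 2.0, 3.0])  # ≈ [2.0, 4.0, 6.0]
```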

---
- #### Jacobian of `f: Rⁿ → Rᵐ`
+ #### Jacobian of `f(x::Vector) → Vector`
---

- **`jacobian!(output::Matrix, f, x::Vector)`**
@@ -63,7 +63,7 @@
Return the function `J(f)`. If `mutates=false`, then the returned function has the form `jacf(x) -> jacobian(f, x)`. If `mutates=true`, then the returned function has the form `jacf!(output, x) -> jacobian!(output, f, x)`. By default, `mutates` is set to `false`.
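
For illustration, a minimal sketch (the function and values are examples only):

```julia
using ForwardDiff

f(x) = [x[1] * x[2], sin(x[1])]
jacobian(f, [2.0, 1.0])  # ≈ [1.0 2.0; cos(2.0) 0.0]
```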

---
- #### Hessian of `f: Rⁿ → R`
+ #### Hessian of `f(x::Vector) → Number`
---

- **`hessian!(output::Matrix, f, x::Vector)`**
@@ -79,7 +79,7 @@
Return the function `H(f)`. If `mutates=false`, then the returned function has the form `hessf(x) -> hessian(f, x)`. If `mutates=true`, then the returned function has the form `hessf!(output, x) -> hessian!(output, f, x)`. By default, `mutates` is set to `false`.
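
For illustration, a minimal sketch (the function and values are examples only):

```julia
using ForwardDiff

f(x) = x[1]^2 + 3 * x[1] * x[2]
hessian(f, [1.0, 2.0])  # ≈ [2.0 3.0; 3.0 0.0]
```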

---
#### Third-order Taylor series term of `f: Rⁿ → R`
#### Third-order Taylor series term of `f(x::Vector) → Number`
---

[This Math StackExchange post](http://math.stackexchange.com/questions/556951/third-order-term-in-taylor-series) actually has an answer that explains this term fairly clearly.
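
For illustration, a hedged sketch by analogy with the methods above, assuming `tensor(f, x)` follows the same calling convention and returns an `n×n×n` array of third-order partial derivatives:

```julia
using ForwardDiff

f(x) = x[1]^3 + x[1] * x[2]^2
T = tensor(f, [1.0, 2.0])
T[1, 1, 1]  # ∂³f/∂x₁³ ≈ 6.0
```
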
17 changes: 10 additions & 7 deletions src/ForwardDiff.jl
@@ -35,12 +35,11 @@ module ForwardDiff
# @eval import Base.$(fsym);
# end

include("ForwardDiffNum.jl")
include("GradientNum.jl")
include("HessianNum.jl")
include("TensorNum.jl")
include("fad_api.jl")
include("deprecated.jl")
include("ForwardDiffNumber.jl")
include("GradientNumber.jl")
include("HessianNumber.jl")
include("TensorNumber.jl")
include("fad_api/fad_api.jl")

export derivative!,
derivative,
@@ -50,6 +49,10 @@ module ForwardDiff
hessian!,
hessian,
tensor!,
-        tensor
+        tensor,
+        GradientCache,
+        JacobianCache,
+        HessianCache,
+        TensorCache

end # module ForwardDiff
73 changes: 0 additions & 73 deletions src/ForwardDiffNum.jl

This file was deleted.

73 changes: 73 additions & 0 deletions src/ForwardDiffNumber.jl
@@ -0,0 +1,73 @@
abstract ForwardDiffNumber{N,T<:Number,C} <: Number

# Subtypes F<:ForwardDiffNumber should define:
# npartials(::Type{F}) --> N from ForwardDiffNumber{N,T,C}
# eltype(::Type{F}) --> T from ForwardDiffNumber{N,T,C}
# value(n::F) --> the value of n
# grad(n::F) --> a container corresponding to all first order partials
# hess(n::F) --> a container corresponding to the lower
# triangular half of the symmetric
# Hessian (including the diagonal)
# tens(n::F) --> a container corresponding to the lower
# tetrahedral half of the symmetric
# Tensor (including the diagonal)
# isconstant(n::F) --> returns true if all partials stored by n are zero
#
#...as well as:
# ==(a::F, b::F)
# isequal(a::F, b::F)
# zero(::Type{F})
# one(::Type{F})
# rand(::Type{F})
# hash(n::F)
# read(io::IO, ::Type{F})
# write(io::IO, n::F)
# conversion/promotion rules
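
# Illustrative sketch (hypothetical; not part of this file): a minimal
# one-partial subtype showing the shape of the interface above, with the
# Hessian/Tensor accessors, conversion/promotion rules, etc. omitted.
immutable ExampleNumber{T<:Number} <: ForwardDiffNumber{1,T,NTuple{1,T}}
    value::T
    partials::NTuple{1,T}
end
value(n::ExampleNumber) = n.value
grad(n::ExampleNumber) = n.partials
isconstant(n::ExampleNumber) = grad(n, 1) == zero(eltype(n))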

##############################
# Utility/Accessor Functions #
##############################
halfhesslen(n) = div(n*(n+1),2) # correct length(hess(::ForwardDiffNumber))
halftenslen(n) = div(n*(n+1)*(n+2),6) # correct length(tens(::ForwardDiffNumber))
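# e.g. for N = 3 partials (illustrative): halfhesslen(3) == 6 lower-triangular
# Hessian entries, and halftenslen(3) == 10 lower-tetrahedral Tensor entries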

switch_eltype{T,S}(::Type{Vector{T}}, ::Type{S}) = Vector{S}
switch_eltype{N,T,S}(::Type{NTuple{N,T}}, ::Type{S}) = NTuple{N,S}
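# e.g. (illustrative): switch_eltype(Vector{Float64}, Int) == Vector{Int}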

grad(n::ForwardDiffNumber, i) = grad(n)[i]
hess(n::ForwardDiffNumber, i) = hess(n)[i]
tens(n::ForwardDiffNumber, i) = tens(n)[i]

npartials{N}(::ForwardDiffNumber{N}) = N
eltype{N,T}(::ForwardDiffNumber{N,T}) = T

npartials{N,T,C}(::Type{ForwardDiffNumber{N,T,C}}) = N
eltype{N,T,C}(::Type{ForwardDiffNumber{N,T,C}}) = T

zero(n::ForwardDiffNumber) = zero(typeof(n))
one(n::ForwardDiffNumber) = one(typeof(n))

==(n::ForwardDiffNumber, x::Real) = isconstant(n) && (value(n) == x)
==(x::Real, n::ForwardDiffNumber) = ==(n, x)

isequal(n::ForwardDiffNumber, x::Real) = isconstant(n) && isequal(value(n), x)
isequal(x::Real, n::ForwardDiffNumber) = isequal(n, x)

isless(a::ForwardDiffNumber, b::ForwardDiffNumber) = value(a) < value(b)
isless(x::Real, n::ForwardDiffNumber) = x < value(n)
isless(n::ForwardDiffNumber, x::Real) = value(n) < x

copy(n::ForwardDiffNumber) = n # assumes all types of ForwardDiffNumbers are immutable

eps(n::ForwardDiffNumber) = eps(value(n))
eps{F<:ForwardDiffNumber}(::Type{F}) = eps(eltype(F))

isnan(n::ForwardDiffNumber) = isnan(value(n))
isfinite(n::ForwardDiffNumber) = isfinite(value(n))
isreal(n::ForwardDiffNumber) = isconstant(n)

##################
# Math Functions #
##################
conj(n::ForwardDiffNumber) = n
transpose(n::ForwardDiffNumber) = n
ctranspose(n::ForwardDiffNumber) = n
114 changes: 0 additions & 114 deletions src/GradientNum.jl

This file was deleted.
