Create more docs examples and link from README (#114)
* Add README examples to docs

* Update README with docs links

* Don't execute examples when on Travis
anriseth committed Jun 2, 2018
1 parent f8819a9 commit ae37e05
Showing 8 changed files with 150 additions and 135 deletions.
141 changes: 14 additions & 127 deletions README.md
@@ -2,14 +2,15 @@

[![Build Status](https://travis-ci.org/JuliaNLSolvers/LineSearches.jl.svg?branch=master)](https://travis-ci.org/JuliaNLSolvers/LineSearches.jl)
[![Codecov branch](https://img.shields.io/codecov/c/github/JuliaNLSolvers/LineSearches.jl/master.svg?maxAge=2592000)](https://codecov.io/gh/JuliaNLSolvers/LineSearches.jl)
[![][docs-stable-img]][docs-stable-url]

## Description
This package provides an interface to line search algorithms implemented in Julia.
The code was originally written as part of [Optim](https://github.com/JuliaNLSolvers/Optim.jl),
but has now been separated out to its own package.

### Available line search algorithms
In Example 1 we show how to choose between the line search algorithms
In [the docs](https://julianlsolvers.github.io/LineSearches.jl/latest/examples/generated/optim_linesearch.html) we show how to choose between the line search algorithms
in `Optim`.
* `HagerZhang` (Taken from the Conjugate Gradient implementation
by Hager and Zhang, 2006)
@@ -20,7 +21,7 @@ in `Optim`.

### Available initial step length procedures
The package provides some procedures to calculate the initial step
length that is passed to the line search algorithm. See Example 2 for
length that is passed to the line search algorithm. See [the docs](https://julianlsolvers.github.io/LineSearches.jl/latest/examples/generated/optim_initialstep.html) for
its usage in `Optim`.
* `InitialPrevious` (Use the step length from the previous
optimization iteration)
@@ -32,134 +33,20 @@ its usage in `Optim`.
constant change in step length)


## Example 1
This example shows how to use `LineSearches` with `Optim`.
We solve the Rosenbrock problem with two different line search algorithms.

First, run `Newton` with the default line search algorithm:
```julia
using Optim, LineSearches
prob = Optim.UnconstrainedProblems.examples["Rosenbrock"]

algo_hz = Newton(linesearch = HagerZhang())
res_hz = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_hz)
```

This gives the result
``` julia
Results of Optimization Algorithm
* Algorithm: Newton's Method
* Starting Point: [0.0,0.0]
* Minimizer: [0.9999999999999994,0.9999999999999989]
* Minimum: 3.081488e-31
* Iterations: 14
* Convergence: true
* |x - x'| ≤ 1.0e-32: false
|x - x'| = 3.06e-09
* |f(x) - f(x')| ≤ 1.0e-32 |f(x)|: false
|f(x) - f(x')| = 3.03e+13 |f(x)|
* |g(x)| ≤ 1.0e-08: true
|g(x)| = 1.11e-15
* Stopped by an increasing objective: false
* Reached Maximum Number of Iterations: false
* Objective Calls: 44
* Gradient Calls: 44
* Hessian Calls: 14
```

Now we can try `Newton` with the cubic backtracking line search:
``` julia
algo_bt3 = Newton(linesearch = BackTracking(order=3))
res_bt3 = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_bt3)
```

This gives the following result, reducing the number of function and gradient calls:
``` julia
Results of Optimization Algorithm
* Algorithm: Newton's Method
* Starting Point: [0.0,0.0]
* Minimizer: [0.9999999959215587,0.9999999918223065]
* Minimum: 1.667699e-17
* Iterations: 14
* Convergence: true
* |x - x'| ≤ 1.0e-32: false
|x - x'| = 1.36e-05
* |f(x) - f(x')| ≤ 1.0e-32 |f(x)|: false
|f(x) - f(x')| = 1.21e+08 |f(x)|
* |g(x)| ≤ 1.0e-08: true
|g(x)| = 4.16e-09
* Stopped by an increasing objective: false
* Reached Maximum Number of Iterations: false
* Objective Calls: 19
* Gradient Calls: 15
* Hessian Calls: 14
```

## Example 2
This example shows how to use the initial step length procedures with `Optim`.
We solve the Rosenbrock problem with two different procedures.

First, run `Newton` with the (default) initial guess and line search procedures.
```julia
using Optim, LineSearches
prob = Optim.UnconstrainedProblems.examples["Rosenbrock"]

algo_st = Newton(alphaguess = InitialStatic(), linesearch = HagerZhang())
res_st = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_st)
```

This gives the result
``` julia
Results of Optimization Algorithm
* Algorithm: Newton's Method
* Starting Point: [0.0,0.0]
* Minimizer: [0.9999999999999994,0.9999999999999989]
* Minimum: 3.081488e-31
* Iterations: 14
* Convergence: true
* |x - x'| ≤ 1.0e-32: false
|x - x'| = 3.06e-09
* |f(x) - f(x')| ≤ 1.0e-32 |f(x)|: false
|f(x) - f(x')| = 3.03e+13 |f(x)|
* |g(x)| ≤ 1.0e-08: true
|g(x)| = 1.11e-15
* Stopped by an increasing objective: false
* Reached Maximum Number of Iterations: false
* Objective Calls: 44
* Gradient Calls: 44
* Hessian Calls: 14
```

We can now try with the initial step length guess from Hager and Zhang.
``` julia
algo_prev = Newton(alphaguess = InitialHagerZhang(α0=1.0), linesearch = HagerZhang())
res_prev = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_prev)
```

This gives the following result, reducing the number of function and gradient calls, but increasing the number of iterations.
``` julia
Results of Optimization Algorithm
* Algorithm: Newton's Method
* Starting Point: [0.0,0.0]
* Minimizer: [0.9999999974436653,0.9999999948855858]
* Minimum: 6.535152e-18
* Iterations: 15
* Convergence: true
* |x - x'| ≤ 1.0e-32: false
|x - x'| = 1.09e-05
* |f(x) - f(x')| ≤ 1.0e-32 |f(x)|: false
|f(x) - f(x')| = 8.61e+08 |f(x)|
* |g(x)| ≤ 1.0e-08: true
|g(x)| = 4.41e-09
* Stopped by an increasing objective: false
* Reached Maximum Number of Iterations: false
* Objective Calls: 36
* Gradient Calls: 21
* Hessian Calls: 15
```
## Documentation
For more details and options, see the documentation
- [STABLE][docs-stable-url] — most recently tagged version of the documentation.
- [LATEST][docs-latest-url] — in-development version of the documentation.


## References
- W. W. Hager and H. Zhang (2006) "Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent." ACM Transactions on Mathematical Software 32: 113-137.
- Moré, Jorge J., and David J. Thuente. "Line search algorithms with guaranteed sufficient decrease." ACM Transactions on Mathematical Software (TOMS) 20.3 (1994): 286-307.
- Nocedal, Jorge, and Stephen Wright. "Numerical optimization." Springer Science & Business Media, 2006.


[docs-latest-img]: https://img.shields.io/badge/docs-latest-blue.svg
[docs-latest-url]: https://julianlsolvers.github.io/LineSearches.jl/latest

[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://julianlsolvers.github.io/LineSearches.jl/stable
14 changes: 12 additions & 2 deletions docs/generate.jl
@@ -1,13 +1,23 @@
# generate examples
import Literate

# We shouldn't run the examples that require Optim in Travis/CI,
# because an update in LineSearches may be breaking with the
# most recently tagged Optim version.
if get(ENV, "CI", "") == "true"
ONLYSTATIC = ["optim_linesearch.jl", "optim_initialstep.jl"]
else
ONLYSTATIC = ["",]
end

EXAMPLEDIR = joinpath(@__DIR__, "src", "examples")
GENERATEDDIR = joinpath(@__DIR__, "src", "examples", "generated")
for example in filter!(r"\.jl$", readdir(EXAMPLEDIR))
input = abspath(joinpath(EXAMPLEDIR, example))
script = Literate.script(input, GENERATEDDIR)
code = strip(read(script, String))
mdpost(str) = replace(str, "@__CODE__" => code)
Literate.markdown(input, GENERATEDDIR, postprocess = mdpost)
Literate.notebook(input, GENERATEDDIR, execute = true)
Literate.markdown(input, GENERATEDDIR, postprocess = mdpost,
documenter = !(example in ONLYSTATIC))
Literate.notebook(input, GENERATEDDIR, execute = !(example in ONLYSTATIC))
end
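Because the hunk above gates execution on the `CI` environment variable, the Optim-dependent examples are only fully executed during a local docs build. A minimal sketch of such a local run follows; the invocation is an assumed workflow for illustration, not part of this commit, and relies on `docs/make.jl` including `generate.jl` (shown in the next file):

```julia
# Sketch of a local docs build (assumed workflow, not part of this commit).
# With the CI environment variable unset, ONLYSTATIC stays empty, so Literate
# also executes the Optim-based examples when generating markdown and notebooks.
delete!(ENV, "CI")                    # ensure the CI guard above is not triggered
include(joinpath("docs", "make.jl"))  # run from the repository root; make.jl include()s generate.jl
```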
2 changes: 1 addition & 1 deletion docs/make.jl
@@ -4,7 +4,7 @@ using Documenter, LineSearches
include("generate.jl")

GENERATEDEXAMPLES = [joinpath("examples", "generated", f) for f in (
"customoptimizer.md",)]
"customoptimizer.md", "optim_linesearch.md", "optim_initialstep.md")]

# Build documentation.
makedocs(
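The diff is collapsed after `makedocs(`. For orientation, here is a minimal sketch of how a generated-examples list like `GENERATEDEXAMPLES` is typically passed to Documenter's `makedocs`; the `sitename` and page layout are illustrative assumptions, not the repository's actual configuration:

```julia
using Documenter, LineSearches

include("generate.jl")  # as in the diff above, regenerates the Literate examples

GENERATEDEXAMPLES = [joinpath("examples", "generated", f) for f in (
    "customoptimizer.md", "optim_linesearch.md", "optim_initialstep.md")]

# Hypothetical makedocs call; the real one is collapsed in this diff.
makedocs(
    modules  = [LineSearches],
    sitename = "LineSearches.jl",         # assumed site name
    pages = [
        "Home" => "index.md",
        "Examples" => GENERATEDEXAMPLES,  # generated pages appear in the navigation
    ],
)
```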
2 changes: 1 addition & 1 deletion docs/src/examples/customoptimizer.jl
@@ -107,7 +107,7 @@ function fg!(gvec, x)
end


# We can now use `gdoptimize` with `BackTracking to optimize the Rosenbrock function
# We can now use `gdoptimize` with `BackTracking` to optimize the Rosenbrock function
# from a given initial condition `x0`.

x0 = [-1., 1.0]
40 changes: 40 additions & 0 deletions docs/src/examples/optim_initialstep.jl
@@ -0,0 +1,40 @@
# # Optim initial step length guess
#
#src TODO: Find a way to run these with Literate when deploying via Travis
#src TODO: This file must currently be run locally and not on CI, and then
#src TODO: the md file must be copied over to the correct directory.
#src TODO: The reason is that there may be breaking changes between Optim and LineSearches,
#src TODO: so we don't want that to mess up JuliaCIBot
#-
#-
#md # !!! tip
#md # This example is also available as a Jupyter notebook:
#md # [`optim_initialstep.ipynb`](@__NBVIEWER_ROOT_URL__examples/generated/optim_initialstep.ipynb)
#-
#
# This example shows how to use the initial step length procedures
# with [Optim](https://github.com/JuliaNLSolvers/Optim.jl). We solve
# the Rosenbrock problem with two different procedures.
#
# First, run `Newton` with the (default) initial guess and line search procedures.
using Optim, LineSearches
import OptimTestProblems.MultivariateProblems
UP = MultivariateProblems.UnconstrainedProblems
prob = UP.examples["Rosenbrock"]

algo_st = Newton(alphaguess = InitialStatic(), linesearch = HagerZhang())
res_st = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_st)


# We can now try with the initial step length guess from Hager and Zhang.
algo_hz = Newton(alphaguess = InitialHagerZhang(α0=1.0), linesearch = HagerZhang())
res_hz = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_hz)


# From the result we see that this has reduced the number of function and gradient calls, but increased the number of iterations.

## Test the results #src
using Base.Test #src
@test Optim.f_calls(res_hz) < Optim.f_calls(res_st) #src
@test Optim.g_calls(res_hz) < Optim.g_calls(res_st) #src
@test Optim.iterations(res_hz) > Optim.iterations(res_st) #src
38 changes: 38 additions & 0 deletions docs/src/examples/optim_linesearch.jl
@@ -0,0 +1,38 @@
# # Optim line search
#
#src TODO: Find a way to run these with Literate when deploying via Travis
#src TODO: This file must currently be run locally and not on CI, and then
#src TODO: the md file must be copied over to the correct directory.
#src TODO: The reason is that there may be breaking changes between Optim and LineSearches,
#src TODO: so we don't want that to mess up JuliaCIBot
#-
#md # !!! tip
#md # This example is also available as a Jupyter notebook:
#md # [`optim_linesearch.ipynb`](@__NBVIEWER_ROOT_URL__examples/generated/optim_linesearch.ipynb)
#-
#
# This example shows how to use `LineSearches` with
# [Optim](https://github.com/JuliaNLSolvers/Optim.jl). We solve the
# Rosenbrock problem with two different line search algorithms.
#
# First, run `Newton` with the default line search algorithm:

using Optim, LineSearches
import OptimTestProblems.MultivariateProblems
UP = MultivariateProblems.UnconstrainedProblems
prob = UP.examples["Rosenbrock"]

algo_hz = Newton(linesearch = HagerZhang())
res_hz = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_hz)

# Now we can try `Newton` with the cubic backtracking line search,
# which reduces the number of objective and gradient calls.

algo_bt3 = Newton(linesearch = BackTracking(order=3))
res_bt3 = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method=algo_bt3)


## Test the results #src
using Base.Test #src
@test Optim.f_calls(res_bt3) < Optim.f_calls(res_hz) #src
@test Optim.g_calls(res_bt3) < Optim.g_calls(res_hz) #src
35 changes: 32 additions & 3 deletions docs/src/index.md
@@ -7,11 +7,33 @@ DocTestSetup = :(using LineSearches)

## Introduction
`LineSearches` provides a collection of line search routines for
optimization and nonlinear solvers.
The package can be used on its own, but it also provides extra
supporting functionality for `Optim.jl` and `NLsolve.jl`.
optimization and nonlinear solvers. The package can be used on its
own, but it also provides extra supporting functionality for
[Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) and
[NLsolve.jl](https://github.com/JuliaNLSolvers/NLsolve.jl).


## Available line search algorithms
* `HagerZhang` (Taken from the Conjugate Gradient implementation
by Hager and Zhang, 2006)
* `MoreThuente` (From the algorithm in More and Thuente, 1994)
* `BackTracking` (Described in Nocedal and Wright, 2006)
* `StrongWolfe` (Nocedal and Wright)
* `Static` (Takes the proposed initial step length.)

## Available initial step length procedures
The package provides some procedures to calculate the initial step
length that is passed to the line search algorithm, currently specialized to
be used with Optim and NLsolve.
* `InitialPrevious` (Use the step length from the previous
optimization iteration)
* `InitialStatic` (Use the same initial step length each time)
* `InitialHagerZhang` (Taken from Hager and Zhang, 2006)
* `InitialQuadratic` (Propose initial step length based on a quadratic
interpolation)
* `InitialConstantChange` (Propose initial step length assuming
constant change in step length)

## Installation

To install, simply run the following in the Julia REPL:
@@ -23,3 +45,10 @@ and then run
using LineSearches
```
to load the package.


## References

- W. W. Hager and H. Zhang (2006) "Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent." ACM Transactions on Mathematical Software 32: 113-137.
- Moré, Jorge J., and David J. Thuente. "Line search algorithms with guaranteed sufficient decrease." ACM Transactions on Mathematical Software (TOMS) 20.3 (1994): 286-307.
- Nocedal, Jorge, and Stephen Wright. "Numerical optimization." Springer Science & Business Media, 2006.
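Taken together, the two lists added to `docs/src/index.md` above describe components that are combined through Optim's constructor keywords. A small illustrative pairing, mirroring the API used in the `optim_linesearch.jl` and `optim_initialstep.jl` examples added by this commit (the specific combination below is chosen for illustration only):

```julia
using Optim, LineSearches
import OptimTestProblems.MultivariateProblems
UP = MultivariateProblems.UnconstrainedProblems
prob = UP.examples["Rosenbrock"]

# Pair an initial step length procedure (alphaguess) with a line search
# algorithm (linesearch); any of the listed procedures/algorithms can be swapped in.
algo = Newton(alphaguess = InitialHagerZhang(α0 = 1.0),
              linesearch = BackTracking(order = 3))
res  = Optim.optimize(prob.f, prob.g!, prob.h!, prob.initial_x, method = algo)
```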
13 changes: 12 additions & 1 deletion test/examples.jl
@@ -1,6 +1,17 @@
@testset "Literate examples" begin
# We shouldn't run the examples that require Optim in Travis/CI,
# because an update in LineSearches may be breaking with the
# most recently tagged Optim version.
if get(ENV, "CI", "") == "true"
SKIPFILE = ["optim_linesearch.jl", "optim_initialstep.jl"]
else
SKIPFILE = ["",]
end

EXAMPLEDIR = joinpath(@__DIR__, "../docs/src/examples")
for file in filter!(r"\.jl$", readdir(EXAMPLEDIR))

myfilter(str) = r"\.jl$"(str) && !(str in SKIPFILE)
for file in filter!(myfilter, readdir(EXAMPLEDIR))
@testset "$file" begin
mktempdir() do dir
cd(dir) do
