Update documentation (#157)
* Remove section about benchmarking

* WIP on documentation

* Fix typo

* Remove remainder of plot keyword

* Chebyshev & power method

* Uncomment deploy command

* A -> B

* Move things to separate folders and added the remaining methods in separate files

* More docs

* Add the iterator page

* Update docs for stationary methods

* ::Int consistency

* Typo in IDR(s) docs

* Add notes on underlying iterators

* for _ = _ to for _ in _
haampie authored and andreasnoack committed Aug 21, 2017
1 parent 29ba93b commit db0702b
Showing 38 changed files with 1,309 additions and 1,927 deletions.
32 changes: 25 additions & 7 deletions docs/make.jl
@@ -6,14 +6,32 @@ makedocs(
sitename = "IterativeSolvers.jl",
pages = [
"Home" => "index.md",
"Getting started" => "getting_started.md",
"Preconditioning" => "preconditioning.md",
"Linear systems" => [
"Conjugate Gradients" => "linear_systems/cg.md",
"Chebyshev iteration" => "linear_systems/chebyshev.md",
"MINRES" => "linear_systems/minres.md",
"BiCGStab(l)" => "linear_systems/bicgstabl.md",
"IDR(s)" => "linear_systems/idrs.md",
"Restarted GMRES" => "linear_systems/gmres.md",
"LSMR" => "linear_systems/lsmr.md",
"LSQR" => "linear_systems/lsqr.md",
"Stationary methods" => "linear_systems/stationary.md"
],
"Eigenproblems" => [
"Power method" => "eigenproblems/power_method.md",
],
"SVDL" => "svd/svdl.md",
"Randomized algorithms" => "randomized.md",
"The iterator approach" => "iterators.md",
# "Additional resources" => [
# # "Public" => "library/public.md",
# # "Internal" => "library/internal.md",
# ],
"About" => [
"Contributing" => "about/CONTRIBUTING.md",
"License" => "about/license.md",
]
]
)
84 changes: 1 addition & 83 deletions docs/src/about/CONTRIBUTING.md
@@ -1,7 +1,7 @@
# Contributing

Contributions are always welcome, as are feature requests and suggestions. Feel
free to open an [issue](https://github.com/JuliaMath/IterativeSolvers.jl/issues) or [pull request](https://github.com/JuliaMath/IterativeSolvers.jl/pulls) at any time.

Note that almost every method in the package is documented; to see what a
method does, simply type `?<method>` in the REPL.
@@ -79,85 +79,3 @@ A more detailed explanation of all the functions is in both the public and internal
documentation of `ConvergenceHistory`.

The richest example of the usage of `ConvergenceHistory` is in `svdl`.


## Adding benchmarks

The [Benchmarks](@ref) tab of the documentation is built automatically with Travis.
Any added benchmark is displayed automatically after a successful pull request.

The benchmark suite is built by taking the cross product of the available matrices
and the available methods: if there are `n` methods and `m` linear operators, then `n*m` is
an upper bound on the number of benchmarks generated. Some methods are not compatible with certain
matrices; to avoid generating unnecessary benchmarks, each method and matrix has
traits. The linear operator traits are inspired by [MatrixDepot.jl](http://matrixdepotjl.readthedocs.io/en/latest/properties.html).

**Method traits**

* accessible : The method accesses the linear operator's fields.
* inverse : `A`'s inverse must exist.
* symmetric : `A` must be symmetric.
* pos-def : `A` must be positive definite.

**Linear Operator traits**

* accessible : The operator's fields are accessible.
* inverse : `A`'s inverse exists.
* symmetric : `A` is symmetric.
* pos-def : `A` is positive definite.
* eigen : Part of the eigensystem of the matrix is explicitly known.
* graph : An adjacency matrix of a graph.
* ill-cond : The matrix is ill-conditioned for some parameter values.
* random : The matrix has random entries.
* regprob : The output is a test problem for Regularization Methods.
* sparse : The matrix is sparse.

A benchmark between a method and a linear operator is generated if and only if
the traits of the method are a subset of the traits of the linear operator.

Benchmarks are stored in [Benchmarks.jl](https://github.com/JuliaLang/IterativeSolvers.jl/tree/master/benchmark/Benchmarks.jl).
To add a method use `addEqMethod`.

```julia
addEqMethod(methods, "jacobi", jacobi, ["inverse","accessible"])
addEqMethod(methods, "gauss_seidel", gauss_seidel, ["inverse","accessible"])
addEqMethod(methods, "sor", sor, ["inverse","accessible"])
addEqMethod(methods, "ssor", ssor, ["inverse","accessible", "symmetric"])
addEqMethod(methods, "cg", cg, ["inverse", "symmetric", "pos-def"])
addEqMethod(methods, "gmres", gmres, ["inverse"])
addEqMethod(methods, "lsqr", lsqr, ["inverse"])
addEqMethod(methods, "chebyshev", chebyshev, ["inverse", "accessible"])
```

Here `methods` is a dictionary; the second argument is the name displayed in
the benchmarks, the third is the function, and the fourth is the list of traits.
Every function has a predetermined call in the `buildCall` function.

To add an equation use `addEquation`.

```julia
#Sparse matrix equations
addEquation(
equations, "Poisson", ["Sparse", "Poisson"],
["sparse","inverse", "symmetric", "pos-def", "eigen", "accessible"],
:(matrixdepot("poisson",4))
)

#Function matrix equations
addEquation(
equations, "SOLtest", ["Function", "SOLtest"],
["function","inverse"],
:(buildSol(10)),
10
)
```

Here `equations` is a dictionary; the second argument is the name displayed in
the benchmarks, the third is the path inside the `BenchmarkGroup` type,
the fourth is the list of traits, the fifth is the matrix generator, and
the sixth is the size of the matrix. The size has to be passed when
it is impossible to deduce the dimension from the generator; in this case `buildSol`
generates a function, not a matrix.

To add a custom benchmark, use the `suite` variable directly, which is the package's
`BenchmarkGroup`. To learn more about this type, see [BenchmarkTools.jl](https://github.com/JuliaCI/BenchmarkTools.jl).
15 changes: 15 additions & 0 deletions docs/src/eigenproblems/power_method.md
@@ -0,0 +1,15 @@
# [(Inverse) power method](@id PowerMethod)

Solves the eigenproblem $Ax = λx$ approximately, where $A$ is a general linear map. By default the method converges towards the dominant eigenpair $(λ, x)$, where $|λ|$ is largest. Shift-and-invert can be applied to target a specific eigenvalue near `shift` in the complex plane.

## Usage

```@docs
powm
powm!
invpowm
invpowm!
```

## Implementation details
Storage requirements are three vectors: the approximate eigenvector `x`, the residual vector `r` and a temporary vector. The residual norm lags one iteration behind, as it is computed when $Ax$ is performed. Therefore the final residual norm is in fact even smaller.
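
The idea behind the method can be sketched in a few lines (illustrative only; `power_iteration` below is not the package's implementation):

```julia
using LinearAlgebra

# Plain power iteration: repeatedly apply A and normalize.
function power_iteration(A, x; iters::Int = 100)
    for _ in 1:iters
        x = A * x
        x /= norm(x)        # normalize to avoid overflow/underflow
    end
    λ = dot(x, A * x)       # Rayleigh quotient estimate of the dominant eigenvalue
    return λ, x
end

A = Diagonal([1.0, 2.0, 10.0])
λ, x = power_iteration(A, ones(3))
# λ approaches the dominant eigenvalue 10.0
```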
126 changes: 126 additions & 0 deletions docs/src/getting_started.md
@@ -0,0 +1,126 @@
# Getting started

## Installation

The package can be installed via Julia's package manager.

```julia
julia> Pkg.add("IterativeSolvers")
```

## Interface

Virtually all solvers share the following function declarations:

```julia
solver(A, args...; kwargs...)
solver!(x, A, args...; kwargs...)
```

where `A` is a [linear operator](@ref matrixfree) and `x` an initial guess. The second declaration also updates `x` in-place.

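For example, with Conjugate Gradients (a hedged sketch assuming the default keyword values):

```julia
using IterativeSolvers

A = [4.0 1.0; 1.0 3.0]   # symmetric positive definite
b = [1.0, 2.0]

x = cg(A, b)             # allocating variant
x0 = zeros(2)
cg!(x0, A, b)            # in-place variant overwrites the initial guess x0
```
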
### [Explicit matrices and the matrix-free approach](@id matrixfree)
Rather than constructing an explicit matrix `A` of the type `Matrix` or `SparseMatrixCSC`, it is also possible to pass a general linear operator that performs matrix operations implicitly. This is called the **matrix-free** approach.

For matrix-free types of `A` the following interface is expected to be defined:

- `A*v` computes the matrix-vector product on a `v::AbstractVector`;
- `A_mul_B!(y, A, v)` computes the matrix-vector product on a `v::AbstractVector` in-place;
- `eltype(A)` returns the element type implicit in the equivalent matrix representation of `A`;
- `size(A, d)` returns the nominal dimensions along the `d`th axis in the equivalent matrix representation of `A`.

!!! tip "Matrix-free with LinearMaps.jl"
We strongly recommend [LinearMaps.jl](https://github.com/Jutho/LinearMaps.jl) for matrix-free linear operators, as it implements the above methods already for you; you just have to write the action of the linear map.

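As an illustration, a hypothetical diagonal operator could implement this interface as follows (the type and its name are illustrative, not part of the package; here `A_mul_B!` is defined as a plain function so the sketch is self-contained):

```julia
# A hypothetical matrix-free operator representing multiplication by a diagonal.
struct DiagonalOperator
    d::Vector{Float64}
end

Base.eltype(::DiagonalOperator) = Float64
Base.size(A::DiagonalOperator, dim::Int) = length(A.d)
Base.:*(A::DiagonalOperator, v::AbstractVector) = A.d .* v

# In-place matrix-vector product in the `A_mul_B!(y, A, v)` form described above.
function A_mul_B!(y::AbstractVector, A::DiagonalOperator, v::AbstractVector)
    y .= A.d .* v
    return y
end
```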

### Additional arguments

Keyword names will vary depending on the method, however some of them will always have the same spelling:

- `tol`: (relative) stopping tolerance of the method;
- `verbose`: print information during the iterations;
- `maxiter`: maximum number of allowed iterations;
- `Pl` and `Pr`: left and right preconditioner. See [Preconditioning](@ref Preconditioning);
- `log::Bool = false`: output an extra element of type `ConvergenceHistory` containing the convergence history.

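For instance (a sketch; exact keyword support varies per method):

```julia
using IterativeSolvers

A = rand(10, 10)
for i in 1:10
    A[i, i] += 10.0   # diagonally dominant, so the iteration converges quickly
end
b = rand(10)

x = gmres(A, b, tol = 1e-8, maxiter = 50)
```
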
### `log` keyword

Most solvers accept the `log` keyword. Use it when more information about the
iterations is required, by setting `log` to `true`.

```julia
x, ch = cg(rand(10, 10), rand(10), log = true)
svd, L, ch = svdl(rand(100, 100), log = true)
```

The function now returns an extra value of type `ConvergenceHistory`.

## ConvergenceHistory

A [`ConvergenceHistory`](@ref) instance stores information about a solver run.

Number of iterations.

```julia
ch.iters
```

Convergence status.

```julia
ch.isconverged
```

Stopping tolerances. (Accessed with a `Symbol` key.)

```julia
ch[:tol]
```

Maximum number of iterations per restart. (Restarted methods only.)

```julia
nrests(ch)
```

Number of matrix-vector and transposed matrix-vector products.

```julia
nprods(ch)
```

Data stored at each iteration; the accessed data can be either a vector
or a matrix. It can represent many things, most commonly the residual norm.
(Accessed with a `Symbol` key.)

```julia
ch[:resnorm] #Vector or Matrix
ch[:resnorm, x] #Vector or Matrix element
ch[:resnorm, x, y] #Matrix element
```

```@docs
ConvergenceHistory
```

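Putting this together, a sketch using `gmres` (assuming the accessors above):

```julia
using IterativeSolvers

A = rand(10, 10)
for i in 1:10
    A[i, i] += 10.0   # well-conditioned test matrix
end
b = rand(10)

x, ch = gmres(A, b, log = true)

ch.iters          # number of iterations
ch.isconverged    # convergence status
ch[:resnorm]      # residual norm at each iteration
```
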
### Plotting

`ConvergenceHistory` provides a recipe for use with the package [Plots.jl](https://github.com/tbreloff/Plots.jl), which makes it easy to
plot with different backends. Two recipes are provided:

One for the whole `ConvergenceHistory`.

```julia
plot(ch)
```

The other plots data bound to a key.

```julia
_, ch = gmres(rand(10,10), rand(10), maxiter = 100, log=true)
plot(ch, :resnorm, sep = :blue)
```

*Additional plot keywords*

`sep::Symbol = :white`: color of the line separator in restarted methods.
94 changes: 20 additions & 74 deletions docs/src/index.md
@@ -1,91 +1,37 @@
# IterativeSolvers.jl

IterativeSolvers.jl is a Julia package that provides efficient iterative algorithms for solving large linear systems, eigenproblems, and singular value problems. Most of the methods can be used *matrix-free*.

For bug reports, feature requests and questions please submit an issue. If you're interested in contributing, please see the [Contributing](@ref) guide.

For more information on future methods, have a look at the package [roadmap](https://github.com/JuliaLang/IterativeSolvers.jl/issues/1) for deterministic methods; for randomized algorithms, check [here](https://github.com/JuliaLang/IterativeSolvers.jl/issues/33).

## What method should I use for linear systems?

When solving linear systems $Ax = b$ for a square matrix $A$ there are quite a few options. The typical choices are listed below:

| Method | When to use it |
|---------------------|--------------------------------------------------------------------------|
| [Conjugate Gradients](@ref CG) | Best choice for **symmetric**, **positive-definite** matrices |
| [MINRES](@ref MINRES) | For **symmetric**, **indefinite** matrices |
| [GMRES](@ref GMRES) | For **nonsymmetric** matrices when a good [preconditioner](@ref Preconditioning) is available |
| [IDR(s)](@ref IDRs) | For **nonsymmetric**, **strongly indefinite** problems without a good preconditioner |
| [BiCGStab(l)](@ref BiCGStabl) | Otherwise, for **nonsymmetric** problems |

We also offer [Chebyshev iteration](@ref Chebyshev) as an alternative to Conjugate Gradients when bounds on the spectrum are known.

Stationary methods like [Jacobi](@ref), [Gauss-Seidel](@ref), [SOR](@ref) and [SSOR](@ref) can be used as smoothers to reduce high-frequency components in the error in just a few iterations.

When solving **least-squares** problems we currently offer just [LSMR](@ref LSMR) and [LSQR](@ref LSQR).

## Eigenproblems and SVD

For the Singular Value Decomposition we offer [SVDL](@ref SVDL), which implements the Golub-Kahan-Lanczos procedure.

For eigenvalue problems we have at this point just the [Power Method](@ref PowerMethod) and some convenience wrappers to do shift-and-invert.

## Randomized algorithms

[Randomized algorithms](@ref Randomized) have gained some traction lately. Some of the methods mentioned in [^Halko2011] have been implemented as well, although their quality is generally poor. Also note that many classical methods, such as subspace iteration and BiCG, and recent methods like IDR(s), are "randomized" in some sense.

[^Halko2011]: Halko, Nathan, Per-Gunnar Martinsson, and Joel A. Tropp. "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions." SIAM Review 53.2 (2011): 217-288.
