34 changes: 27 additions & 7 deletions README.md
@@ -17,11 +17,21 @@

# ReservoirComputing.jl

ReservoirComputing.jl provides an efficient, modular, and easy-to-use
implementation of Reservoir Computing models such as Echo State Networks (ESNs).
For information on using this package, please refer to the
[stable documentation](https://docs.sciml.ai/ReservoirComputing/stable/).
Use the
[in-development documentation](https://docs.sciml.ai/ReservoirComputing/dev/)
to take a look at features that have not yet been released.

## Quick Example

To illustrate the workflow of this library, we will show
how to train an ESN to learn the dynamics of the
Lorenz system. As a first step, we gather the data.
For the `Generative` prediction, we need the target data
to be one step ahead of the training data:

```julia
using ReservoirComputing, OrdinaryDiffEq
@@ -52,7 +62,9 @@
target_data = data[:, (shift + 1):(shift + train_len)]
test = data[:, (shift + train_len):(shift + train_len + predict_len - 1)]
```
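The collapsed lines in the hunk above define the Lorenz system and produce
`data`, `shift`, `train_len`, and `predict_len`. As an illustrative sketch of
such a setup (standard Lorenz parameters and solver choice assumed, not the
collapsed code itself):

```julia
using OrdinaryDiffEq

# Lorenz-63 right-hand side with the standard parameters (10, 28, 8/3)
function lorenz!(du, u, p, t)
    du[1] = 10.0 * (u[2] - u[1])
    du[2] = u[1] * (28.0 - u[3]) - u[2]
    du[3] = u[1] * u[2] - (8 / 3) * u[3]
end

prob = ODEProblem(lorenz!, [1.0, 0.0, 0.0], (0.0, 200.0))
data = reduce(hcat, solve(prob, ABM54(); dt=0.02).u)  # 3 x N trajectory matrix

shift, train_len, predict_len = 300, 5000, 1250
input_data = data[:, shift:(shift + train_len - 1)]
```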

Now that we have the data, we can initialize the ESN with the chosen parameters.
Since this is a quick example, we will change as few
parameters as possible:

```julia
input_size = 3
@@ -63,14 +75,17 @@
esn = ESN(input_data, input_size, res_size;
nla_type=NLAT2())
```

The echo state network can now be trained and tested.
If no training method is specified, ordinary least squares regression is used:

```julia
output_layer = train(esn, target_data)
output = esn(Generative(predict_len), output_layer)
```

The results are returned as a matrix (`output` in the code above)
containing the predicted trajectories.
They can now be easily plotted:

```julia
using Plots
@@ -80,7 +95,8 @@
plot!(transpose(test); layout=(3, 1), label="actual")
```

![lorenz_basic](https://user-images.githubusercontent.com/10376688/166227371-8bffa318-5c49-401f-9c64-9c71980cb3f7.png)

One can also visualize the phase space of the attractor and
compare it with the actual one:

```julia
plot(transpose(output)[:, 1],
```

@@ -111,4 +127,8 @@

If you use this library in your work, please cite:

## Acknowledgements

This project was possible thanks to initial funding through
the [Google Summer of Code](https://summerofcode.withgoogle.com/)
2020 program. Francesco M. further acknowledges [ScaDS.AI](https://scads.ai/)
and [RSC4Earth](https://rsc4earth.de/) for supporting the current progress
on the library.
81 changes: 62 additions & 19 deletions src/esn/esn_inits.jl
@@ -13,6 +13,9 @@
a range defined by `scaling`.
- `T`: Type of the elements in the reservoir matrix.
Default is `Float32`.
- `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.

# Keyword arguments

- `scaling`: A scaling factor to define the range of the uniform distribution.
The matrix elements will be randomly chosen from the
range `[-scaling, scaling]`. Defaults to `0.1`.
@@ -55,6 +58,9 @@
elements distributed uniformly within the range [-`scaling`, `scaling`] [^Lu2017].
- `T`: Type of the elements in the reservoir matrix.
Default is `Float32`.
- `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.

# Keyword arguments

- `scaling`: The scaling factor for the weight distribution.
Defaults to `0.1`.
- `return_sparse`: flag for returning a `sparse` matrix.
@@ -106,6 +112,9 @@
Create an input layer for informed echo state networks [^Pathak2018].
- `T`: Type of the elements in the reservoir matrix.
Default is `Float32`.
- `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.

# Keyword arguments

- `scaling`: The scaling factor for the input matrix.
Default is 0.1.
- `model_in_size`: The size of the input model.
@@ -167,6 +176,9 @@
is randomly determined by the `sampling` chosen.
- `T`: Type of the elements in the reservoir matrix.
Default is `Float32`.
- `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.

# Keyword arguments

- `weight`: The weight used to fill the layer matrix. Default is 0.1.
- `sampling_type`: The sampling parameters used to generate the input matrix.
Default is `:bernoulli`.
@@ -239,7 +251,10 @@
function minimal_init(rng::AbstractRNG, ::Type{T}, dims::Integer...;
rng,
T)
else
error("""\n
Sampling type not allowed.
Please use one of :bernoulli or :irrational\n
""")
end
return layer_matrix
end
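Not part of the diff: a brief usage sketch of the two sampling modes handled
above, assuming `minimal_init` can be called directly with the matrix
dimensions like the other initializers in this file (only documented keywords
are used):

```julia
# Bernoulli sampling (the default): entries are +weight or -weight
W_bernoulli = minimal_init(8, 3; weight=0.1, sampling_type=:bernoulli)

# irrational sampling: signs drawn from the digits of an irrational number
W_irrational = minimal_init(8, 3; weight=0.1, sampling_type=:irrational)
```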
@@ -282,7 +297,7 @@
function _create_irrational(irrational::Irrational, start::Int, res_size::Int,
return T.(input_matrix)
end

@doc raw"""
chebyshev_mapping([rng], [T], dims...;
amplitude=one(T), sine_divisor=one(T),
chebyshev_parameter=one(T), return_sparse=true)
@@ -292,14 +307,15 @@
using a sine function and subsequent rows are iteratively generated
via the Chebyshev mapping. The first row is defined as:

```math
W[1, j] = \text{amplitude} \cdot \sin(j \cdot \pi / (\text{sine_divisor}
\cdot \text{n_cols}))
```

for j = 1, 2, …, n_cols (with n_cols typically equal to K+1, where K is the number of input layer neurons).
Subsequent rows are generated by applying the mapping:

```math
W[i+1, j] = \cos(\text{chebyshev_parameter} \cdot \arccos(W[i, j]))
```

# Arguments
@@ -364,22 +380,23 @@
function chebyshev_mapping(rng::AbstractRNG, ::Type{T}, dims::Integer...;
end
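Not part of the diff: a hedged usage sketch of the initializer documented
above, based only on the signature and keywords shown in its docstring:

```julia
# 10x4 input matrix: first row from the sine seed, subsequent rows from the
# Chebyshev map; dense output requested for easy inspection
W_in = chebyshev_mapping(10, 4; chebyshev_parameter=2.0, return_sparse=false)
```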

@doc raw"""
logistic_mapping([rng], [T], dims...;
amplitude=0.3, sine_divisor=5.9, logistic_parameter=3.7,
return_sparse=true)

Generate an input weight matrix using a logistic mapping [^wang2022]. The first
row is initialized using a sine function:

```math
W[1, j] = \text{amplitude} \cdot \sin(j \cdot \pi /
(\text{sine_divisor} \cdot \text{in_size}))
```

for each input index `j`, with `in_size` being the number of columns provided in `dims`. Subsequent rows
are generated recursively using the logistic map recurrence:

```math
W[i+1, j] = \text{logistic_parameter} \cdot W[i, j] \cdot (1 - W[i, j])
```
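To make the two formulas above concrete, here is a small illustrative sketch
(not part of the diff) that fills a matrix exactly as they describe, using the
documented default values:

```julia
amplitude, sine_divisor, logistic_parameter = 0.3, 5.9, 3.7
res_size, in_size = 6, 3
W = zeros(res_size, in_size)
# first row: sine-based seed
for j in 1:in_size
    W[1, j] = amplitude * sin(j * pi / (sine_divisor * in_size))
end
# remaining rows: logistic map applied entrywise to the previous row
for i in 1:(res_size - 1), j in 1:in_size
    W[i + 1, j] = logistic_parameter * W[i, j] * (1 - W[i, j])
end
```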

# Arguments
Expand All @@ -389,7 +406,8 @@ are generated recursively using the logistic map recurrence:
Default is `Float32`.
- `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.

# Keyword arguments

- `amplitude`: Scaling parameter used in the sine initialization of the
first row. Default is 0.3.
- `sine_divisor`: Parameter used to adjust the phase in the sine initialization.
@@ -452,14 +470,15 @@
as follows:
- The first element of the chain is initialized using a sine function:

```math
W[1,j] = \text{amplitude} \cdot \sin( (j \cdot \pi) /
(\text{factor} \cdot \text{n} \cdot \text{sine_divisor}) )
```
where `j` is the index corresponding to the input and `n` is the number of inputs.

- Subsequent elements are recursively computed using the logistic mapping:

```math
W[i+1,j] = \text{logistic_parameter} \cdot W[i,j] \cdot (1 - W[i,j])
```

The resulting matrix has dimensions `(factor * in_size) x in_size`, where
@@ -474,7 +493,8 @@
the number of rows is overridden.
Default is `Float32`.
- `dims`: Dimensions of the matrix. Should follow `res_size x in_size`.

# Keyword arguments

- `factor`: The number of logistic map iterations (chain length) per input,
determining the number of rows per input.
- `amplitude`: Scaling parameter A for the sine-based initialization of
@@ -563,6 +583,9 @@
and scaled spectral radius according to `radius`.
- `T`: Type of the elements in the reservoir matrix.
Default is `Float32`.
- `dims`: Dimensions of the reservoir matrix.

# Keyword arguments

- `radius`: The desired spectral radius of the reservoir.
Defaults to 1.0.
- `sparsity`: The sparsity level of the reservoir matrix,
@@ -590,7 +613,10 @@
function rand_sparse(rng::AbstractRNG, ::Type{T}, dims::Integer...;
rho_w = maximum(abs.(eigvals(reservoir_matrix)))
reservoir_matrix .*= radius / rho_w
if Inf in unique(reservoir_matrix) || -Inf in unique(reservoir_matrix)
error("""\n
Sparsity too low for size of the matrix.
Increase res_size or increase sparsity.\n
""")
end

return return_sparse ? sparse(reservoir_matrix) : reservoir_matrix
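Not part of the diff: a quick check of the spectral-radius rescaling performed
above, assuming the initializer is callable with plain dimensions (keywords
taken from the docstring):

```julia
using LinearAlgebra

W = rand_sparse(50, 50; radius=1.2, sparsity=0.1, return_sparse=false)
maximum(abs.(eigvals(W)))  # ~1.2 after the radius / rho_w rescaling above
```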
@@ -609,6 +635,9 @@
Create and return a delay line reservoir matrix [^Rodan2010].
- `T`: Type of the elements in the reservoir matrix.
Default is `Float32`.
- `dims`: Dimensions of the reservoir matrix.

# Keyword arguments

- `weight`: Determines the value of all connections in the reservoir.
Default is 0.1.
- `return_sparse`: flag for returning a `sparse` matrix.
Expand Down Expand Up @@ -640,8 +669,10 @@ julia> res_matrix = delay_line(5, 5; weight=1)
function delay_line(rng::AbstractRNG, ::Type{T}, dims::Integer...;
weight=T(0.1), return_sparse::Bool=true) where {T <: Number}
reservoir_matrix = DeviceAgnostic.zeros(rng, T, dims...)
@assert length(dims) == 2&&dims[1] == dims[2] """\n
The dimensions must define a square matrix
(e.g., (100, 100))
"""

for i in 1:(dims[1] - 1)
reservoir_matrix[i + 1, i] = weight
@@ -652,7 +683,7 @@
end

"""
delay_line_backward([rng], [T], dims...;
weight=0.1, fb_weight=0.2, return_sparse=true)

Create a delay line backward reservoir with the dimensions specified by
`dims` and the given weights.
Creates a matrix with backward connections as described in [^Rodan2010].
@@ -664,6 +695,9 @@
- `T`: Type of the elements in the reservoir matrix.
Default is `Float32`.
- `dims`: Dimensions of the reservoir matrix.

# Keyword arguments

- `weight`: The weight determines the absolute value of
forward connections in the reservoir. Default is 0.1
- `fb_weight`: Determines the absolute value of backward connections
@@ -709,7 +743,7 @@

"""
cycle_jumps([rng], [T], dims...;
cycle_weight=0.1, jump_weight=0.1, jump_size=3, return_sparse=true)

Create a cycle jumps reservoir with the specified dimensions,
cycle weight, jump weight, and jump size.
Expand All @@ -721,6 +755,9 @@ cycle weight, jump weight, and jump size.
- `T`: Type of the elements in the reservoir matrix.
Default is `Float32`.
- `dims`: Dimensions of the reservoir matrix.

# Keyword arguments

- `cycle_weight`: The weight of cycle connections.
Default is 0.1.
- `jump_weight`: The weight of jump connections.
@@ -779,7 +816,7 @@

"""
simple_cycle([rng], [T], dims...;
weight=0.1, return_sparse=true)

Create a simple cycle reservoir with the specified dimensions and weight.

Expand All @@ -789,6 +826,9 @@ Create a simple cycle reservoir with the specified dimensions and weight.
from WeightInitializers.
- `T`: Type of the elements in the reservoir matrix. Default is `Float32`.
- `dims`: Dimensions of the reservoir matrix.

# Keyword arguments

- `weight`: Weight of the connections in the reservoir matrix.
Default is 0.1.
- `return_sparse`: flag for returning a `sparse` matrix.
@@ -831,7 +871,7 @@

"""
pseudo_svd([rng], [T], dims...;
max_value=1.0, sparsity=0.1, sorted=true, reverse_sort=false,
return_sparse=true)

Returns an initializer to build a sparse reservoir matrix with the given
Expand All @@ -844,6 +884,9 @@ Returns an initializer to build a sparse reservoir matrix with the given
- `T`: Type of the elements in the reservoir matrix.
Default is `Float32`.
- `dims`: Dimensions of the reservoir matrix.

# Keyword arguments

- `max_value`: The maximum absolute value of elements in the matrix.
Default is 1.0
- `sparsity`: The desired sparsity level of the reservoir matrix.