diff --git a/README.md b/README.md
index 23b51ad7..82154140 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,8 @@
-# ReservoirComputing.jl
+
+
+
+
+
[](https://julialang.zulipchat.com/#narrow/stream/279055-sciml-bridged)
[](https://docs.sciml.ai/ReservoirComputing/stable/)
@@ -11,8 +15,9 @@
[](https://github.com/SciML/ColPrac)
[](https://github.com/SciML/SciMLStyle)
-
+
+# ReservoirComputing.jl
ReservoirComputing.jl provides an efficient, modular, and easy-to-use implementation of Reservoir Computing models such as Echo State Networks (ESNs). For information on using this package, please refer to the [stable documentation](https://docs.sciml.ai/ReservoirComputing/stable/). Use the [in-development documentation](https://docs.sciml.ai/ReservoirComputing/dev/) to take a look at features that have not yet been released.
## Quick Example
diff --git a/docs/src/esn_tutorials/change_layers.md b/docs/src/esn_tutorials/change_layers.md
index 906d693b..f10c869e 100644
--- a/docs/src/esn_tutorials/change_layers.md
+++ b/docs/src/esn_tutorials/change_layers.md
@@ -7,7 +7,9 @@ weights = init(rng, dims...)
#rng is optional
weights = init(dims...)
```
+
Additional keywords can be added when needed:
+
```julia
weights_init = init(rng; kwargs...)
weights = weights_init(rng, dims...)
@@ -32,26 +34,27 @@ predict_len = 2000
ds = Systems.henon()
traj, t = trajectory(ds, 7000)
data = Matrix(traj)'
-data = (data .-0.5) .* 2
+data = (data .- 0.5) .* 2
shift = 200
-training_input = data[:, shift:shift+train_len-1]
-training_target = data[:, shift+1:shift+train_len]
-testing_input = data[:,shift+train_len:shift+train_len+predict_len-1]
-testing_target = data[:,shift+train_len+1:shift+train_len+predict_len]
+training_input = data[:, shift:(shift + train_len - 1)]
+training_target = data[:, (shift + 1):(shift + train_len)]
+testing_input = data[:, (shift + train_len):(shift + train_len + predict_len - 1)]
+testing_target = data[:, (shift + train_len + 1):(shift + train_len + predict_len)]
```
+
Now it is possible to define the input layers and reservoirs we want to compare, and to run the comparison in a simple for loop. The accuracy will be measured using the mean squared deviation (`msd`) from StatsBase.
```@example minesn
using ReservoirComputing, StatsBase
res_size = 300
-input_layer = [minimal_init(; weight = 0.85, sampling_type=:irrational),
- minimal_init(; weight = 0.95, sampling_type=:irrational)]
-reservoirs = [simple_cycle(; weight=0.7),
- cycle_jumps(; cycle_weight=0.7, jump_weight=0.2, jump_size=5)]
+input_layer = [minimal_init(; weight = 0.85, sampling_type = :irrational),
+ minimal_init(; weight = 0.95, sampling_type = :irrational)]
+reservoirs = [simple_cycle(; weight = 0.7),
+ cycle_jumps(; cycle_weight = 0.7, jump_weight = 0.2, jump_size = 5)]
-for i=1:length(reservoirs)
+for i in 1:length(reservoirs)
esn = ESN(training_input, 2, res_size;
input_layer = input_layer[i],
reservoir = reservoirs[i])
@@ -60,9 +63,10 @@ for i=1:length(reservoirs)
println(msd(testing_target, output))
end
```
+
As shown above, changing layers in ESN models is straightforward. Be sure to check the API documentation for a full list of reservoirs and input layers.
## Bibliography
-[^rodan2012]: Rodan, Ali, and Peter Tiňo. “Simple deterministically constructed cycle reservoirs with regular jumps.” Neural computation 24.7 (2012): 1822-1852.
-[^rodan2010]: Rodan, Ali, and Peter Tiňo. “Minimum complexity echo state network.” IEEE transactions on neural networks 22.1 (2010): 131-144.
\ No newline at end of file
+[^rodan2012]: Rodan, Ali, and Peter Tiňo. “Simple deterministically constructed cycle reservoirs with regular jumps.” Neural Computation 24.7 (2012): 1822-1852.
+[^rodan2010]: Rodan, Ali, and Peter Tiňo. “Minimum complexity echo state network.” IEEE Transactions on Neural Networks 22.1 (2010): 131-144.
diff --git a/src/ReservoirComputing.jl b/src/ReservoirComputing.jl
index 5798dff3..8a9abab5 100644
--- a/src/ReservoirComputing.jl
+++ b/src/ReservoirComputing.jl
@@ -45,8 +45,25 @@ end
"""
Generative(prediction_len)
-This prediction methodology allows the models to produce an autonomous prediction, feeding the prediction into itself to generate the next step.
-The only parameter needed is the number of steps for the prediction.
+A prediction strategy that enables models to generate autonomous multi-step
+forecasts by recursively feeding their own outputs back as inputs for
+subsequent prediction steps.
+
+# Parameters
+
+ - `prediction_len::Int`: The number of future steps to predict.
+
+# Description
+
+The `Generative` prediction method allows a model to perform multi-step
+forecasting by using its own previous predictions as inputs for future predictions.
+This approach is especially useful in time series analysis, where each prediction
+depends on the preceding data points.
+
+At each step, the model takes the current input, generates a prediction,
+and then incorporates that prediction into the input for the next step.
+This recursive process continues until the specified
+number of prediction steps (`prediction_len`) is reached.
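+
+# Example
+
+A minimal sketch (not a runnable doctest), assuming an `esn` has already been
+built and that `train` has returned an `output_layer`, as in the package's
+quick example:
+
+```julia
+output_layer = train(esn, target_data)
+# roll the trained model forward autonomously for 100 steps,
+# feeding each prediction back in as the next input
+output = esn(Generative(100), output_layer)
+```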
"""
struct Generative{T} <: AbstractPrediction
prediction_len::T
@@ -60,7 +77,27 @@ end
"""
Predictive(prediction_data)
-Given a set of labels as `prediction_data`, this method of prediction will return the corresponding labels in a standard Machine Learning fashion.
+A prediction strategy for supervised learning tasks,
+where a model predicts labels based on a provided set
+of input features (`prediction_data`).
+
+# Parameters
+
+ - `prediction_data`: The input data used for prediction, typically structured as a matrix
+ where each column represents a sample, and each row represents a feature.
+
+# Description
+
+The `Predictive` prediction method is a standard approach
+in supervised machine learning tasks. It uses the provided input data
+(`prediction_data`) to produce corresponding labels or outputs based
+on the learned relationships in the model. Unlike generative prediction,
+this method does not recursively feed predictions into the model;
+instead, it operates on fixed input data to produce a single batch of predictions.
+
+This method is suitable for tasks like classification,
+regression, or other use cases where the input features
+and the number of steps are predefined.
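+
+# Example
+
+A minimal sketch, assuming an `esn` and a trained `output_layer` as in the
+`Generative` example above; `new_input` stands in for any feature matrix with
+the same number of rows the model was trained on:
+
+```julia
+# one output column per column of `new_input`; predictions are not fed back
+output = esn(Predictive(new_input), output_layer)
+```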
"""
function Predictive(prediction_data)
prediction_len = size(prediction_data, 2)
diff --git a/src/esn/deepesn.jl b/src/esn/deepesn.jl
index 478a31b3..636e0db1 100644
--- a/src/esn/deepesn.jl
+++ b/src/esn/deepesn.jl
@@ -55,11 +55,6 @@ temporal features.
- `matrix_type`: The type of matrix used for storing the training data.
Default is inferred from `train_data`.
-# Returns
-
- - A `DeepESN` instance configured according to the provided parameters
- and suitable for further training and prediction tasks.
-
# Example
```julia
@@ -73,10 +68,6 @@ deepESN = DeepESN(train_data, 10, 100, depth = 3, washout = 100)
output_layer = train(deepESN, target_data)
# predictions are obtained by calling the trained model with a prediction type
prediction = deepESN(Predictive(new_data), output_layer)
```
-
-The DeepESN model is ideal for tasks requiring the processing of sequences with
-complex temporal dependencies, benefiting from the multiple reservoirs to capture
-different levels of abstraction and temporal dynamics.
"""
function DeepESN(train_data,
in_size::Int,
diff --git a/src/esn/esn.jl b/src/esn/esn.jl
index e8322581..f53939a5 100644
--- a/src/esn/esn.jl
+++ b/src/esn/esn.jl
@@ -21,9 +21,9 @@ Creates an Echo State Network (ESN) using specified parameters and training data
- `train_data`: Matrix of training data (columns as time steps, rows as features).
- `variation`: Variation of ESN (default: `Default()`).
- - `input_layer`: Input layer of ESN (default: `DenseLayer()`).
- - `reservoir`: Reservoir of the ESN (default: `RandSparseReservoir(100)`).
- - `bias`: Bias vector for each time step (default: `NullLayer()`).
+ - `input_layer`: Input layer of ESN.
+ - `reservoir`: Reservoir of the ESN.
+ - `bias`: Bias vector for each time step.
- `reservoir_driver`: Mechanism for evolving reservoir states (default: `RNN()`).
- `nla_type`: Non-linear activation type (default: `NLADefault()`).
- `states_type`: Format for storing states (default: `StandardStates()`).