ESN Parameter Effects #157
Hello,
In your case, if you don't specify the spectral radius, its value is around ~10. You can get its value using the `reservoirpy.observables.spectral_radius` function:

```python
import numpy as np

from reservoirpy.nodes import Reservoir
from reservoirpy.observables import spectral_radius

my_reservoir = Reservoir(
    units=1000,
    # sr=1.0
)
my_reservoir.initialize(np.random.normal(size=(12, 1)))
print(spectral_radius(my_reservoir.W))
```

The spectral radius of the recurrent connection weights has a significant impact on task performance, so it is not a surprise that you get such bad results when you don't specify it. You can read more about its impact in the documentation.
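As a cross-check without reservoirpy, the spectral radius is just the largest absolute eigenvalue of `W`, computable directly with NumPy. The sketch below also illustrates, under the assumption that the default reservoir draws roughly standard-normal weights with sparse connectivity (the ~10 figure above is consistent with the circular law estimate `sr ≈ sqrt(N * density)`), how to rescale a matrix to any target spectral radius:

```python
import numpy as np

# Illustrative sketch, not reservoirpy's internal code: build a random
# recurrent matrix scaled so its spectral radius lands near 1 (circular
# law: sr of an N x N iid matrix with entry std s is about s * sqrt(N)).
rng = np.random.default_rng(0)
N = 500
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))

# Spectral radius = largest absolute eigenvalue.
sr = np.max(np.abs(np.linalg.eigvals(W)))

# Rescaling to an arbitrary target spectral radius is a simple
# multiplication, since eigenvalues scale linearly with the matrix.
target = 0.9
W_scaled = W * (target / sr)
```

This is essentially what passing `sr=0.9` to `Reservoir` asks the library to do for you.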
Hello, It seems you have more units in the default reservoir (500) than in the optimized version (150). That could explain the performance decrease. As for the smoothness of your output, many parameters come into play, with high inter-dependencies.
Hello, Thank you for testing new tasks with ReservoirPy. Several factors could influence your performance here.
This kind of plot will help you understand which hyperparameters give the most robust results. I hope this helps. If you show us this kind of plot for all the hyperparameters, we can help you interpret them.
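A hedged sketch of the kind of exploration being suggested (not the maintainers' exact workflow): a log-uniform random search over spectral radius and leak rate, collecting one score per trial so you can later scatter-plot score against each hyperparameter and see which one the task is most sensitive to. The `evaluate` function here is a hypothetical placeholder; replace it with your actual train-and-score routine.

```python
import numpy as np

rng = np.random.default_rng(42)

def evaluate(sr, lr):
    # Placeholder standing in for "train the ESN with these
    # hyperparameters and return the test RMSE". The synthetic
    # bowl shape just makes the sketch runnable end to end.
    return (np.log10(sr) - np.log10(0.9)) ** 2 + (lr - 0.3) ** 2

trials = []
for _ in range(100):
    sr = 10 ** rng.uniform(-2, 1)   # spectral radius, log-uniform in [0.01, 10]
    lr = rng.uniform(0.05, 1.0)     # leak rate, uniform
    trials.append((sr, lr, evaluate(sr, lr)))

best = min(trials, key=lambda t: t[2])
print(best)
```

Scattering the per-trial scores against `sr` (log scale) and against `lr` is what produces the robustness plots mentioned above; a flat cloud means the task is insensitive to that hyperparameter, a sharp valley means it matters.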
I am trying to perform system identification using reservoir computing. I have data consisting of the time stamps and the values of the input and output of a system. I have attached this data (named "train_trial_60s.xlsx"), as well as a script (named "rpy_RCP.txt") that reads in this data, splits it into training and testing (trial) sets, trains an ESN on the training set, and evaluates it on the testing set. The original data was a .mat file (train_trial_60s.mat), and the original code was a Python script (rpy_RCP.py).
Originally, the reservoir of my ESN contained 1000 units, a leaking rate `lr = 1.0`, and a spectral radius `sr = 1.0`. This resulted in an RMSE of 45.719 and an R^2 of -272.420. However, when I repeat the experiment with the leaking rate and spectral radius omitted (keeping the number of units the same), the regression improves to an RMSE of 0.738 and an R^2 of 0.918.
What does this mean? Why does removing the leaking rate and spectral radius affect performance to this degree? In particular, the documentation states that the default value of the leaking rate is 1, so why does explicitly setting it lead to worse performance?
I also noticed that, in the case where the ESN is defined with just the number of units, the regression result appears very "noisy" despite the better performance. I am curious why this is as well.
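On the noisiness question, one relevant mechanism can be illustrated in isolation. A simplified sketch (the full ESN update also involves the recurrent term and a tanh nonlinearity, which this deliberately omits): each leaky-integrator unit updates as `x[t] = (1 - lr) * x[t-1] + lr * u[t]`, which is a low-pass filter. With `lr = 1` the state tracks the noisy input exactly; with `lr < 1` it averages over past inputs and the trajectory becomes smoother.

```python
import numpy as np

# Noisy sinusoid standing in for a reservoir's driving signal.
rng = np.random.default_rng(0)
u = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.3 * rng.normal(size=500)

def leaky(u, lr):
    # Leaky-integrator update without recurrence or nonlinearity,
    # kept minimal to isolate the filtering effect of lr.
    x = np.zeros_like(u)
    for t in range(1, len(u)):
        x[t] = (1 - lr) * x[t - 1] + lr * u[t]
    return x

rough = leaky(u, 1.0)    # lr = 1: state follows the noisy input
smooth = leaky(u, 0.1)   # lr = 0.1: state is a smoothed version

# Total variation as a crude roughness measure: lower means smoother.
tv = lambda x: np.abs(np.diff(x)).sum()
print(tv(rough), tv(smooth))
```

So a leak rate of 1 (the default) gives the reservoir no intrinsic smoothing, which is consistent with a noisy-looking prediction even when the fit metrics are good.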