Wrapping pyESN as a Keras layer #5
Interesting idea. But I have a hunch that the cleanest way to do this would be to extend Keras' own SimpleRNN layer - after all, echo state networks are just well-initialized vanilla RNNs with training restricted to a readout layer. So if there's a way to define a custom initialiser for the SimpleRNN's recurrent weights and exempt them from optimisation, most of the work should already be done. Except for all the little things I'm forgetting now.
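A minimal sketch of that suggestion, assuming the TensorFlow-backed Keras API; the initializer, reservoir size, and the choice to freeze the whole recurrent layer (rather than only its recurrent kernel) are assumptions for illustration, not anything from pyESN or this thread:

```python
import numpy as np
from tensorflow import keras

def esn_recurrent_init(shape, dtype=None):
    # ESN-style initializer: random recurrent weights rescaled so the
    # spectral radius sits just below 1 (value chosen for illustration).
    w = np.random.uniform(-0.5, 0.5, size=shape)
    w *= 0.95 / np.max(np.abs(np.linalg.eigvals(w)))
    return w.astype("float32")

model = keras.Sequential([
    # Freezing the whole layer exempts both input and recurrent weights
    # from optimisation; only the Dense readout below is trained.
    keras.layers.SimpleRNN(
        200,
        activation="tanh",
        recurrent_initializer=esn_recurrent_init,
        trainable=False,
        input_shape=(None, 1),   # (timesteps, input features)
    ),
    keras.layers.Dense(1),       # trainable readout
])
model.compile(optimizer="adam", loss="mse")
```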
(less ambitiously, you could just use the existing pyESN as a data preprocessor - i.e. hack …
I'll probably try to extend Keras' SimpleRNN layer. If I get it to work, I'll post here. Thanks!
cool, I'm curious what comes out of it
Hi @jonahweissman! I'm currently trying to implement the same ESN+Keras architecture (in order to train with a GPU and use different numbers of layers in combination with my ESN). Have you made any progress there, or did you find something interesting elsewhere? :) One important feature that I am not sure how to handle is integrating the feedback from the readout layers into the reservoir in this architecture. Best,
@gurbain I made a pull request to pyESN with the work I did. The basic idea is that instead of the readout layer being fed into a linear model, it is fed through a Keras model to produce outputs.
I initially tried to create a Keras layer, and also really struggled with this issue. I found an example of this technique in lstm_seq2seq, but it was too complicated for me. I believe pyESN integrates feedback from the readout layer into the reservoir by default.
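As described, the shape of that approach is roughly the following sketch; how the reservoir states come out of pyESN is not shown here, and the layer sizes are placeholders rather than anything from the pull request:

```python
from tensorflow import keras

n_reservoir = 200  # placeholder reservoir size

# Small Keras model standing in for the linear readout: it maps
# reservoir states to outputs and is the only part that gets trained.
readout = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(n_reservoir,)),
    keras.layers.Dense(1),
])
readout.compile(optimizer="adam", loss="mse")
# readout.fit(reservoir_states, targets)  # arrays produced by the ESN
```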
Hi @jonahweissman! Super nice, that is exactly what I started to implement yesterday! However, I am hitting the same issues I was already facing with a different implementation (and I am starting to suspect it is a theoretical problem rather than a bug): when I run the ESN + Keras readout freely and include the feedback, the full closed-loop system (ESN + Keras + feedback) starts diverging. I tested many different spectral radius, damping, noise, and readout-size values without success... Did you face the same problem? Do you have any idea why? You can find a simple example of what I mean here (with the best result I could get; generally it diverges even faster): https://gist.github.com/gurbain/ba52af78d7be6eb2a23f48af15da2ce0
@gurbain That's happened to me a few times. I don't really know how to fix it, but it's possible that your spectral radius of 1.4 is the problem. The literature on Echo State Networks tends to recommend a spectral radius of less than (but close to) 1. The idea is that it's stable, but just barely.
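For reference, pyESN's constructor exposes the spectral radius directly; the other argument values below are placeholders, not the values used in the gist:

```python
from pyESN import ESN

# Same kind of setup with the spectral radius brought just under 1;
# reservoir size and seed are illustrative, not the gist's values.
esn = ESN(n_inputs=1, n_outputs=1, n_reservoir=500,
          spectral_radius=0.95, random_state=42)
```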
Well, I forgot to change it in the example, but the problem is different: it does not work for any spectral radius, nor for any task! However, the same setup works perfectly when I swap the readout layer for a simple RLS rule, so I think there is something I need to investigate in more detail there...
Hmm, that's odd. Sorry I can't be more helpful; I don't really have a very firm grasp on the theory behind all of this.
Hello, I am looking at the example and I do not understand why you use np.ones(..) on the X set for training:
And if I want to do multivariate time-series classification with 8 variables and a 5-class output (e.g. strong down, weak down, neutral, weak up, strong up), can I change the input and the output layer to:
? How do I define the number of input time-steps for the input shape? For example, when defining the input for an LSTM, the three dimensions have the shape (batch_size, timesteps, variables). What is the input shape here?
@rjpg Both examples use the ESN as a frequency generator. You give the model some kind of periodic time series, and it learns to match the waveform. Instead of this project, you might be better served by just Keras. They have an example of LSTM sequence classification.
Hello, I have my problem running with an LSTM and it is OK. I would like to try an ESN on the same problem ... I was trying to see how to adapt the inputs and outputs of my problem to this implementation. Here is my problem with the LSTM; the base NN is like:
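(A minimal Keras sketch matching the description that follows: 9 input series, 128 time steps, and a 5-class softmax output; the LSTM size is an assumption, not the original code.)

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(128, 9)),   # (timesteps, variables)
    keras.layers.Dense(5, activation="softmax"),   # 5-class output
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```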
As you can see, I have 9 time-series, each with 128 time-steps, and an output layer of 5 neurons with softmax to classify the multivariate time-series input into one of 5 classes. I would like to see if an ESN brings some improvement. PS: in my example, the "baseline" accuracy is 20% because I have 5 well-balanced classes (a model responding randomly would reach 20% accuracy). With the LSTM I reach 29%, 9% above baseline. It sounds poor, but for my problem it is already good...
I've been using pyESN for a few weeks now, and I've found it to be very helpful. Thank you!
I've been using it in conjunction with a Keras network, and I was thinking that it could be really handy to turn this into a Keras layer. Keras encourages making your own layers, so I don't think it would be too technically challenging. I don't have a lot of experience with object-oriented Python, but I'd be willing to take a shot at it.
My initial thinking is to create a Reservoir layer that wraps the current pyESN class and then use Keras's existing Dense layers for input and output weights. Would this be a good addition to this project? Is there anything else I should consider?
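A rough sketch of what such a layer could look like. It implements a fixed-reservoir update directly in TensorFlow ops instead of calling into the numpy-based pyESN class (which cannot easily run inside a Keras graph), so treat it as an illustration of the proposed design rather than the actual wrapper; all sizes and scalings are assumptions:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class Reservoir(keras.layers.Layer):
    """Fixed ESN-style reservoir; only layers stacked on top are trained."""

    def __init__(self, units, spectral_radius=0.95, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.spectral_radius = spectral_radius

    def build(self, input_shape):
        n_in = input_shape[-1]
        w = np.random.uniform(-0.5, 0.5, (self.units, self.units))
        w *= self.spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
        # Non-trainable reservoir and input weights, fixed at build time.
        self.w_res = tf.constant(w, dtype=tf.float32)
        self.w_in = tf.constant(
            np.random.uniform(-1.0, 1.0, (n_in, self.units)),
            dtype=tf.float32)

    def call(self, inputs):
        # inputs: (batch, timesteps, features); returns the final state.
        state = tf.zeros((tf.shape(inputs)[0], self.units))
        for t in range(inputs.shape[1]):
            state = tf.tanh(tf.matmul(inputs[:, t, :], self.w_in)
                            + tf.matmul(state, self.w_res))
        return state

# The readout would then just be an ordinary trainable Dense layer:
model = keras.Sequential([
    keras.Input(shape=(128, 9)),
    Reservoir(200),
    keras.layers.Dense(5, activation="softmax"),
])
```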