LSTM Autoencoder #1401
Comments
What you posted does do what your figure describes. You don't need a Graph model for this:

```python
m = Sequential()
m.add(LSTM(5, input_dim=2, return_sequences=True))
m.add(LSTM(5, return_sequences=True))
```

Also, don't train RNNs with SGD. Use RMSprop instead.
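The stacked model above, compiled with RMSprop as suggested, can be sketched in modern tf.keras spelling (the thread itself uses the older Keras 1.x `input_dim` argument; the layer sizes and timestep count here are illustrative):

```python
import numpy as np
from tensorflow import keras

# Two stacked LSTMs; return_sequences=True makes each layer emit
# one output per timestep rather than only the final output.
m = keras.Sequential([
    keras.Input(shape=(7, 2)),              # 7 timesteps, 2 features
    keras.layers.LSTM(5, return_sequences=True),
    keras.layers.LSTM(5, return_sequences=True),
])
# RMSprop rather than plain SGD, per the advice above.
m.compile(optimizer="rmsprop", loss="mse")

x = np.random.rand(4, 7, 2).astype("float32")
y = m.predict(x, verbose=0)
print(y.shape)  # (4, 7, 5): one 5-dim output per timestep
```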
Thank you very much. I assume that I can then save the final-state weights of the 1st LSTM using
In my experience Adam does better than RMSprop.
Hello again. I would like to get the outputs of the first layer of the following model.
When I type
I get
Is there a way to get the final output as a numpy array instead of the weights? I believe the output is something like this:
Thank you in advance.
Hi, you can save the weights of the first LSTM, create a separate model with only one LSTM layer, and set the weights of this LSTM to your saved weights. After that you can use
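The suggestion above can be sketched as follows, assuming modern tf.keras (layer sizes and timestep counts are placeholders, and training of the full model is elided):

```python
import numpy as np
from tensorflow import keras

timesteps, features, units = 7, 2, 5

# The trained two-layer model (training itself elided here).
full = keras.Sequential([
    keras.Input(shape=(timesteps, features)),
    keras.layers.LSTM(units, return_sequences=True),
    keras.layers.LSTM(units, return_sequences=True),
])

# A separate model containing only one LSTM layer...
encoder = keras.Sequential([
    keras.Input(shape=(timesteps, features)),
    keras.layers.LSTM(units, return_sequences=True),
])
# ...whose weights are copied from the first LSTM of the full model.
encoder.layers[0].set_weights(full.layers[0].get_weights())

x = np.random.rand(3, timesteps, features).astype("float32")
out = encoder.predict(x, verbose=0)
print(out.shape)  # (3, 7, 5): the first layer's per-timestep outputs
```

`predict` on the one-layer model then yields exactly the first layer's outputs as a numpy array.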
Truthfully, this is not what I want. I do not want to use the trained LSTM's weights; I want to use the output of the LSTM as an embedding. So I do not want four matrices (the trained weights of the LSTM). This matrix is the output of the 1st LSTM, which was used as input to the second LSTM.
If you feed the same data to this new network with one LSTM layer, you will get exactly what you want as the result of the predictions. You can save these results and use them anywhere you want.
If there are 100 instances, there will be 100 autoencoders. I want an autoencoder to overtrain on a specific instance and extract an embedding. Think of it as compressing all the information of a text into a vector of size 10. I want to use these 100 embeddings as input to another network (size: 100 x 10). I cannot connect all the LSTMs at the same time and feed the original data once more, nor could I connect one LSTM at a time. I just want the output of the 1st layer as a numpy array. How can I get it?
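One common way to get an intermediate layer's output as a numpy array, without copying weights by hand, is a second Model that shares layers with the trained one but stops at the layer of interest. A minimal sketch, assuming modern tf.keras and a functional-API model (the layer name `"encoder"` and all sizes are illustrative):

```python
import numpy as np
from tensorflow import keras

inputs = keras.Input(shape=(7, 2))
h = keras.layers.LSTM(5, return_sequences=True, name="encoder")(inputs)
outputs = keras.layers.LSTM(2, return_sequences=True)(h)
autoencoder = keras.Model(inputs, outputs)

# A model sharing the autoencoder's layers but ending at the first
# LSTM; predict() on it returns that layer's output as an ndarray.
embedder = keras.Model(inputs, autoencoder.get_layer("encoder").output)

x = np.random.rand(1, 7, 2).astype("float32")
embedding = embedder.predict(x, verbose=0)
print(type(embedding).__name__, embedding.shape)  # ndarray (1, 7, 5)
```

Because the two models share the same layer objects, the embeddings always reflect the autoencoder's current (e.g. trained) weights.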
Thank you. This is what I was searching for!
@dpappas, I am also facing the same issue. I tried the above link and I get the attribute error
@fchollet, using "return_sequences=True" does NOT produce what is described in the figure! |
@GUR9000 I think you are right. At every time step, the decoder needs to take the output from its previous step rather than the output of the encoder.
I want to build an LSTM autoencoder.
Code:
But for the decoding step it returns an error. Thank you in advance.
Your encoder LSTM returns only the last output of the LSTM.
Finally, your decoder LSTM needs a proper number of units:
or change the number 10 to the size you want. I suggest you read the documentation and some explanations of LSTMs.
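Putting the pieces of this thread together, one standard shape for such an LSTM autoencoder uses `RepeatVector` to feed the encoder's fixed-size vector to the decoder at every timestep. A sketch in modern tf.keras, with illustrative sizes (the latent size 10 echoes the number discussed above), not the exact code from the thread:

```python
import numpy as np
from tensorflow import keras

timesteps, features, latent = 7, 2, 10

model = keras.Sequential([
    keras.Input(shape=(timesteps, features)),
    # Encoder: no return_sequences -> a single fixed-size vector.
    keras.layers.LSTM(latent),
    # Present that vector to the decoder at every timestep.
    keras.layers.RepeatVector(timesteps),
    # Decoder: one output per timestep.
    keras.layers.LSTM(latent, return_sequences=True),
    # Project each timestep back to the original feature size.
    keras.layers.TimeDistributed(keras.layers.Dense(features)),
])
model.compile(optimizer="rmsprop", loss="mse")

x = np.random.rand(4, timesteps, features).astype("float32")
recon = model.predict(x, verbose=0)
print(recon.shape)  # (4, 7, 2): same shape as the input
```

Note that, as discussed below, this decoder sees the same encoder vector at every step rather than its own previous output, so it is an approximation of the figure's seq2seq scheme.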
@dpappas: Thank you for your answer.
You are right. Maybe in Keras you could do it with the step function of the LSTM, or with a callback, to use the output of the decoder from the previous timestep as input at the new timestep.
Perhaps this has the desired effect:
Hello everyone, and happy new year.
I am trying to create an LSTM autoencoder as shown in the image below.
The encoder consumes the input "the cat sat"
and creates a vector, depicted as the big red arrow.
The decoder takes this vector and tries to reconstruct the sequence
given the position in the sentence.
I would like to save this vector (the big red arrow) to use in another model.
The code I have written so far is the following:
It is not clear to me whether the code above does what I ask for.
If I do not use
return_sequences=True
it yields an error. Should I use a Graph model to do exactly what I ask for?
Thank you in advance for your help.