
Implementing LSTM based sequence to sequence autoencoder #85

Closed
raouflamari opened this issue Apr 25, 2018 · 6 comments

Comments


raouflamari commented Apr 25, 2018

I'm working on reconstructing a 10 timesteps sequence of 32 features.

Here is my Keras model:

from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model

timesteps, dimension = 10, 32

inputs = Input(shape=(timesteps, dimension))
encoded = LSTM(8)(inputs)                    # encode the sequence into an 8-dim vector
decoded = RepeatVector(timesteps)(encoded)   # repeat it once per output timestep
decoded = LSTM(dimension, return_sequences=True)(decoded)  # decode back to (10, 32)
sequence_autoencoder = Model(inputs, decoded)
sequence_autoencoder.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 10, 32)            0         
_________________________________________________________________
lstm_1 (LSTM)                (None, 8)                 1312      
_________________________________________________________________
repeat_vector_1 (RepeatVecto (None, 10, 8)             0         
_________________________________________________________________
lstm_2 (LSTM)                (None, 10, 32)            5248      
=================================================================
Total params: 6,560
Trainable params: 6,560
Non-trainable params: 0
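As a sanity check, those parameter counts match the standard LSTM formula, 4 * ((input_dim + units) * units + units): four gates, each with an input kernel, a recurrent kernel, and a bias. A quick sketch to verify:

```python
def lstm_params(input_dim, units):
    # 4 gates, each with: kernel (input_dim x units),
    # recurrent kernel (units x units), and bias (units).
    return 4 * ((input_dim + units) * units + units)

print(lstm_params(32, 8))   # lstm_1: 1312
print(lstm_params(8, 32))   # lstm_2: 5248
print(lstm_params(32, 8) + lstm_params(8, 32))  # total: 6560
```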

My dataset is a PySpark DataFrame. Each row has a features column containing a wrapped array of shape (10, 32). I guess I need wrapped arrays for both the input and the output. Does elephas support this?
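As a purely local sketch (no elephas involved), the nested arrays can be stacked into the 3-D tensor Keras expects once the rows are collected to the driver. Here `rows` is a hypothetical stand-in for the result of something like `df.select("features").collect()`:

```python
import numpy as np

# Hypothetical stand-in for collected Spark rows: 4 samples, each a
# nested (10, 32) array of features.
timesteps, dimension = 10, 32
rows = [[[float(t * dimension + f) for f in range(dimension)]
         for t in range(timesteps)] for _ in range(4)]

# Stack into the (samples, timesteps, features) tensor Keras expects.
X = np.asarray(rows, dtype="float32")
print(X.shape)  # (4, 10, 32)
```

For an autoencoder, the same array would serve as both input and target. This only works when the data fits on the driver, which is exactly what distributing with elephas is meant to avoid.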

@maxpumperla
Owner

@raouflamari, no, we have no mechanism for this right now (partly because I'm not sure how to do this "properly").

@raouflamari
Author

@maxpumperla Thank you

@nickkimer

@raouflamari just curious, but did you come up with a solution for this? I'm looking at a similar problem and wondering how you were able to reshape the data to fit into an LSTM with shape (number of samples, number of timesteps, number of features).
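For the reshaping part specifically (a sketch with made-up sizes, not tied to elephas), NumPy's reshape turns flat per-sample vectors into the (samples, timesteps, features) layout the LSTM input layer expects:

```python
import numpy as np

# Hypothetical flat data: 4 samples, each 10 timesteps * 32 features laid out flat.
n_samples, timesteps, dimension = 4, 10, 32
flat = np.arange(n_samples * timesteps * dimension, dtype="float32")

# Reshape to (samples, timesteps, features) to match Input(shape=(timesteps, dimension)).
X = flat.reshape(n_samples, timesteps, dimension)
print(X.shape)  # (4, 10, 32)
```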

@ann-kuruvilla

So LSTM-based time series forecasting can't be implemented with elephas right now?

@danielenricocahall
Collaborator

This is currently not supported. I can look into it if it's something that would benefit a lot of users. I would definitely want some input and assistance, as I'm not sure of the best way to implement this.

@danielenricocahall
Collaborator

Moved this issue to the new fork: danielenricocahall#10. Closing this for now but still on the radar!
