Implementing LSTM based sequence to sequence autoencoder #85
Comments
@raouflamari, no, we have no mechanism for this right now (partly because I'm not sure how to do this "properly").
@maxpumperla Thank you
@raouflamari Just curious, but did you come up with a solution for this? I am looking at a similar problem and am wondering how you were able to reshape the data to fit into an LSTM with shape (number of samples, number of timesteps, number of features).
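For reference, the reshaping in question is usually a plain NumPy operation. A minimal sketch, assuming 50 samples stored as flat rows of 10 timesteps × 32 features (the sample count and random data are illustrative, not from this thread):

```python
import numpy as np

# Hypothetical data: 50 samples recorded as flat rows of
# 10 timesteps * 32 features = 320 values each.
flat = np.random.rand(50, 320)

# Reshape into the (samples, timesteps, features) layout Keras LSTMs expect.
sequences = flat.reshape(50, 10, 32)

print(sequences.shape)  # (50, 10, 32)
```

The reshape is free (no data is copied) as long as the total number of elements matches.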
So an LSTM implementation of time-series forecasting can't be done with elephas right now, correct?
This is currently not supported. I can look into it if it's something that would benefit a lot of users. I would definitely want some input and assistance, as I do not know the best way to implement this.
Moved this issue to the new fork: danielenricocahall#10. Closing this for now, but it's still on the radar!
I'm working on reconstructing a 10-timestep sequence of 32 features.
Here is my Keras model:
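(The author's actual model was not captured in this thread. A typical Keras LSTM sequence-to-sequence autoencoder for a (10, 32) input might look like the following sketch; the latent dimension of 64 is an assumption.)

```python
from tensorflow.keras.layers import LSTM, Dense, Input, RepeatVector, TimeDistributed
from tensorflow.keras.models import Model

timesteps, n_features, latent_dim = 10, 32, 64  # latent_dim is an assumption

inputs = Input(shape=(timesteps, n_features))
encoded = LSTM(latent_dim)(inputs)                     # compress the sequence to one vector
decoded = RepeatVector(timesteps)(encoded)             # repeat it once per output timestep
decoded = LSTM(latent_dim, return_sequences=True)(decoded)
outputs = TimeDistributed(Dense(n_features))(decoded)  # back to 32 features per timestep

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
```

Training would then fit the model with the same 3D array as both input and target, e.g. `autoencoder.fit(X, X, ...)`.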
My dataset is a PySpark DataFrame. Each row has a features column as a wrapped array of shape (10, 32). I guess I need wrapped arrays for both the input and the output. Does elephas support this?
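As an aside, collecting a Spark wrapped-array column to the driver typically yields nested Python lists, which NumPy can stack into the 3D tensor directly. A minimal sketch with stand-in data (no running Spark session is assumed; the 5-row sample and zero values are illustrative):

```python
import numpy as np

# Hypothetical stand-in for rows collected from a Spark DataFrame whose
# 'features' column is a wrapped array of shape (10, 32): each row is a
# list of 10 timesteps, each holding 32 feature values.
rows = [[[0.0] * 32 for _ in range(10)] for _ in range(5)]

# Stack the nested lists into the (samples, timesteps, features) tensor.
X = np.array(rows)
print(X.shape)  # (5, 10, 32)
```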