Feeding the generated "filter images" from a Conv2D to individual LSTMs: is it possible? #8878
Comments
Hi! You can! Just use a Reshape layer to make the tensor 2D instead of 3D. Don't forget to close this issue, and remember that Stack Overflow is more appropriate for questions like this.
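A minimal sketch of what this suggestion means, using numpy as a stand-in for Keras's Reshape layer (both use row-major ordering, so the element layout is the same; the shapes here are hypothetical):

```python
import numpy as np

# Hypothetical Conv2D output for one sample: 8 "filtered images" of 5x5.
conv_out = np.arange(8 * 5 * 5, dtype=np.float32).reshape(8, 5, 5)

# The suggestion: collapse the 3D tensor to 2D so an LSTM, which expects
# (timesteps, features), can consume it.  A plain row-major reshape makes
# each "filtered image" one flat row.
lstm_in = conv_out.reshape(8, 25)

print(lstm_in.shape)  # (8, 25)
```

Note that this plain reshape treats each filter as one "timestep", which is exactly the axis-ordering issue debated later in the thread.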
Hello, thanks for the reply (Reshape alone does not work). However, I have solved the problem, but the accuracy results are not that good ... here it is: https://github.com/rjpg/bftensor/blob/master/Autoencoder/src/RJPGNet.ipynb I guess "ensemble on-the-fly" is not that good. I will do it step by step ... conv (trained with an autoencoder), then each "filtered image" (from the encoder/middle layer) to one LSTM (each trained separately), and then ensemble ... 3 separate training stages.
If you want your LSTMs to be able to communicate with each other, you need to use only one LSTM layer. Bonus: it'll be faster. `out1 = Dropout(0.40)(conv2)`
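To illustrate why a single recurrent layer lets the per-filter streams interact, here is a toy hand-rolled vanilla RNN in numpy (LSTM gating omitted for brevity; all shapes and weights are made up). One shared hidden state mixes all 25 concatenated features at every step, which separate per-filter LSTMs cannot do:

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 timesteps, 25 features (5 filters x 5 variables concatenated per step).
x = rng.standard_normal((5, 25))

# One recurrent layer: a single hidden state reads all 25 features each
# step, so information from different filters can mix.
hidden = 16
W_x = rng.standard_normal((25, hidden)) * 0.1  # input-to-hidden weights
W_h = rng.standard_normal((hidden, hidden)) * 0.1  # hidden-to-hidden weights
h = np.zeros(hidden)
for x_t in x:
    h = np.tanh(x_t @ W_x + h @ W_h)

print(h.shape)  # (16,)
```

In Keras the same idea is simply one `LSTM` layer applied to the reshaped `(timesteps, features)` tensor instead of several parallel ones.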
I was thinking of putting all the images that represent the time series (time samples x variables), e.g. 5x5x5 (number of "filtered images" x time samples (compressed by the conv layers) x variables), into a (25 x 5). This way I flatten the variables and keep the time sequence. But this is not a linear reshape ... I am looking into how to do that ... I think your idea Reshape(5,25) has a flaw: that way I will be mixing timesteps with variables into the LSTM (I think) at each iteration (not good for an LSTM). The goal, if we want to have one LSTM, is to feed one sequence of 5 time samples with the 25 variables (5x5 variables: 5 from each "filtered image" line from the conv layers). Do you understand? The reshape has to be something like this (e.g. 3x3x3 into 3x9):
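The flaw described here can be demonstrated with a small labeled tensor, using the poster's 3x3x3 example and numpy (whose row-major reshape matches Keras's Reshape). Each entry encodes its own coordinates, so you can see exactly what ends up in each row:

```python
import numpy as np

# Toy tensor with axes (filter, timestep, variable); the value
# f*100 + t*10 + v encodes where each entry came from.
f, t, v = np.meshgrid(np.arange(3), np.arange(3), np.arange(3), indexing="ij")
x = f * 100 + t * 10 + v  # shape (3, 3, 3)

# Naive row-major reshape to (3, 9): row 0 is filter 0 flattened, i.e. it
# spans ALL timesteps of one filter -- timesteps and variables get mixed.
naive = x.reshape(3, 9)
print(naive[0])  # [ 0  1  2 10 11 12 20 21 22] -> t=0, t=1, t=2 mixed in one row
```

Row 0 contains entries from three different timesteps, so an LSTM stepping over the rows would not be stepping over time, which is the problem the poster is pointing out.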
I don't think this is possible with only a reshape ... your reshapes result in this:
I'm sorry then, I don't really know how to do that.
Hello, I came across this, which refers to swapaxes as a way to join the images without mixing dimensions ... BUT I am not seeing how to use it to put the output of the convnet's "filtered images" side by side. If you can help me use this swapaxes to do the appropriate reshape, it would be nice!
Hello, I wanted to feed several LSTMs (which take 2D input) with the output of a Conv2D (which has 3D output: [#filter images, width, height]).