
Feeding the generated "filter images" from Conv2D to individual LSTMs (?) is it possible? #8878

Closed
rjpg opened this issue Dec 24, 2017 · 6 comments


@rjpg

rjpg commented Dec 24, 2017

Hello, I want to feed several LSTMs (which take 2D input) with the output of a Conv2D network (which produces a 3D output: [#filter images, width, height]).

@gabrieldemarmiesse
Contributor

Hi! You can! Just use a Reshape layer to make the tensor 2D instead of 3D.

Don't forget to close this issue, and remember that Stack Overflow is more appropriate for questions like this.
Keras GitHub issues are for reporting bugs in the Keras codebase. Thank you!
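(For illustration: Keras's Reshape layer performs a row-major reshape of each sample, so its effect can be sketched with plain NumPy. The 5x5x5 sample shape below is assumed, not from the issue.)

```python
import numpy as np

# One hypothetical sample from a Conv2D output: 5 "filter images" of 5x5.
sample = np.arange(5 * 5 * 5).reshape(5, 5, 5)

# A Keras Reshape((5, 25)) layer flattens the trailing axes of each sample
# in row-major order, giving the 2D-per-sample tensor an LSTM can consume.
flat = sample.reshape(5, 25)
print(flat.shape)  # (5, 25)
```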

@rjpg
Author

rjpg commented Dec 30, 2017

Hello, thanks for the reply (a reshape alone does not work). However, I have solved the problem, but the accuracy results are not that good ... here it is:

https://github.com/rjpg/bftensor/blob/master/Autoencoder/src/RJPGNet.ipynb

I guess "ensemble on-the-fly" is not that good. I will do it step by step ... conv (trained with an autoencoder), then each "filtered image" (from the encoder/middle layer) fed to one LSTM (each trained separately), and then ensemble ... 3 separate training stages.

@gabrieldemarmiesse
Contributor

If you want your LSTMs to be able to communicate with each other, you need to use only one LSTM layer. Bonus: it'll be faster.

out1 = Dropout(0.40)(conv2)
out1 = Reshape((5, 25))(out1)  # Reshape takes a tuple of target dims
out_lstm = LSTM(40)(out1)      # input_shape is unnecessary in the functional API

This way the LSTM processes image after image, unlike your code, where each image is processed independently (making the LSTM layer no more useful than a Dense layer).

@rjpg
Author

rjpg commented Dec 30, 2017

I was thinking of putting all the images that represent the time series (time samples x variables), e.g. 5x5x5 (number of "filtered images" x time samples (compressed by the conv layers) x variables), into a (5 x 25) shape. That way the variables are flattened and the time sequence is kept. But this is not a plain reshape ... I am looking into how to do that ...

I think your idea Reshape((5, 25)) has a flaw: that way I would be mixing timesteps with variables in the LSTM input (I think) at each step (not good for an LSTM). The goal, if we want a single LSTM, is to feed it one sequence of 5 time samples, each with 25 variables (5x5 variables: 5 from each "filtered image" row out of the conv layers).

Do you understand?

The reshape has to be something like this (e.g. 3x3x3 into 3x9):

filter img1        filter img2        filter img3
[[1,2,3],          [[4,5,6],          [[7,8,9],
 [1,2,3],           [4,5,6],           [7,8,9],
 [1,2,3]]           [4,5,6]]           [7,8,9]]

intended result:
[[1,2,3,4,5,6,7,8,9],
 [1,2,3,4,5,6,7,8,9],
 [1,2,3,4,5,6,7,8,9]]

I don't think this is possible with only a reshape ...

Your reshape results in this:
[[1,2,3,1,2,3,1,2,3],
 [4,5,6,4,5,6,4,5,6],
 [7,8,9,7,8,9,7,8,9]]
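(For what it's worth, the intended result above is reachable in NumPy by moving the time axis in front of the filter axis before flattening; a plain reshape alone cannot do it, as noted:)

```python
import numpy as np

# The three 3x3 "filter images" from the example above, stacked along
# axis 0: shape = (filters, timesteps, variables) = (3, 3, 3).
x = np.array([
    [[1, 2, 3], [1, 2, 3], [1, 2, 3]],   # filter img1
    [[4, 5, 6], [4, 5, 6], [4, 5, 6]],   # filter img2
    [[7, 8, 9], [7, 8, 9], [7, 8, 9]],   # filter img3
])

# Transpose so time comes first, THEN flatten the last two axes.
# Without the transpose, reshape(3, 9) mixes timesteps across rows.
y = x.transpose(1, 0, 2).reshape(3, 9)
print(y)
# [[1 2 3 4 5 6 7 8 9]
#  [1 2 3 4 5 6 7 8 9]
#  [1 2 3 4 5 6 7 8 9]]
```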

@gabrieldemarmiesse
Contributor

I'm sorry then, I don't really know how to do that.
I hope you find the answer!

@rjpg
Author

rjpg commented Dec 30, 2017

Hello,

I came across this, which refers to swapaxes to join the images without mixing dimensions ...

https://stackoverflow.com/questions/36905288/how-would-you-reshape-a-collection-of-images-from-numpy-arrays-into-one-big-imag

BUT I am not seeing how to use it to put the convnet's output "filtered images" side by side.

If you could help me use swapaxes to do the appropriate reshape, that would be nice!
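(One possible answer, sketched with NumPy and assuming a channels_first layout of (batch, filters, timesteps, variables); the 2x5x5x5 shapes are illustrative, not from the issue. Inside a Keras model, the same axis swap can be expressed with a Permute((2, 1, 3)) layer, whose dims are 1-indexed and exclude the batch axis, followed by Reshape((5, 25)):)

```python
import numpy as np

# Hypothetical conv output for a batch of 2 samples:
# shape = (batch, filters, timesteps, variables) = (2, 5, 5, 5).
batch = np.arange(2 * 5 * 5 * 5).reshape(2, 5, 5, 5)

# swapaxes(1, 2) moves time in front of the filter axis; the reshape
# then concatenates all filters' variables side by side per timestep.
lstm_input = batch.swapaxes(1, 2).reshape(2, 5, 25)
print(lstm_input.shape)  # (2, 5, 25)
```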
