This repository has been archived by the owner on Sep 27, 2020. It is now read-only.

Input shape issue and lack of bias. #15

Open
mikumeow opened this issue May 9, 2019 · 9 comments

Comments


mikumeow commented May 9, 2019

The first problem is that ConvLSTM.forward uses the same x = input at every timestep.
I think the input shape of the forward function should be changed to

[sequence, bsize, channel, x, y] 

instead of the original

[bsize, channel, x, y]

and the x = input line should be changed to

x = input[step]

so that each timestep gets its own frame.
I am still studying whether it's appropriate to loop over layers inside the loop over timesteps, but after training your current code (with the change mentioned above), I can get decent outcomes.
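The suggested fix can be sketched as follows. This is a hedged sketch, not the repository's actual code: the class names, the fused-gate cell (the repo uses separate Wxi/Whi and peephole convolutions), and the call signatures are my assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Simplified cell: one conv produces all four gates (i, f, o, g)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2, bias=True)

    def forward(self, x, state):
        if state is None:  # zero-initialize (h, c) at the first timestep
            b, _, hgt, wid = x.shape
            state = (x.new_zeros(b, self.hid_ch, hgt, wid),
                     x.new_zeros(b, self.hid_ch, hgt, wid))
        h, c = state
        i, f, o, g = torch.chunk(self.conv(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class ConvLSTM(nn.Module):
    """Outer loop over timesteps, inner loop over layers."""
    def __init__(self, cells):
        super().__init__()
        self.cells = nn.ModuleList(cells)

    def forward(self, input):
        # input: [sequence, bsize, channel, x, y] rather than [bsize, channel, x, y]
        states = [None] * len(self.cells)   # per-layer (h, c), carried across steps
        outputs = []
        for step in range(input.size(0)):
            x = input[step]                 # the proposed x = input[step] change
            for i, cell in enumerate(self.cells):
                states[i] = cell(x, states[i])
                x = states[i][0]            # this layer's h feeds the next layer
            outputs.append(x)
        return torch.stack(outputs)         # [sequence, bsize, hidden, x, y]
```

With two stacked cells, an input of shape [6, 2, 1, 8, 8] produces an output of shape [6, 2, 4, 8, 8].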

The second problem is that ConvLSTMCell has no biases. For example, in

ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci)

while it should be something like

ci = torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci + self.Bci)

But I don't know whether such constants would affect the backward phase.
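For what it's worth, a bias added this way would just be another learnable parameter, and autograd treats it like any other weight in the backward phase. Here is a minimal sketch of the input gate only; the shapes and the peephole-weight layout are my assumptions, not taken from the repo:

```python
import torch
import torch.nn as nn

class InputGateWithBias(nn.Module):
    """Illustrative input gate only: Wci is the peephole weight, Bci the proposed bias."""
    def __init__(self, in_ch, hid_ch, k, shape):
        super().__init__()
        pad = k // 2
        self.Wxi = nn.Conv2d(in_ch, hid_ch, k, 1, pad, bias=False)
        self.Whi = nn.Conv2d(hid_ch, hid_ch, k, 1, pad, bias=False)
        self.Wci = nn.Parameter(torch.zeros(1, hid_ch, *shape))
        self.Bci = nn.Parameter(torch.zeros(1, hid_ch, 1, 1))  # the proposed bias term

    def forward(self, x, h, c):
        return torch.sigmoid(self.Wxi(x) + self.Whi(h) + c * self.Wci + self.Bci)

# Autograd computes a gradient for Bci, so the added constant does not break backward:
gate = InputGateWithBias(1, 2, 3, (4, 4))
x = torch.randn(1, 1, 4, 4)
h = torch.randn(1, 2, 4, 4)
c = torch.randn(1, 2, 4, 4)
gate(x, h, c).sum().backward()
```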

P.S. I'm a beginner myself, so maybe I'm wrong. Please reply :)


mikumeow commented May 9, 2019

By the way, this is the result of the code after the x = input[step] change.
I'm training on a moving-squares dataset adapted from Keras's ConvLSTM2D example code here.
After 1 epoch × 5000 batches × 6 sequences per batch, here is a random result and the ground truth:
[image: predicted frame next to the ground truth]
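The moving-squares data can be generated with a sketch like this (my own approximation of the Keras ConvLSTM2D example's generator, not the exact code; all parameter choices are assumptions):

```python
import numpy as np

def moving_squares(n_samples=10, n_frames=6, size=40, seed=None):
    """Movies where a few squares drift linearly across the frame over time."""
    rng = np.random.default_rng(seed)
    movies = np.zeros((n_samples, n_frames, size, size, 1), dtype=np.float32)
    for s in range(n_samples):
        for _ in range(rng.integers(3, 8)):        # 3-7 squares per movie
            x0, y0 = rng.integers(10, size - 10, size=2)  # start away from the border
            dx, dy = rng.integers(-1, 2, size=2)   # per-frame motion in {-1, 0, 1}
            w = rng.integers(2, 4)                 # half-width of the square
            for t in range(n_frames):
                x, y = x0 + dx * t, y0 + dy * t
                movies[s, t, x - w:x + w, y - w:y + w, 0] = 1.0
    return movies
```

Shuffling the sequence axis to the front then yields the [sequence, bsize, channel, x, y] layout discussed above.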

So great, it worked! Cheers to the author!
I'll try to add bias to ConvLSTMCell sometime later.

@yaorong0921

@mikumeow
Hi,
could you please share your code that works on the Keras example?
Many thanks :-)

@mikumeow
Author

@yaorong0921
Hello!
Thanks for replying. But sorry, I am still adjusting this code, because my later experiments with it revealed some issues.

I am currently checking things like the loss function and how this model handles batches, comparing against the TensorFlow implementation of ConvLSTM.
Also, I was wrong about the bias, because the model already adds a bias here:

self.Wxi = nn.Conv2d(self.input_channels, self.hidden_channels, self.kernel_size, 1, self.padding, bias=True)
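As a quick illustration (the channel counts here are made up), bias=True gives the conv a learnable bias, so self.Wxi(x) already includes a bias term:

```python
import torch
import torch.nn as nn

# bias=True attaches a learnable bias of shape [out_channels], added at every
# output position, so Wxi(x) already contains a bias term for the gate.
Wxi = nn.Conv2d(3, 8, kernel_size=3, stride=1, padding=1, bias=True)
print(Wxi.bias.shape)   # torch.Size([8])

x = torch.zeros(1, 3, 5, 5)
out = Wxi(x)            # with zero input, every output position is just the bias
```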

@EthanHe001

EthanHe001 commented Jul 25, 2019

@mikumeow I ran into a similar problem (the same x at every step, i.e. no sequence dimension). I think your method should be right; I will try it and report back. It doesn't lack bias, though. Also, ConvLSTM doesn't seem to need a step parameter, since the sequence length can be taken from x.size()[0].

@jhhuang96

@mikumeow is it appropriate to loop over layers within the loop over timesteps?

@emjay73

emjay73 commented Aug 8, 2019

I think iterating over timesteps seems reasonable.

@tianfudhe

@mikumeow is it appropriate to loop over layers within the loop over timesteps?

It seems OK, since any hidden state is independent of future hidden states, so there is no need to compute all timesteps' hidden states ahead of time. @mikumeow also mentioned getting decent results with this code after changing to x = input[step].

@ghost

ghost commented Jan 3, 2020


Hi, I agree with your question about the lack of bias...

But I am only a beginner with ConvLSTM; I can understand the principle but cannot use it yet. Since you have successfully used the author's ConvLSTM PyTorch code, could you please send me the code behind that successful prediction image (from the Keras example)?
I would be very grateful, because learning ConvLSTM is really painful.

@to19851985

Could you please send me the code of this successful prediction image (from the Keras example)? Thank you.


7 participants