Doubt on "pytorch-seq2seq/seq2seq/models/EncoderRNN.py" #159
Comments
@caozhen-alex There are two `if self.variable_lengths` checks because the padded sequences are packed with `pack_padded_sequence` before being fed to the RNN, and the RNN output is unpacked with `pad_packed_sequence` afterwards, so the check is needed once for each step.
As for h0 and c0: the LSTM handles its hidden and cell states on its own. If you don't pass an initial state, PyTorch defaults both to zeros, and the `hidden` returned by the LSTM is a tuple of the final hidden and cell states.
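To make this concrete, here is a minimal runnable sketch (a toy batch, not the repo's actual code) showing the pack → RNN → unpack pattern, and that omitting the initial states is equivalent to passing explicit zeros:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.LSTM(input_size=4, hidden_size=3, batch_first=True)

# Toy batch: 2 sequences of lengths 5 and 3, padded to length 5.
embedded = torch.randn(2, 5, 4)
input_lengths = [5, 3]

# Pack before the RNN, unpack after -- hence the two checks in EncoderRNN.
packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths, batch_first=True)
output, (h_n, c_n) = rnn(packed)  # no (h0, c0) passed -> zeros are used
output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)

# Passing explicit zero states gives identical results.
h0 = torch.zeros(1, 2, 3)
c0 = torch.zeros(1, 2, 3)
output2, (h2, c2) = rnn(packed, (h0, c0))
output2, _ = nn.utils.rnn.pad_packed_sequence(output2, batch_first=True)

print(torch.allclose(output, output2))  # True
```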
I'm not sure the maintainer is still active on the project. I hope this helped :)
Closing this for now
@pskrunner14 Hi, thank you very much for your explanation. I got the first problem. For the second one, why did you write `h0, c0 = hidden`? I thought it should be h3, c3, or h5, c5.
@caozhen-alex it's just an arbitrary naming convention I've used. The important part is that `hidden` holds the final hidden and cell states of the LSTM, whatever you choose to call them.
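To illustrate the point about naming (a toy example, not the repo's code): for an `nn.LSTM`, the second return value is a tuple of the final hidden and cell states, and the variable names you unpack it into are entirely up to you:

```python
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=4, hidden_size=3, batch_first=True)
x = torch.randn(2, 5, 4)  # (batch, seq_len, input_size)

output, hidden = rnn(x)
h_n, c_n = hidden  # the names h0/c0, h_n/c_n, etc. are arbitrary
print(h_n.shape, c_n.shape)  # torch.Size([1, 2, 3]) torch.Size([1, 2, 3])
```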
@pskrunner14 I see your point. Thank you very much for your clear explanation. Btw, how can I have a look at
```python
if self.variable_lengths:
    embedded = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths, batch_first=True)
output, hidden = self.rnn(embedded)
if self.variable_lengths:
    output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
```
Hi,
Why are there two `if self.variable_lengths` checks here? And since this code doesn't specify h0 and c0, does that mean they default to zero?
Looking forward to your response. @kylegao91