Hope for intermediate states in dynamic_rnn #5731
Comments
You can write a variant of the LSTMCell that returns both state tensors as part of the output, if you need both the c and h state for each time step. If you just need the h state, that's the output of each time step.
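The suggestion above can be sketched outside TensorFlow as well. The following is a minimal NumPy illustration (not TensorFlow's actual `LSTMCell` API; all names and weight shapes here are assumptions) of an LSTM loop that collects both the c and h state at every time step instead of only the final one:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_collect_states(x_seq, W, b, hidden_size):
    """Run an LSTM over x_seq, collecting BOTH h and c at every step.

    Shapes (illustrative): x_seq is (T, input_size),
    W is (input_size + hidden_size, 4 * hidden_size), b is (4 * hidden_size,).
    Returns a list of (h_t, c_t) pairs, one per time step.
    """
    h = np.zeros(hidden_size)
    c = np.zeros(hidden_size)
    states = []
    for x_t in x_seq:
        # One fused matmul for all four gates, then split into i, f, o, g.
        z = np.concatenate([x_t, h]) @ W + b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        states.append((h.copy(), c.copy()))
    return states
```

In a real cell variant you would concatenate `(h, c)` into the per-step output tensor so the RNN loop emits it; the list here plays that role.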
@ebrevdo Thanks for the reply. Really helpful.
@ebrevdo May I ask a question? For each time step of an LSTM there is a memory cell value c, a hidden state h, and an output y. So are you suggesting that the outputs returned by dynamic_rnn contain h rather than y for each step? Would that also mean the returned output_states is actually outputs[-1]? Thank you.
@WolfNiu I can answer your question.
I tried your suggestion but ran into trouble doing it; can you take a look at my stackoverflow post? Thanks
@Sraw Am I able to assign specific values to the LSTM gates? For instance, always 1 for the input gate and....
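One way to experiment with fixed gate values, as asked above, is to clamp a gate's activation after computing it. A hedged NumPy sketch of a single step with the input gate forced to 1 (a real TensorFlow version would subclass the cell and override its call; all names and shapes here are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step_fixed_input_gate(x_t, h, c, W, b, hidden_size):
    """One LSTM step with the input gate clamped to 1.0 (always fully open).

    Shapes (illustrative): x_t is (input_size,), h and c are (hidden_size,),
    W is (input_size + hidden_size, 4 * hidden_size), b is (4 * hidden_size,).
    """
    z = np.concatenate([x_t, h]) @ W + b
    i, f, o, g = np.split(z, 4)
    i_gate = np.ones(hidden_size)  # clamp: ignore the learned input gate
    c_new = sigmoid(f) * c + i_gate * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new
```

The same pattern works for the forget or output gate; just replace the corresponding `sigmoid(...)` term with a constant.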
To implement an attention mechanism, I need the intermediate states to calculate the attention weights, but dynamic_rnn returns only the final state, not the intermediate states.
I hope dynamic_rnn could return the intermediate states so that we can implement an attention mechanism with a dynamic network.
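For context on why the intermediate states are needed: once you have the per-step hidden states, attention scores each of them against a query and takes a weighted sum. A minimal additive-attention sketch in NumPy (all weight names and shapes are illustrative assumptions, not any library's API):

```python
import numpy as np

def additive_attention(states, query, W_s, W_q, v):
    """Score each intermediate hidden state against a query and return the
    attention-weighted context vector.

    Shapes (illustrative): states is (T, hidden), query is (q_dim,),
    W_s is (hidden, attn), W_q is (q_dim, attn), v is (attn,).
    """
    # One score per time step: v^T tanh(W_s h_t + W_q q)
    scores = np.tanh(states @ W_s + query @ W_q) @ v      # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                              # softmax over time
    context = weights @ states                            # (hidden,)
    return context, weights
```

This is exactly why a final state alone is not enough: `states` must contain the hidden state of every time step.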