remember initial state #3
Conversation
trying out the change --
/usr/bin/python3.5 /home/vivanov/PycharmProjects/PLSTM/simplePhasedLSTM.py

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
Caused by op '0/RNN/while/PhasedLSTMCell/concat', defined at:
InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [320,1] vs. shape[1] = [32,100]
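The `ConcatOp` error above comes from concatenating two tensors along the feature axis when their batch (first) dimensions disagree. A minimal pure-Python sketch of that shape rule, using the shapes from the log (the helper `concat_axis1` is a hypothetical illustration, not part of the PR's code):

```python
# Hedged illustration of the ConcatOp error: concatenating two 2-D tensors
# along axis 1 (features) requires them to share the same axis-0 (batch) size.
def concat_axis1(a_shape, b_shape):
    """Shape rule for concatenating two 2-D tensors along axis 1."""
    if a_shape[0] != b_shape[0]:
        raise ValueError(
            "Dimensions of inputs should match: shape[0] = %r vs. shape[1] = %r"
            % (list(a_shape), list(b_shape)))
    return (a_shape[0], a_shape[1] + b_shape[1])

print(concat_axis1((32, 1), (32, 100)))   # (32, 101): batch sizes agree
# concat_axis1((320, 1), (32, 100)) raises ValueError, matching the traceback
```

This is consistent with the fix discussed below: the initial states were built for one batch size while the test data arrived with another.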
now it remembers states between runs
outputs = multiPLSTM(_X, lens, FLAGS.n_layers, FLAGS.n_hidden, n_input) | ||
initial_states = [tf.nn.rnn_cell.LSTMStateTuple(tf.zeros([FLAGS.batch_size, FLAGS.n_hidden], tf.float32), tf.zeros([FLAGS.batch_size, FLAGS.n_hidden], tf.float32)) for _ in range(FLAGS.n_layers)] | ||
outputs, initial_states = multiPLSTM(_X, lens, FLAGS.n_layers, FLAGS.n_hidden, n_input, initial_states) |
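The diff builds one zero `LSTMStateTuple` per layer before handing the list to `multiPLSTM`. A runnable sketch of that construction, with a plain `namedtuple` standing in for `tf.nn.rnn_cell.LSTMStateTuple` (`c` = cell state, `h` = hidden state) and nested lists standing in for `tf.zeros` tensors:

```python
from collections import namedtuple

# Stand-in for tf.nn.rnn_cell.LSTMStateTuple (assumption: TF tensors are
# modeled here as plain nested lists of floats).
LSTMStateTuple = namedtuple("LSTMStateTuple", ["c", "h"])

def zeros(batch_size, n_hidden):
    # analogue of tf.zeros([batch_size, n_hidden], tf.float32)
    return [[0.0] * n_hidden for _ in range(batch_size)]

def zero_states(n_layers, batch_size, n_hidden):
    # one (c, h) pair of zero states per layer, as in the diff above
    return [LSTMStateTuple(c=zeros(batch_size, n_hidden),
                           h=zeros(batch_size, n_hidden))
            for _ in range(n_layers)]

states = zero_states(n_layers=2, batch_size=32, n_hidden=100)
print(len(states), len(states[0].c), len(states[0].c[0]))  # 2 32 100
```

Because the zero states are built with a fixed `FLAGS.batch_size`, any run with a different batch size will hit the concat shape mismatch shown in the traceback.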
Any way to make it work with tf.nn.rnn_cell.MultiRNNCell? It's more intuitive and can use cell.zero_state to generate the initial_states.
I think it's possible, but it won't make initial_state any simpler -- same number of words and the same sense.
I tracked the bug in my code -- it's wrong shapes during test -- will fix now
Different batch size during test -- fixed that -- now testing
I want it to remember states like an ordinary multilayer LSTM would. I keep each layer's state in a list. Now PLSTM uses the previous run's end state as the initial state of the current run.
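The statefulness described above can be sketched without TensorFlow: each run starts from the final state of the previous run instead of from zeros. `run_layer` below is a hypothetical stand-in for one PLSTM layer, not the PR's code:

```python
# Minimal sketch of carrying state between runs: the final state of one run
# becomes the initial state of the next, as a stateful multilayer LSTM would.
def run_layer(state, inputs):
    # stand-in for one recurrent layer: fold each input into the state
    for x in inputs:
        state = 0.5 * state + 0.5 * x
    return state

state = 0.0  # zero initial state for the very first run only
for batch in ([1.0, 1.0], [1.0, 1.0]):
    state = run_layer(state, batch)  # end state feeds the next run
print(state)  # 0.9375: the state keeps converging across runs
```

Resetting `state` to zero before every batch would instead restart convergence each time, which is the stateless behavior the PR moves away from.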
/usr/bin/python3.5 /home/vivanov/PycharmProjects/PLSTM/simplePhasedLSTM.py
Here's a full run -- it works
I wonder how to keep track of initial_states in a TensorFlow summary -- can you please help me with that? That would prove it works correctly.
Hey! Very nice job. Can we make the initial state argument optional? I'd like to keep it such that people can also not specify the initial state.

Sure. I'll do it in an hour. Just a flag that will reset it on every iteration.
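The requested optional-argument behavior can be sketched as a default of `None` that falls back to zero states. The name `multi_plstm` and its body here are assumptions for illustration, not the PR's actual `multiPLSTM` implementation:

```python
# Sketch of the requested API: callers who don't care about statefulness can
# simply omit initial_states and get the old stateless behavior.
def multi_plstm(n_layers, n_hidden, initial_states=None):
    if initial_states is None:
        # stateless call: start every layer from zeros, as before the change
        initial_states = [[0.0] * n_hidden for _ in range(n_layers)]
    # ... a real implementation would run the PLSTM layers here ...
    return initial_states

states = multi_plstm(n_layers=2, n_hidden=100)  # no initial state supplied
print(len(states), len(states[0]))  # 2 100
```

A reset flag, as mentioned in the reply, would amount to passing `None` (or freshly zeroed states) on every iteration instead of threading the previous end state through.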
Yeah actually I can do it now! I'll merge, update with this flag and add the summary for the initial state. |
That's great -- looking forward to trying it out.