Order of weights in LSTM #3088
I don't know which version you are using now. Try:
from keras.models import Sequential
from keras.layers import LSTM

N = 10  # sequence length; any value works for this demo
model = Sequential()
model.add(LSTM(4, input_dim=5, input_length=N, return_sequences=True))
for weight_var, value in zip(model.layers[0].trainable_weights,
                             model.layers[0].get_weights()):
    print('Param %s:\n%s' % (weight_var, value))
@ChristianThomae The above solution no longer gives the same details; rather, it simply prints 'lstm_1/kernel:0', 'lstm_1/recurrent_kernel:0', and so on. Is there any way to display or find out the exact order of the output of get_weights, especially the order of i, f, c, o? I need to extract some specific values from these for further processing.
If I understand https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py#L1863 correctly, the order is i, f, c, o within the kernel, the recurrent_kernel, and the bias respectively.
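That reading can be checked without Keras at all. Below is a minimal NumPy sketch of the fused LSTM step under the assumed i, f, c, o column order; every name and value here is an illustrative stand-in, not the actual Keras implementation.

```python
import numpy as np

units, input_dim = 4, 5
rng = np.random.default_rng(0)

# Stand-ins for the three tensors a Keras 2 LSTM layer stores.
kernel = rng.standard_normal((input_dim, 4 * units))
recurrent_kernel = rng.standard_normal((units, 4 * units))
bias = np.zeros(4 * units)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_tm1, c_tm1):
    # One fused matmul, then split into the four gate pre-activations.
    z = x @ kernel + h_tm1 @ recurrent_kernel + bias
    z0, z1, z2, z3 = np.split(z, 4, axis=-1)
    i = sigmoid(z0)                   # input gate
    f = sigmoid(z1)                   # forget gate
    c = f * c_tm1 + i * np.tanh(z2)   # new cell state
    o = sigmoid(z3)                   # output gate
    return o * np.tanh(c), c

x = rng.standard_normal((1, input_dim))
h, c = lstm_step(x, np.zeros((1, units)), np.zeros((1, units)))
```

The split into z0..z3 mirrors the fused computation in recurrent.py, which is why the column order of the stored tensors determines the gate order.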
If you are using Keras 2.2.0, then when you print the LSTM layer's weights you should see three tensors: kernel of shape (input_dim, 4 * number_of_units), recurrent_kernel of shape (number_of_units, 4 * number_of_units), and bias of shape (4 * number_of_units,), where number_of_units is your number of neurons. That is because each tensor packs the weights for all four LSTM gates, in this order: i (input gate), f (forget gate), c (cell state), o (output gate). Therefore, in order to extract the per-gate weights you can simply use the slice operator along the last axis. Source: Keras code.
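A minimal sketch of that slicing, using NumPy arrays of the right shapes in place of a real model's get_weights() output (the variable names W_*, U_*, b_* are my own, not Keras's):

```python
import numpy as np

units, input_dim = 4, 5

# Fake weights with the shapes a Keras 2 LSTM's get_weights() returns:
# kernel (input_dim, 4*units), recurrent_kernel (units, 4*units), bias (4*units,)
kernel = np.arange(input_dim * 4 * units, dtype=float).reshape(input_dim, 4 * units)
recurrent_kernel = np.zeros((units, 4 * units))
bias = np.zeros(4 * units)

# Column blocks are ordered i, f, c, o:
W_i = kernel[:, :units]
W_f = kernel[:, units: units * 2]
W_c = kernel[:, units * 2: units * 3]
W_o = kernel[:, units * 3:]

# np.split does the same thing in one call:
U_i, U_f, U_c, U_o = np.split(recurrent_kernel, 4, axis=1)
b_i, b_f, b_c, b_o = np.split(bias, 4)
```

Concatenating the four column blocks back together reproduces the original kernel, which is a quick way to convince yourself the slicing is lossless.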
I'm trying to export an LSTM layer from Keras to a portable C implementation. I accept the possibility there is a bug in my code, but assuming there isn't, I can't figure out the order of the weights / gates in the LSTM layer.
from keras.models import Sequential
from keras.layers import LSTM

N = 10  # sequence length; any value works here
model = Sequential()  # old-style (Keras 1.x) layer arguments below
model.add(LSTM(4, input_dim=5, input_length=N, return_sequences=True))
shapes = [w.shape for w in model.get_weights()]
print(shapes)
[(5, 4),
(4, 4),
(4,),
(5, 4),
(4, 4),
(4,),
(5, 4),
(4, 4),
(4,),
(5, 4),
(4, 4),
(4,)]
What I see is four repeated triples: an input kernel of shape (5, 4), a recurrent kernel of shape (4, 4), and a bias of shape (4,) for each of the four gates. But which weight set goes to which gate? The third set of weights has biases initialized to 1.0, so I'm assuming that's the forget gate.
Looking in recurrent.py, I see something like this:
i = self.inner_activation(z0)
f = self.inner_activation(z1)
c = f * c_tm1 + i * self.activation(z2)
o = self.inner_activation(z3)
But i, f, c, o cannot be the order, because the forget gate's 1.0-initialized biases sit in the third set, not the second. So I'm kind of confused, and would appreciate the help.
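For what it's worth, the twelve-array layout printed above matches the per-gate storage of pre-2.0 Keras, and the 1.0-initialized biases sitting in the third triple suggest the order there was i, c, f, o (input, cell, forget, output) rather than i, f, c, o. Below is a hedged sketch of grouping such a weight list under that assumption, with fake arrays standing in for get_weights() output:

```python
import numpy as np

units, input_dim = 4, 5

# Twelve fake arrays with the shapes the question prints: four (W, U, b)
# triples, one per gate, with the third triple's bias set to 1.0 the way
# a forget gate's bias would be.
weights = []
for gate in range(4):
    weights.append(np.zeros((input_dim, units)))               # W
    weights.append(np.zeros((units, units)))                   # U
    weights.append(np.full(units, 1.0 if gate == 2 else 0.0))  # b

# Assumed per-gate order in old (pre-2.0) Keras: i, c, f, o.
gates = {}
for idx, name in enumerate(("i", "c", "f", "o")):
    W, U, b = weights[idx * 3: idx * 3 + 3]
    gates[name] = (W, U, b)
```

Under this assumption, the forget gate is exactly the triple whose bias vector is all ones, which is consistent with the shapes and bias values reported above.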