Attention Mechanism Implementation Issue #1472

Closed
zzjin13 opened this Issue Jan 15, 2016 · 23 comments

@zzjin13

zzjin13 commented Jan 15, 2016

The problem is that I have an output a from an LSTM layer with shape (batch, step, hidden), and an output b from another layer, which holds the weights (or attention), with shape (batch, step). I have no idea how to do an operation that computes the weighted sum given the two outputs, like this:

a0, a1, a2 = a.shape    # (batch, step, hidden)
b0, b1 = b.shape        # (batch, step)
c = np.zeros((a0, a2))  # (batch, hidden), accumulates the weighted sum
for i in range(a0):
    for j in range(a1):
        for k in range(a2):
            c[i][k] += a[i][j][k] * b[i][j]

Can this be done in keras?
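For reference, outside of a Keras model the requested operation is just a batched weighted sum over the step axis. A small NumPy sketch with hypothetical sizes shows the target result; the answers below build the same thing out of Keras layers (an element-wise multiply followed by a sum over steps):

import numpy as np

batch, step, hidden = 4, 5, 3            # hypothetical sizes
a = np.random.rand(batch, step, hidden)  # sequence output
b = np.random.rand(batch, step)          # per-step weights

# c[i, k] = sum_j a[i, j, k] * b[i, j], the same result as the triple loop above
c = np.einsum('ijk,ij->ik', a, b)        # shape (batch, hidden)
# equivalently: c = (a * b[:, :, np.newaxis]).sum(axis=1)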

@jfsantos

Contributor

jfsantos commented Jan 15, 2016

Wouldn't a TimeDistributedMerge layer work for you?

@farizrahman4u

Member

farizrahman4u commented Jan 15, 2016

@zzjin13 Here you go:

from keras.layers.core import *
from keras.layers.recurrent import LSTM
from keras.models import Sequential

input_dim = 32
hidden = 32
step = 10  # length of the input sequences

# The LSTM model - output_shape = (batch, step, hidden)
model1 = Sequential()
model1.add(LSTM(input_dim=input_dim, output_dim=hidden, input_length=step, return_sequences=True))

# The weight model - actual output shape = (batch, step)
# after reshape: output_shape = (batch, step, hidden)
model2 = Sequential()
model2.add(Dense(input_dim=input_dim, output_dim=step))
model2.add(Activation('softmax'))  # Learn a probability distribution over each step.
# Reshape to match the LSTM's output shape, so that we can do element-wise multiplication.
model2.add(RepeatVector(hidden))
model2.add(Permute(2, 1))

# The final model which gives the weighted sum:
model = Sequential()
model.add(Merge([model1, model2], 'mul'))  # Multiply each element by its weight: a[i][j][k] * b[i][j]
model.add(TimeDistributedMerge('sum'))  # Sum the weighted elements over the step axis.

model.compile(loss='mse', optimizer='sgd')

Hope it helps.
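For readers on newer Keras versions, note that the snippet above uses the old Sequential/Merge API; Merge and TimeDistributedMerge are gone in Keras 2. A rough functional-API sketch of the same weighted sum is shown below. It is only a sketch with hypothetical sizes, and it derives the weights from the LSTM output itself rather than from a separate input branch:

from keras.models import Model
from keras.layers import (Input, LSTM, Dense, Activation, Lambda,
                          RepeatVector, Permute, Multiply)
from keras import backend as K

step, input_dim, hidden = 10, 32, 32                    # hypothetical sizes

inputs = Input(shape=(step, input_dim))
a = LSTM(hidden, return_sequences=True)(inputs)         # (batch, step, hidden)

# One scalar score per time step, normalized into a probability distribution.
scores = Dense(1)(a)                                    # (batch, step, 1)
scores = Lambda(lambda x: K.squeeze(x, -1))(scores)     # (batch, step)
alpha = Activation('softmax')(scores)                   # (batch, step)

# Reshape the weights to match the LSTM output for element-wise multiplication.
alpha = RepeatVector(hidden)(alpha)                     # (batch, hidden, step)
alpha = Permute((2, 1))(alpha)                          # (batch, step, hidden)

# Weighted sum over the step axis: a[i][j][k] * b[i][j], summed over j.
weighted = Multiply()([a, alpha])                       # (batch, step, hidden)
context = Lambda(lambda x: K.sum(x, axis=1))(weighted)  # (batch, hidden)

model = Model(inputs, context)
model.compile(loss='mse', optimizer='sgd')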

@zzjin13


zzjin13 commented Jan 17, 2016

I found that this code does not work properly when the input of the LSTM is masked.
How can I solve this with masked input?

@philipperemy


philipperemy commented Mar 28, 2016

@zzjin13 Can you clarify? Do you pad with 0 and give mask=True to all your LSTM layers? Is that what you mean by masking?
Because in this case the logic is exactly the same, masked or not.

@ylqfp


ylqfp commented Apr 28, 2016

@farizrahman4u Thanks so much! I'll have a try.

@philipperemy


philipperemy commented May 23, 2017

I've just written a very simple Hello world for attention with visualisations (with the new Keras syntax)

Have a look: https://github.com/philipperemy/keras-simple-attention-mechanism

It might help you :)

@abali96


abali96 commented May 24, 2017

@philipperemy which form of attention is this? Is there a specific paper you referenced in developing your attention model? Thanks for open sourcing by the way!

@philipperemy


philipperemy commented May 25, 2017

Thanks for your feedback! It's the basic attention mechanism, where you derive a probability distribution over your time steps for an n-D time series (no encoder-decoder here).

@abali96 I didn't have any papers in mind when I implemented it.

But a good paper you can have a look at is this one:

@nnulcm


nnulcm commented Jul 19, 2017

Can I get the trained attention layer weights, such as the probability distribution over each word?

@xiaoleihuang


xiaoleihuang commented Jul 19, 2017

@philipperemy check out Bengio's paper.
I think some steps of the paper's attention are missing in your code.

@philipperemy


philipperemy commented Jul 20, 2017

@xiaoleihuang that one is for Neural Machine Translation, basically sequence-to-sequence attention. My implementation does not deal with that case.

@xiaoleihuang


xiaoleihuang commented Jul 20, 2017

@philipperemy Hi, I am not sure which formula you based this on. Comparing against the formulas in the paper, I do not see some basic steps in your attention_3d_block function (the standard formulas are sketched below for reference):

  1. the dot product that computes the scores e_ij is not calculated;
  2. the weights a_ij are not normalized;
  3. there is a multiplication step in your merge, but shouldn't it be followed by a sum?
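For reference, the standard encoder-decoder attention steps being referred to (in the notation of Bahdanau-style attention) are roughly:

e_{ij} = a(s_{i-1}, h_j)                                 % alignment score between decoder state s_{i-1} and encoder state h_j
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_k \exp(e_{ik})}   % attention weights, normalized with a softmax
c_i = \sum_j \alpha_{ij} h_j                             % context vector: weighted sum of the encoder states
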
@philipperemy


philipperemy commented Aug 18, 2017

@xiaoleihuang I didn't base my implementation on any known paper. My attention here is just a softmax mask inside the network. It basically gives you a normalized distribution of the importance of each time step (or unit) for a given input.

Intrinsically, it should not help the model perform better but it should help the user understand which time steps contribute to the prediction of the model.
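For anyone who wants to inspect those weights, here is a minimal sketch of reading the softmax activations back out of a trained model; the layer name 'attention_weights' and the input batch x_batch are hypothetical:

from keras import backend as K

# Hypothetical: the softmax layer was given name='attention_weights' when the model was built.
attention_layer = model.get_layer('attention_weights')
get_attention = K.function([model.input], [attention_layer.output])

alphas = get_attention([x_batch])[0]  # shape (batch, step)
print(alphas.sum(axis=1))             # each row sums to ~1: one weight per time step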

@xiaoleihuang


xiaoleihuang commented Aug 18, 2017

Hi @philipperemy, I see. I understand your intuition: you take the input, compute a kind of "weights" from it, and let the neural network optimize them automatically (Permute -> Reshape -> Dense). In order to do the element-wise multiplication, you repeat the output of the Dense layer. But there is an issue with your implementation: the attention defined in the paper is a dot product, whereas yours is a vector. What theory supports such an operation? I am a little confused, but I think it is a good idea. Additionally, I found that there might be some issue with the K.function part of your get_activations.

@bicepjai


bicepjai commented Sep 9, 2017

Is this issue/discussion related to #4962?

@v1nc3nt27


v1nc3nt27 commented Oct 12, 2017

@xiaoleihuang There is a dot product inside the Dense layer, isn't there? The permuted input is multiplied with the weight matrix in the Dense layer.

@philipperemy Did you manage to use your attention mechanism successfully in a real project? I've tested it, but the score doesn't really change and the words highlighted look rather random from what I can see. It would be nice to see it working in a bigger context. By the way, what is the Reshape layer for?

@xu-song


xu-song commented Jan 16, 2018

@philipperemy @farizrahman4u
The shapes in farizrahman4u's example have a mistake. I would revise it as follows:

from keras import backend as K

# The weight model - actual output shape = (batch, step)
# after reshape:     output_shape = (batch, step, hidden)
model2 = Sequential()                            # input_shape  = (batch, step, input_dim)
model2.add(Lambda(lambda x: K.mean(x, axis=2)))  # output_shape = (batch, step)
model2.add(Activation('softmax'))                # output_shape = (batch, step)
model2.add(RepeatVector(hidden))                 # output_shape = (batch, hidden, step)
model2.add(Permute((2, 1)))                      # output_shape = (batch, step, hidden)

@Ashima16


Ashima16 commented May 24, 2018

Hi @farizrahman4u, I am getting an "__init__() takes 2 positional arguments but 3 were given" error in the code you posted above. Could you please help? I am new to all this.

@caugusta


caugusta commented May 24, 2018

Hi @Ashima16, that usually means you've passed too many arguments to a function or method. Make sure, for example, that you're passing only model1 and model2 to Merge.

@Ashima16


Ashima16 commented May 25, 2018

Hi @caugusta, thanks for the response.

I tried executing the same code as mentioned above with step = 1, and I am getting the following error:


TypeError                                 Traceback (most recent call last)
<ipython-input-...> in <module>()
     17 #Reshape to match LSTM's output shape, so that we can do element-wise multiplication.
     18 model2.add(RepeatVector(hidden))
---> 19 model2.add(Permute(2, 1))
     20
     21 #The final model which gives the weighted sum:

TypeError: __init__() takes 2 positional arguments but 3 were given

@caugusta


caugusta commented May 26, 2018

Hi @Ashima16, can you run the original code? If not, then it might be that Keras has updated the API since this code was written. Permute() might no longer work the way the original code expects.

@likejazz


likejazz commented Jun 26, 2018

@Ashima16
It needs another pair of parentheses; the dimensions must be passed as a single tuple. Try this:

model2.add(Permute((2, 1)))
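The reason for the TypeError above is that Permute's constructor takes a single tuple of dimensions, so Permute(2, 1) passes one positional argument too many. A toy illustration with a stand-in class (not the real Keras layer):

class Permute(object):
    """Toy stand-in: like keras.layers.Permute, the constructor takes ONE tuple."""
    def __init__(self, dims):  # self + dims = 2 positional arguments
        self.dims = tuple(dims)

try:
    Permute(2, 1)              # 3 positional arguments
except TypeError as e:
    print(e)                   # __init__() takes 2 positional arguments but 3 were given

p = Permute((2, 1))            # OK: the permutation is passed as a single tuple
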
@salmonlon


salmonlon commented Oct 26, 2018

I found that this code does not work properly when the input of the LSTM is masked.
How can I solve this with masked input?

Hi @zzjin13,

I encountered the same issue with the masked input, but I don't think this is related to the implementation of the attention model.

Since the attention model is trying to learn the weighting of the inputs from the input itself, a masked input with leading/trailing 0s in the sequence will trick the attention model into thinking that this unique pattern is what it should pay attention to, so your model ends up paying attention to the mask rather than the actual inputs.

I would recommend applying sample weighting to your training sequences so that the masked positions don't contribute to the gradient update. You can pass a weighting matrix to the sample_weight argument of model.fit().
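A minimal sketch of that idea, assuming the model produces a per-timestep output (which is what Keras' temporal sample weighting expects) and that X and Y are hypothetical zero-padded arrays:

import numpy as np

# Weight matrix of shape (num_samples, timesteps): 0 on padded steps, 1 elsewhere.
weights = (np.abs(X).sum(axis=-1) > 0).astype('float32')

# 'temporal' tells Keras to accept one weight per sample and per timestep.
model.compile(loss='mse', optimizer='sgd', sample_weight_mode='temporal')
model.fit(X, Y, sample_weight=weights, batch_size=32, epochs=10)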
