Attention explanation #11

Closed
smolendawid opened this issue Dec 5, 2017 · 2 comments

Comments

@smolendawid

Is there a paper or tutorial that describes exactly the attention mechanism used in this repository? I mean the fact that the attention inputs are added rather than concatenated, the use of LinearND, and the fact that there is a convolution. Is there anywhere the theory behind these choices is written up?
Thank you

@awni (Owner) commented Dec 6, 2017

Here are two references you can take a look at:

The attention (NNAttention) is mostly the same. The only difference is that it doesn't do a linear multiply before the nonlinearity. I don't think that matrix multiply is necessary there, since you can add more transformations to the encoded and decoded states in the encoder and decoder respectively. However, I haven't gotten around to testing this rigorously yet (I expect it would make little to no difference).
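For anyone landing here, a minimal sketch of that additive scheme in PyTorch might look like the following. The class name `AdditiveAttention` and its interface are illustrative assumptions, not the repo's actual NNAttention; in particular, the convolution over previous attention weights mentioned in the question is omitted:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    # Illustrative sketch: encoder and decoder states are summed
    # (not concatenated) and passed straight through tanh, with no
    # extra linear multiply before the nonlinearity.
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each time step after tanh

    def forward(self, enc, dec):
        # enc: [batch, time, dim], already projected by the encoder
        # dec: [batch, dim], already projected by the decoder
        combined = torch.tanh(enc + dec.unsqueeze(1))  # [batch, time, dim]
        scores = self.score(combined).squeeze(-1)      # [batch, time]
        weights = torch.softmax(scores, dim=1)         # attention weights
        # Weighted sum of encoder states gives the context vector.
        context = torch.bmm(weights.unsqueeze(1), enc).squeeze(1)  # [batch, dim]
        return context, weights
```

Because the two states are added, any extra per-branch transformation can live in the encoder or decoder itself, which is the point made above about the linear multiply being unnecessary.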

As for LinearND, this is just a helper layer that applies a linear transformation to an input of shape [batch, time, hidden dim] by reshaping it to [batch*time, hidden dim] before the matrix operation, then reshaping the result back.
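A rough sketch of what such a helper might look like (an assumption based on the description above, not necessarily the repo's exact code):

```python
import torch.nn as nn

class LinearND(nn.Module):
    # Applies nn.Linear over the last dimension of an N-D tensor by
    # flattening the leading dimensions, multiplying, and restoring them.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        # x: [batch, time, in_dim] (or any leading shape ending in in_dim)
        size = x.size()
        out = self.fc(x.contiguous().view(-1, size[-1]))  # [batch*time, out_dim]
        return out.view(*size[:-1], out.size(-1))         # [batch, time, out_dim]
```

Note that recent PyTorch versions of nn.Linear already broadcast over extra leading dimensions, so the explicit reshape mostly matters for clarity and for older versions.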

awni closed this as completed Dec 6, 2017
@smolendawid (Author)

I appreciate your help very much, thank you.
