
Question about Encoder_attn in decoder layer #1

Closed
gaopengcuhk opened this issue Jul 18, 2020 · 1 comment
Comments

@gaopengcuhk

It seems that in the decoder, both self-attention and TaLK are used. Have you ever tried replacing the self-attention (the co-attention with the encoder input) with TaLK?

@lioutasb
Owner

TaLK convolutions, similar to Dynamic convolutions [Wu et al., 2019], are a method for replacing self-attention. The current form of TaLK convolutions doesn't support replacing the co-attention between two different sequences (i.e., the source and target sentences).
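
For illustration, here is a minimal sketch of how a decoder layer in this setting is typically wired: a single-sequence token mixer replaces the self-attention sub-layer, while standard multi-head attention is kept for the encoder-decoder attention. `TaLKConvPlaceholder` below is a hypothetical stand-in (a causal depthwise convolution), not the repository's actual TaLK implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaLKConvPlaceholder(nn.Module):
    """Hypothetical stand-in for a TaLK convolution: any causal,
    single-sequence token mixer slots into the decoder the same way."""

    def __init__(self, embed_dim, kernel_size=3):
        super().__init__()
        self.pad = kernel_size - 1
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size, groups=embed_dim)

    def forward(self, x):
        # x: (tgt_len, batch, embed_dim) -> mix tokens along the time axis
        y = x.permute(1, 2, 0)               # (batch, embed_dim, tgt_len)
        y = F.pad(y, (self.pad, 0))          # left-pad so mixing stays causal
        y = self.conv(y)
        return y.permute(2, 0, 1)            # back to (tgt_len, batch, embed_dim)


class DecoderLayerSketch(nn.Module):
    def __init__(self, embed_dim=512, num_heads=8):
        super().__init__()
        # The convolution only mixes tokens of one sequence, so it can
        # replace the decoder self-attention sub-layer ...
        self.self_mixer = TaLKConvPlaceholder(embed_dim)
        # ... but encoder-decoder attention relates two different sequences
        # (target queries vs. source keys/values), so regular attention stays.
        self.encoder_attn = nn.MultiheadAttention(embed_dim, num_heads)
        self.ffn = nn.Sequential(
            nn.Linear(embed_dim, 4 * embed_dim),
            nn.ReLU(),
            nn.Linear(4 * embed_dim, embed_dim),
        )

    def forward(self, x, encoder_out):
        # x: (tgt_len, batch, embed_dim), encoder_out: (src_len, batch, embed_dim)
        x = x + self.self_mixer(x)
        x = x + self.encoder_attn(x, encoder_out, encoder_out)[0]
        return x + self.ffn(x)


# Quick shape check
layer = DecoderLayerSketch(embed_dim=32, num_heads=4)
tgt = torch.randn(7, 2, 32)     # (tgt_len, batch, embed_dim)
src = torch.randn(11, 2, 32)    # (src_len, batch, embed_dim)
print(layer(tgt, src).shape)    # torch.Size([7, 2, 32])
```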
