It seems that in the decoder, both self-attention and TaLK are used. Have you ever tried replacing that attention (the co-attention with the encoder input) with TaLK?
TaLK convolutions, similar to Dynamic Convolutions [Wu et al., 2019], are a method for replacing self-attention. The current form of TaLK convolutions doesn't support replacing co-attention between two different sequences (i.e., the source and target sentences).
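For intuition, here is a minimal PyTorch sketch of a TaLK-style adaptive-window convolution (all names are illustrative, not the repository's actual API, and the hard rounding below stands in for the paper's differentiable soft interpolation). Each token predicts a window over its own sequence and averages the inputs inside it using a prefix sum (summed-area table):

```python
import torch
import torch.nn as nn

class SimplifiedTaLKConv(nn.Module):
    """Minimal, hypothetical sketch of a TaLK-style convolution.

    Each token predicts left/right window offsets; the output is the
    mean of the inputs inside that window, computed in O(n) with a
    prefix sum. NOT the paper's code: the real implementation uses a
    differentiable soft interpolation of the window bounds instead of
    the hard rounding used here.
    """
    def __init__(self, dim, max_offset=3):
        super().__init__()
        self.max_offset = max_offset
        self.offsets = nn.Linear(dim, 2)  # per-token left/right offsets

    def forward(self, x):
        # x: (batch, seq_len, dim). The adaptive window is defined over
        # positions of this same sequence, which is why the method can
        # replace self-attention but not co-attention between a source
        # and a target sequence.
        B, T, D = x.shape
        # Prefix sums with a leading zero row: S[:, t] = sum(x[:, :t]).
        S = torch.cat([x.new_zeros(B, 1, D), x.cumsum(dim=1)], dim=1)
        pos = torch.arange(T, device=x.device, dtype=x.dtype)
        rel = torch.sigmoid(self.offsets(x)) * self.max_offset  # (B, T, 2)
        left = (pos - rel[..., 0]).clamp(min=0)
        right = (pos + rel[..., 1]).clamp(max=T - 1)
        l = left.round().long()        # inclusive lower bound, (B, T)
        r = right.round().long() + 1   # exclusive upper bound, (B, T)
        gather = lambda idx: S.gather(1, idx.unsqueeze(-1).expand(B, T, D))
        window_sum = gather(r) - gather(l)
        window_len = (r - l).to(x.dtype).unsqueeze(-1)
        return window_sum / window_len

# Usage: y = SimplifiedTaLKConv(16)(torch.randn(2, 10, 16))  # (2, 10, 16)
```

Note that there is no query/key pairing here: the predicted offsets index into the same sequence they were computed from. Extending this to co-attention would require offsets predicted from the target to index into the source's prefix sums, which the current formulation doesn't do.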