
Implement Additive Attention #4

Closed
8 tasks done
Rishit-dagli opened this issue Aug 30, 2021 · 0 comments · Fixed by #8


Rishit-dagli commented Aug 30, 2021

Implement Additive Attention as a TensorFlow layer (a minimal sketch of the computation follows the task list below):

- [x] Figure out using rotary embeddings
- [x] Add masking functionality
- [x] Relative position embeddings
- [x] Calculate query attention logits
- [x] Calculate global query tokens
- [x] Calculate key attention logits
- [x] Calculate global key tokens
- [x] Add queries as residuals
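
The checklist follows the additive attention scheme from Fastformer: per-token query logits produce a single global query, which is mixed element-wise into the keys; per-token key logits then produce a global key, which is mixed into the values before the queries are added back as residuals. Below is a minimal single-head sketch of that computation in TensorFlow. It is only an illustration under assumptions, not the layer that landed in #8: the class and variable names (`FastAttention`, `to_qkv`, `w_q`, `w_k`, `to_out`) are hypothetical, multi-head splitting is omitted, and the rotary / relative-position embedding items from the list are left out.

```python
import tensorflow as tf


class FastAttention(tf.keras.layers.Layer):
    """Single-head additive attention sketch (hypothetical names)."""

    def __init__(self, dim, **kwargs):
        super().__init__(**kwargs)
        self.scale = dim ** -0.5
        # One projection producing queries, keys, and values.
        self.to_qkv = tf.keras.layers.Dense(dim * 3, use_bias=False)
        # Learnable vectors that score each token for the global summaries.
        self.w_q = tf.keras.layers.Dense(1, use_bias=False)
        self.w_k = tf.keras.layers.Dense(1, use_bias=False)
        self.to_out = tf.keras.layers.Dense(dim)

    def call(self, x, mask=None):
        # x: (batch, seq_len, dim); mask (optional): (batch, seq_len), 1 = keep.
        q, k, v = tf.split(self.to_qkv(x), 3, axis=-1)

        # Query attention logits -> global query token.
        q_logits = tf.squeeze(self.w_q(q), -1) * self.scale
        if mask is not None:
            q_logits += (1.0 - tf.cast(mask, q_logits.dtype)) * -1e9
        q_attn = tf.nn.softmax(q_logits, axis=-1)
        global_q = tf.einsum('bn,bnd->bd', q_attn, q)

        # Mix the global query into every key element-wise.
        p = k * global_q[:, None, :]

        # Key attention logits -> global key token.
        k_logits = tf.squeeze(self.w_k(p), -1) * self.scale
        if mask is not None:
            k_logits += (1.0 - tf.cast(mask, k_logits.dtype)) * -1e9
        k_attn = tf.nn.softmax(k_logits, axis=-1)
        global_k = tf.einsum('bn,bnd->bd', k_attn, p)

        # Mix the global key into the values, then add the queries as residuals.
        u = v * global_k[:, None, :]
        return self.to_out(u) + q


# Quick shape check: (2, 128, 64) in -> (2, 128, 64) out.
layer = FastAttention(dim=64)
out = layer(tf.random.normal((2, 128, 64)))
```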
@Rishit-dagli Rishit-dagli created this issue from a note in Fast-Transformer (To do) Aug 30, 2021
@Rishit-dagli Rishit-dagli added this to the 0.1.0 milestone Aug 31, 2021
Fast-Transformer automation moved this from To do to Done Aug 31, 2021