Linear Transformers are Fast Weight Memory Systems #70

Open
angeloskath opened this issue Mar 10, 2021 · 0 comments
Labels
new-attention Add a new attention implementation

Comments

@angeloskath (Collaborator)

We should add the update rule for linear attention defined in https://arxiv.org/pdf/2102.11174.pdf (Schlag et al., 2021).

It will probably need more than simply updating CausalLinearAttention.
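For reference, the rule in question is the delta-rule fast weight update: with feature-mapped keys and queries φ(k_t), φ(q_t), the memory is updated as W_t = W_{t−1} + β_t (v_t − v̄_t) φ(k_t)^T, where v̄_t = W_{t−1} φ(k_t) is the value currently stored under k_t, and the output is y_t = W_t φ(q_t). Below is a minimal PyTorch sketch of that recurrence; the function name, shapes, and the sequential Python loop are illustrative assumptions, not the library's API (a practical implementation would likely need a dedicated kernel, as with the existing causal linear attention).

```python
# Minimal sketch of the delta-rule update from arXiv:2102.11174.
# Assumptions (not part of fast-transformers): q and k have already been
# passed through a feature map (e.g. elu(x) + 1) and normalized as in the
# paper; beta is a per-step write strength in [0, 1].
import torch

def delta_rule_attention(q, k, v, beta):
    """q, k: (N, L, Dk), v: (N, L, Dv), beta: (N, L) -> output (N, L, Dv)."""
    N, L, Dk = q.shape
    Dv = v.shape[-1]
    W = q.new_zeros(N, Dv, Dk)  # fast weight memory, one matrix per sequence
    outputs = []
    for t in range(L):
        k_t, v_t = k[:, t], v[:, t]
        # Value currently stored under the key k_t.
        v_bar = torch.einsum("nde,ne->nd", W, k_t)
        # Delta rule: move the stored value toward v_t, scaled by beta_t.
        W = W + torch.einsum("n,nd,ne->nde", beta[:, t], v_t - v_bar, k_t)
        # Read out with the (feature-mapped) query.
        outputs.append(torch.einsum("nde,ne->nd", W, q[:, t]))
    return torch.stack(outputs, dim=1)
```

With β_t = 1 the stored value for k_t is fully overwritten; with β_t = 0 the memory is left untouched at that step. The purely additive update in CausalLinearAttention corresponds to dropping the v̄_t correction (and keeping the running normalizer), which is why this change probably does not fit into that class as-is.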

@angeloskath added the new-attention label on Mar 10, 2021
Projects: None yet
Development: No branches or pull requests
1 participant