First of all, congrats on the paper — it's really interesting, and I'm looking forward to seeing its impact on the space!
I wanted to point out that a few colleagues and I followed a similar approach to introduce translational equivariance in kernelizable attention (as implemented in the Performer) for image classification tasks, and posted it on arXiv at the beginning of February: https://arxiv.org/abs/2102.07680.
While the approach proposed in your work is more generic, we would greatly appreciate it if you could also refer to our prior work in the publication.
Best,
Max