Cross attention control and xformers memory efficient attention #26

Closed
DavidePaglieri opened this issue Dec 13, 2022 · 0 comments

@DavidePaglieri

Hi, awesome paper!

Is it possible to integrate the cross attention control mechanism into the memory efficient attention computation?

From what I understand, cross attention control works by modifying the attention map to make edits, but memory efficient attention computes the output in a different way and never explicitly materializes the attention map. How could the memory efficient attention computation be tweaked to support cross attention control? Is it possible to use the two together?
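
To illustrate what I mean, here is a minimal sketch (not code from this repo): a standard cross-attention forward that materializes the attention probabilities so a controller hook can edit them, next to the fused xformers call, which never returns those probabilities. The `edit_attention_map` hook is hypothetical, just a stand-in for whatever cross attention control would do.

```python
import torch
import xformers.ops as xops


def standard_cross_attention(q, k, v, edit_attention_map=None):
    # q, k, v: (batch * heads, seq_len, head_dim)
    scale = q.shape[-1] ** -0.5
    sim = torch.einsum("b i d, b j d -> b i j", q, k) * scale
    attn = sim.softmax(dim=-1)           # explicit attention map
    if edit_attention_map is not None:
        attn = edit_attention_map(attn)  # cross attention control would edit it here
    return torch.einsum("b i j, b j d -> b i d", attn, v)


def memory_efficient_cross_attention(q, k, v):
    # Fused kernel: the attention probabilities are computed block-wise inside the
    # kernel and never returned, so there is no tensor to intercept and edit.
    return xops.memory_efficient_attention(q, k, v)
```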

Thank you!
