
Token merging #5

Open
lalalune opened this issue Nov 1, 2022 · 1 comment
lalalune commented Nov 1, 2022

Very interested in the work you're doing. Speed and memory efficiency are crucial for anyone trying to generate at scale.

We've implemented Token Merging: facebookresearch/ToMe#7

This gives a speed increase of about 15% over the naive implementation at 512x512, and the gain grows as array sizes increase. The memory reduction is significant and can allow much larger image generation.
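For readers unfamiliar with Token Merging, the core idea is bipartite soft matching: split tokens into two sets, match each token in the first set to its most similar token in the second, and average the r best pairs together, shrinking the sequence length that attention has to process. The sketch below is my minimal reading of that algorithm in NumPy, not the code from either repo; the function name and alternating-split choice are illustrative assumptions.

```python
import numpy as np

def bipartite_merge(x: np.ndarray, r: int) -> np.ndarray:
    """Minimal ToMe-style bipartite token merging (illustrative sketch).

    x: (tokens, dim). Tokens are split into two alternating sets a and b;
    each a-token is matched to its most similar b-token, the r best-scoring
    pairs are averaged into b, and the remaining a-tokens are kept.
    """
    a, b = x[::2].copy(), x[1::2].copy()          # alternating bipartite split
    an = a / np.linalg.norm(a, axis=-1, keepdims=True)
    bn = b / np.linalg.norm(b, axis=-1, keepdims=True)
    scores = an @ bn.T                            # cosine similarity, (|a|, |b|)
    best_idx = scores.argmax(axis=-1)             # best b-partner per a-token
    best_score = scores.max(axis=-1)
    order = np.argsort(-best_score)               # most similar pairs first
    merged, kept = order[:r], order[r:]
    for i in merged:                              # average merged a-tokens into b
        j = best_idx[i]
        b[j] = (b[j] + a[i]) / 2
    return np.concatenate([a[kept], b], axis=0)   # (tokens - r, dim)
```

Each merge removes one token, so the output has `tokens - r` rows; applying this inside every attention block is what produces the speed and memory savings at larger resolutions.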


tfernd commented Nov 4, 2022

Thanks for letting me know about it!
I implemented it and got about a 16% speed increase with batch-size=8 and 50% of the tokens merged.

I re-implemented it here: https://github.com/tfernd/sd-fused/blob/master/sd_fused/layers/fn/tome.py
I had two reasons for doing so:

  1. I needed r to be a float or an int, representing either a percentage of tokens or the total number of merged tokens. This makes r easier to set.
  2. To insert del on some tensors, like a, b, and score, a trick I have seen floating around in some SD implementations. I didn't fully debug it, but it seems to reduce memory.
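The two points above can be sketched as follows. This is a hypothetical illustration, not the code in tome.py: `resolve_r` shows one way to accept either a fraction or an absolute count, and `similarity_with_early_free` shows the early-`del` trick on the a/b/score intermediates.

```python
import numpy as np

def resolve_r(r, num_tokens: int) -> int:
    """Point 1: interpret r as a fraction of tokens (float in (0, 1))
    or as an absolute number of tokens to merge (int)."""
    if isinstance(r, float) and 0 < r < 1:
        return int(num_tokens * r)
    return int(r)

def similarity_with_early_free(x: np.ndarray) -> np.ndarray:
    """Point 2: compute the a/b similarity matrix, dropping references
    to intermediates as soon as they are no longer needed, so their
    buffers can be reclaimed earlier (may lower peak memory)."""
    a = x[::2] / np.linalg.norm(x[::2], axis=-1, keepdims=True)
    b = x[1::2] / np.linalg.norm(x[1::2], axis=-1, keepdims=True)
    score = a @ b.T
    del a, b            # free the normalized copies before returning
    return score
```

In eager PyTorch the same `del` pattern releases the tensors' reference counts early, which is presumably why it shows up in SD implementations, though the actual saving depends on allocator behavior.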

Let me know if you want me to do a PR with these changes so I can use your repo instead.
