Compute embedding distances with torch.cdist #1459
Conversation
cc @patrickvonplaten, not what we discussed, but this is an effective three-liner |
The documentation is not available anymore as the PR was closed or merged. |
LGTM! thanks! |
Hey @blefaudeux, how do you use this feature? I think it's only used in decoding if |
It's in the superres path; not doing this just eats 4GB of RAM for nothing when decoding. It's very much not perfect though, and I'm looking at better options, but it's better than nothing :) |
improves on #1434 |
cc @patil-suraj, if you're interested in high res superres |
Thanks a lot for the PR, this looks good to me! Will run the slow tests and then merge.
Also for high resolution upscaling, I'm exploring another option in #1521, and it seems to work well.
Thanks for the link! For this PR I think it's always worth it because there's no tradeoff: it's just better than the previous three lines. But it's not enough to enable high res, that's for sure! No issues with borders when splitting the decode? Another option, if the convs were depthwise, would have been to compute them depth-first (à la Reformer a few years ago), but that's probably not a reasonable option here, so I guess splitting is as good as it gets? |
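To illustrate the splitting idea being discussed: the simplest form is slicing the decode along the batch dimension, which caps peak memory and sidesteps border issues entirely (spatial tiling, as in the other PR, additionally needs overlap blending at tile borders). A minimal hedged sketch — `decode_in_slices` and the stand-in `decode_fn` are illustrative names, not the actual diffusers code:

```python
import torch

def decode_in_slices(decode_fn, latents, slice_size=1):
    # Hypothetical sketch: decode the latent batch in small slices so only
    # one slice's worth of intermediate activations is alive at a time,
    # trading one large decode for several smaller ones.
    outs = [decode_fn(latents[i:i + slice_size])
            for i in range(0, latents.shape[0], slice_size)]
    return torch.cat(outs)

# Stand-in for a VAE decoder (element-wise, so slicing is exactly equivalent).
decode_fn = lambda z: z * 2.0
latents = torch.randn(4, 3, 8, 8)
out = decode_in_slices(decode_fn, latents, slice_size=2)
assert torch.equal(out, decode_fn(latents))
```

Batch slicing is exact for any per-sample decoder; splitting spatially instead is what raises the border question above, since conv receptive fields cross tile boundaries.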
small but mighty
20 GB -> 16 GB RAM use for some workloads, same speed (you don't have to materialize intermediates with torch.cdist)
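A minimal sketch of where the saving comes from, assuming the lookup in question computes distances between flattened latents and codebook entries (function names here are illustrative, not the actual diffusers code): the naive broadcast builds an `(N, K, D)` difference tensor, while `torch.cdist` produces the `(N, K)` distance matrix directly.

```python
import torch

def nearest_codebook_naive(z, codebook):
    # Broadcasting materializes a full (N, K, D) intermediate:
    # large for big latents and codebooks.
    diff = z[:, None, :] - codebook[None, :, :]
    dist_sq = (diff ** 2).sum(dim=-1)          # (N, K) squared distances
    return dist_sq.argmin(dim=1)

def nearest_codebook_cdist(z, codebook):
    # torch.cdist returns the (N, K) Euclidean distance matrix without
    # materializing the (N, K, D) difference tensor.
    dist = torch.cdist(z, codebook)
    return dist.argmin(dim=1)

z = torch.randn(8, 4)          # N=8 flattened latent vectors, D=4
codebook = torch.randn(16, 4)  # K=16 codebook entries
assert torch.equal(nearest_codebook_naive(z, codebook),
                   nearest_codebook_cdist(z, codebook))
```

Since sqrt is monotonic, the argmin over Euclidean distances matches the argmin over squared distances, so the replacement changes memory use but not the selected indices.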