
Getting JoePenna/Dreambooth-Stable-Diffusion to run in a 16GB Google Colab #51

Closed
jslegers opened this issue Oct 6, 2022 · 2 comments

Comments


jslegers commented Oct 6, 2022

I'm currently using @ShivamShrirao's fork of huggingface/diffusers (ShivamShrirao/diffusers), and it runs perfectly fine from my own customized Google Colab notebook.
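For reference, here is roughly how I launch the training from a Colab cell so it stays under 16GB: 8-bit Adam, gradient checkpointing and fp16 enabled, batch size 1. This is only a sketch — the flag names are the ones I recall from the fork's examples/dreambooth script, and the paths and prompts are placeholders, so double-check everything against that fork's README:

```python
# Rough sketch (not the canonical command): build the train_dreambooth.py
# invocation with the usual memory-saving switches and run it from a Colab cell.
# Flag names are assumed from ShivamShrirao/diffusers' examples/dreambooth
# script; model name, paths and prompts below are placeholders.
import subprocess

cmd = [
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--instance_data_dir", "/content/data/instance",
    "--class_data_dir", "/content/data/class",
    "--output_dir", "/content/output",
    "--instance_prompt", "a photo of sks person",
    "--class_prompt", "a photo of a person",
    "--with_prior_preservation", "--prior_loss_weight", "1.0",
    "--resolution", "512",
    "--train_batch_size", "1",
    "--gradient_accumulation_steps", "1",
    "--gradient_checkpointing",        # trade extra compute for lower VRAM
    "--use_8bit_adam",                 # bitsandbytes 8-bit optimizer
    "--mixed_precision", "fp16",       # half-precision training
    "--learning_rate", "5e-6",
    "--lr_scheduler", "constant",
    "--lr_warmup_steps", "0",
    "--num_class_images", "200",
    "--max_train_steps", "800",
]
subprocess.run(cmd, check=True)
```

In my experience a configuration along these lines fits on the 16GB T4 that Colab typically assigns.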

Could someone tweak this repo so it can run on 16GB VRAM as well? Alternatively, since I'm especially interested in @kanewallmann's addition of support for multiple concepts & classes (kanewallmann/Dreambooth-Stable-Diffusion), could someone port that feature to ShivamShrirao's diffusers fork (ShivamShrirao/diffusers)?
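In case it helps whoever looks into this, here is a sketch of what I mean by multiple concepts & classes: one entry per concept/class pair, written out as JSON. The field names and the `--concepts_list` flag I mention are assumptions on my part — check the fork's train_dreambooth.py for the fields it actually expects.

```python
# Hypothetical multi-concept configuration: each entry pairs an instance
# (the new subject being taught) with a regularization class. Field names
# are assumed; verify against the training script before using.
import json

concepts = [
    {
        "instance_prompt": "photo of zwx dog",
        "class_prompt": "photo of a dog",
        "instance_data_dir": "/content/data/zwx",
        "class_data_dir": "/content/data/dog",
    },
    {
        "instance_prompt": "photo of ukj toy",
        "class_prompt": "photo of a toy",
        "instance_data_dir": "/content/data/ukj",
        "class_data_dir": "/content/data/toy",
    },
]

with open("concepts_list.json", "w") as f:
    json.dump(concepts, f, indent=2)  # then pass e.g. --concepts_list concepts_list.json
```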

Related:

@djbielejeski (Collaborator)

Shivam's repo is diffusers-based; I'm not sure what it would take to get some of those optimizations into this repo.

@jslegers (Author)

@djbielejeski:

So you just close the issue as "completed"? That's not very helpful or useful...

So I take it this version of Dreambooth is not diffusers-based? Could you enlighten us on how it is implemented, then? Maybe that could shed some light on potential optimisations...
