Simple and fast Colab version, fast-stable-diffusion, +25% speed increase + memory efficient #870
Replies: 4 comments 47 replies
-
Appears to be broken; upon image generation it errors out with `CUDA Error: No kernel image is available for execution on the device`. Stdout log below: 0% 0/20 [00:03<?, ?it/s]
-
Hi. Thank you for your optimisation implementation. I don't use Gdrive for my SD; I only use OneDrive from Microsoft. Is there a simple way to install it in the specific folder where I have my AUTOMATIC repo, while keeping all the models (already downloaded) intact? Thanks.
-
Without gdrive: https://github.com/t-e-r-m/Portable-Stable-Diffusion-Notebook
-
Hi, I tried your colab and ran all the cells. I imported the model.ckpt by moving the one I already had from my own AUTOMATIC version on my Drive. But when I run the last cell I get this issue: When I want to rerun, do I need to run all the cells again every time?
-
Colab adaptation for the AUTOMATIC1111 WebUI version of Stable Diffusion, implementing the optimization suggested by https://github.com/MatthieuTPHR in huggingface/diffusers#532: using the MemoryEfficientAttention implementation from xformers (cc. @fmassa, @danthe3rd, @blefaudeux) to both speed up cross-attention and decrease its GPU memory requirements.
All you have to do is enter your huggingface token once and you're all set; the colab will install the repos and the models inside Gdrive, so loading will be fast every time you use it. Enjoy!!
from : https://github.com/TheLastBen/fast-stable-diffusion
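For context, the core idea behind memory-efficient attention is to avoid materializing the full n×n attention score matrix at once; xformers additionally uses fused CUDA kernels, which a pure-Python sketch cannot capture. Below is a minimal NumPy illustration of the chunking idea only — the function names `naive_attention` and `chunked_attention` are hypothetical, not part of xformers or the webui:

```python
import numpy as np

def naive_attention(q, k, v):
    # Standard attention: materializes the full (n_q, n_k) score matrix.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def chunked_attention(q, k, v, chunk=4):
    # Memory-efficient variant: process queries in chunks, so only a
    # (chunk, n_k) slice of the score matrix exists at any one time.
    # The result is mathematically identical to naive_attention.
    outs = []
    for i in range(0, q.shape[0], chunk):
        qc = q[i:i + chunk]
        scores = qc @ k.T / np.sqrt(q.shape[-1])
        scores -= scores.max(axis=-1, keepdims=True)
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        outs.append(weights @ v)
    return np.concatenate(outs, axis=0)
```

In practice the colab patches the webui's cross-attention layers to call into xformers; with the diffusers library, the same optimization can be enabled via `pipe.enable_xformers_memory_efficient_attention()`.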