LDM Running out of memory #15
See 61fcffd#diff-b10564ab7d2c520cdd0243874879fb0a782862c3c902ab535faabe57d5a505e1R78
When I tested on a 3090 (24 GB), I found that peak GPU memory usage still easily exceeded 8 GB.
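If you want to reproduce that measurement, here is a minimal sketch of how peak usage per pass can be tracked with PyTorch's allocator stats. `model_fn` is a placeholder for whatever callable runs the actual LDM forward pass in this repo:

```python
import torch

def report_peak_memory(model_fn, *args):
    """Run one inference pass and print the peak GPU memory it allocated.

    `model_fn` is a hypothetical stand-in for the repo's real entry
    point; substitute the actual sampler/inference call.
    """
    if torch.cuda.is_available():
        torch.cuda.reset_peak_memory_stats()
    with torch.no_grad():
        out = model_fn(*args)
    if torch.cuda.is_available():
        peak_gib = torch.cuda.max_memory_allocated() / 1024**3
        print(f"peak GPU memory: {peak_gib:.2f} GiB")
    return out
```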
Another option is to use fp16 for inference, but this requires re-exporting the TorchScript model, and the current model raises an error when run in fp16.
If anyone is interested, the code for converting the TorchScript model is here.
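For reference, re-exporting in fp16 would look roughly like the sketch below. All names here (`model`, `example_input`, the output path) are hypothetical; the real wrapper and example inputs come from the conversion script linked above, and fp16 tracing generally needs a CUDA device:

```python
import torch

def export_fp16_torchscript(model, example_input, out_path="ldm_fp16.pt"):
    """Sketch: cast a model to half precision and re-trace it to TorchScript.

    Assumes a CUDA device is available, since fp16 support on CPU is
    limited. `model` and `example_input` are placeholders.
    """
    model = model.half().eval().cuda()
    example_input = example_input.half().cuda()
    with torch.no_grad():
        traced = torch.jit.trace(model, example_input)
    traced.save(out_path)
    return traced
```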
Clearing the cache definitely helps; the app hasn't crashed so far. Now I'm trying to see if there's a way to make the LDM mode faster.
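For anyone else hitting this, the cache clearing between strokes can be as simple as the following. Note that `empty_cache()` only returns cached allocator blocks to the driver; it can't free tensors that are still referenced:

```python
import gc

import torch

def clear_gpu_cache():
    """Release cached allocator blocks between strokes.

    Drops dead Python references first, then asks PyTorch's caching
    allocator to hand unused blocks back to the driver, which reduces
    fragmentation-related OOMs.
    """
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```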
The new model is strange: on the second stroke it also re-converts the area of the first stroke. There must be a typo somewhere in the code.
That's correct. Can confirm that this happens. |
It does get nicer results, though. The first model can't detect circular patterns/ovals; it works well with perspective lines and straight patterns. It would be great to have it trained on circles too, like the new model. The new model can look more artificial at times, but it creates new details and the results are very impressive. It's one stroke only, though: the second stroke often just doesn't work on my GTX 1080 Ti (12 GB), so I make sure to cover all areas with a single stroke.
How can we change the model to one downloaded from https://github.com/CompVis/latent-diffusion or https://disk.yandex.ru/d/EgqaSnLohjuzAg?
|
I understand that the LDM model is memory-intensive, but I think something peculiar is happening: it always OOMs on the third stroke. I tried lowering the steps, but it doesn't matter. Regardless of step count or stroke size, the model crashes on the third stroke with an OOM.
Wondering if there might be a fix for that? I'm on an RTX 3080.
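One guess, since it fails after a fixed number of strokes rather than at a fixed stroke size: each stroke's result may be keeping its autograd graph (or intermediate tensors) alive, so memory grows per stroke until the third one tips it over. A hedged sketch of the fix, with `model`, `latent`, and `steps` as placeholders for the repo's actual sampler call:

```python
import torch

def run_stroke(model, latent, steps):
    """Run one stroke's denoising loop without retaining autograd state.

    `inference_mode` prevents graph construction entirely, and detaching
    the result to CPU ensures nothing from this stroke pins GPU memory
    after it returns. All argument names are illustrative placeholders.
    """
    with torch.inference_mode():
        for _ in range(steps):
            latent = model(latent)
    return latent.detach().cpu()
```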