OOM #9
Hi, thanks for your interest.
Yes. Although I don't have many test cases, I believe this script can yield reasonable super-resolution results in the vast majority of cases, with almost no boundary artifacts. The only problem now is that the inference time is too long: the 4K example above took more than an hour in total. Is there any room to optimize the inference time, or did I misuse your script?
The inference time can be very long for large resolutions, and sometimes we observe boundary artifacts; it depends on the content. BTW, another thing you can try if you are interested is to add multi-GPU support. Although the batch size is 1, since we divide the image into multiple tiles, it is still possible to process them separately. Just make sure they use the same seed.
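The tiling idea above can be sketched in plain Python: split the image into overlapping tiles and hand them out round-robin to the available GPUs. This is a minimal illustration, not StableSR's actual code; the tile size, overlap, and round-robin policy are all assumptions, and the shared-seed requirement from the comment above would apply to each worker's sampler.

```python
# Hypothetical sketch: divide an H x W image into overlapping tiles and
# assign them round-robin across N GPUs. Tile size 512 and overlap 64
# are illustrative defaults, not StableSR's settings.
def tile_coords(h, w, tile=512, overlap=64):
    """Return (top, left, bottom, right) boxes covering the full image."""
    stride = tile - overlap
    ys = list(range(0, max(h - tile, 0) + 1, stride)) or [0]
    xs = list(range(0, max(w - tile, 0) + 1, stride)) or [0]
    # Add a final row/column of tiles flush with the image border if the
    # regular stride does not reach it.
    if ys[-1] + tile < h:
        ys.append(h - tile)
    if xs[-1] + tile < w:
        xs.append(w - tile)
    return [(y, x, min(y + tile, h), min(x + tile, w)) for y in ys for x in xs]


def assign_to_gpus(boxes, n_gpus):
    """Round-robin assignment; every worker must sample under the same seed."""
    return {gpu: boxes[gpu::n_gpus] for gpu in range(n_gpus)}


boxes = tile_coords(4096, 3200)        # roughly the 4K target from this issue
work = assign_to_gpus(boxes, n_gpus=4)
```

Each GPU would then run the diffusion sampler on its own list of tiles, and the overlapping borders would be blended when stitching the output back together.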
I will try decreasing the sampling steps; undoubtedly, this will accelerate inference. This is the only SR method I have seen that supports any input size and any upscale factor, works on both in-the-wild and AIGC images, and has almost the best results. Thanks again for your amazing work. I will continue to follow this issue.
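For reference, lowering the step count in the sr_val_ddpm_text_T_vqganfin_oldcanvas.py command from the issue body might look like this (50 steps is an illustrative value, not a recommendation; the speed/quality trade-off would need testing):

```shell
# Same command as the issue, with fewer DDPM steps to speed up inference.
python sr_val_ddpm_text_T_vqganfin_oldcanvas.py \
  --ckpt ckpt/stablesr_000117.ckpt \
  --vqgan_ckpt ckpt/vqgan_cfw_00011.ckpt \
  --init-img inputs/test_example/ \
  --outdir output \
  --ddpm_steps 50 \
  --dec_w 0.5
```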
Amazing project!!
I used a 1024×800 image and executed the following command:
python sr_val_ddpm_text_T_vqganfin_oldcanvas.py --ckpt ckpt/stablesr_000117.ckpt --vqgan_ckpt ckpt/vqgan_cfw_00011.ckpt --init-img inputs/test_example/ --outdir output --ddpm_steps 200 --dec_w 0.5
By default, I hope to obtain an output with a resolution of 4K, but I got:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 55.62 GiB (GPU 0; 79.19 GiB total capacity; 31.25 GiB already allocated; 14.34 GiB free; 39.41 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
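The traceback itself suggests trying `max_split_size_mb` to reduce allocator fragmentation. A minimal sketch of one way to set it from Python before torch is imported (the 512 MB value is an assumption, not a tuned recommendation):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read when PyTorch's CUDA caching allocator
# initializes, so it must be set before the first CUDA allocation --
# in practice, before importing torch in the inference script.
# 512 MB below is an illustrative value, not a tuned one.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```

Equivalently, one could export the variable in the shell before launching: `export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512`. Note this only mitigates fragmentation; the 55.62 GiB single allocation here suggests tiled inference is still needed for a 4K output.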
xformers is installed correctly:
and this is my test image: