
CUDA is running out of memory; I want to specify the tile size, but I need to know what code to insert where #25

Open
tom21001112 opened this issue Aug 1, 2022 · 1 comment

Comments

@tom21001112

I'm using AnimationKit-AI on Google Colab and CUDA is running out of memory. I want to specify the tile size, but I need to know where to insert the code. What code do I change, and where?

Notebook URL:
https://colab.research.google.com/github/sadnow/AnimationKit-AI_Upscaling-Interpolation_RIFE-RealESRGAN/blob/main/AnimationKit_Rife_RealESRGAN_Upscaling_Interpolation.ipynb#scrollTo=MhMORNgduDyt

Error message:

"Error: CUDA out of memory. Tried to allocate 7.91 GiB (GPU 0; 14.76 GiB total capacity; 2.51 GiB already allocated; 2.81 GiB free; 10.90 GiB reserved in total by PyTorch). If reserved memory is more than allocated memory, try setting max_split_size_mb to prevent fragmentation. See the documentation on memory management and PYTORCH_CUDA_ALLOC_CONF.
If CUDA runs out of memory, try setting -tile to a smaller number.
Test 428 A00429"
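The error text itself also suggests setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF. A minimal sketch of how that could be done in a Colab cell before running inference; the value 128 is an illustrative assumption, not something recommended in this thread:

```shell
# Cap the PyTorch CUDA allocator's split block size to reduce fragmentation.
# 128 MiB is an example value; tune it for your GPU.
# In a Colab notebook cell, the equivalent is:
#   %env PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```

This only changes allocator behavior; if the model genuinely needs more memory than the GPU has, lowering the tile size (as below) is the more direct fix.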

@AIManifest



You would insert -tile 256 into the inference command, so !python inference_realesrgan.py becomes !python inference_realesrgan.py -tile 256. All you're adding is -tile 256; leave the rest of the command as is. Hope this helps.
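To make the edit concrete, here is a sketch of what the notebook cell would look like after the change. Any other flags the cell already contains stay exactly as they are; only the tile flag is new, and 256 can be lowered further (e.g. 128 or 64) if the out-of-memory error persists, at the cost of slower inference:

```shell
# Original cell (whatever flags it already has):
#   python inference_realesrgan.py <existing flags>
# After the edit, with the tile flag appended; 256 is the value
# suggested in this thread, smaller tiles use less GPU memory:
python inference_realesrgan.py -tile 256
```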
