RuntimeError: CUDA out of memory. #19
Hi, there are four stages in our processing pipeline. At which stage do you run into this issue?
The first stage.
@grenaud try using NVtop, just to get a full view of what is using your GPU and how much of its memory each process consumes.
Amazing tool, thank you! It seems the program tries to allocate all of my 5.8 GiB of memory; my OS plus applications take very little of it. But thank you for the suggestion!
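Alongside NVtop, PyTorch can report the same numbers from inside the process. A minimal sketch (the helper names are mine, not part of this repo; `report_gpu_memory` assumes a CUDA-enabled PyTorch ≥ 1.10, which added `torch.cuda.mem_get_info`):

```python
def fmt_gib(n_bytes):
    """Format a byte count as GiB with two decimals."""
    return f"{n_bytes / 1024**3:.2f} GiB"

def report_gpu_memory(device=0):
    """Print free/total device memory plus PyTorch's own bookkeeping.
    The torch import is local so fmt_gib stays usable without torch."""
    import torch  # only needed when actually querying the GPU
    free, total = torch.cuda.mem_get_info(device)
    print("device free:", fmt_gib(free))
    print("device total:", fmt_gib(total))
    print("torch allocated:", fmt_gib(torch.cuda.memory_allocated(device)))
    print("torch reserved:", fmt_gib(torch.cuda.memory_reserved(device)))
```

"allocated" here is memory held by live tensors, while "reserved" also includes the caching allocator's reusable blocks, which is why the two differ in the error message.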
I just ran into the same issue; I was able to fix it with the following steps:
I would still love to be able to process full-resolution pictures if anyone has a solution.
Facing the same issue.
Machine:
Command:
Error:
Same error:
A simple workaround is to reduce your image size to 640x480 or below. Moreover, if you are using '--with_scratch', GPU memory use increases dramatically. But it is a pity to have to downscale your photo; with DeOldify this never happens.
You can also get 14 GB of GPU memory in Google Colab:
The crash happens at the step where the image is fed into the UNet model. I didn't manage to process big images, but it does work in Colab with images whose sides are around 700 px.
I tried with a subscription and a Tesla V100:
And still the same issue (even without the --with_scratch option):
How can this be solved? I don't want to downscale images. It works with small images, but that is not right: we fix one thing while making another worse.
Hello. I tried with 16 GB and with 32 GB, but in both cases we see "Tried to allocate 13.41 GiB".
I also got the same error while executing Stage 1 (Running Stage 1: Overall restoration) with an input image of size 2414×3632. However, it worked when I reduced the longer side of the image to 640 pixels while maintaining the same aspect ratio. I really appreciate the work of the authors; this is indeed an amazing tool for hobbyist photographers and ML enthusiasts.
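That resize step (longer side to 640 px, aspect ratio preserved) can be sketched as follows. The size arithmetic is plain Python; `downscale_image` assumes Pillow is installed, and both function names are mine, not part of this repo:

```python
def fit_longer_side(width, height, target=640):
    """Return (new_w, new_h) with the longer side scaled down to `target`,
    preserving aspect ratio. Images already small enough are left untouched."""
    longer = max(width, height)
    if longer <= target:
        return width, height
    scale = target / longer
    return max(1, round(width * scale)), max(1, round(height * scale))

def downscale_image(src_path, dst_path, target=640):
    """Resize a file on disk; needs Pillow (import kept local)."""
    from PIL import Image
    img = Image.open(src_path)
    img.resize(fit_longer_side(*img.size, target), Image.LANCZOS).save(dst_path)
```

For the 2414×3632 example above, `fit_longer_side(2414, 3632)` gives 425×640.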
RTX 3070 has the same error!
NVtop doesn't seem to work on Windows or in Ubuntu under WSL. I'll try later when I boot into Ubuntu.
I'm new to all of this, but I'm wondering if some of the tensors could be moved to the CPU/system memory while other tensors are processed through the inference (forward) pass of the network. To get debugging working in PyCharm, I've had to rewrite some of the code to replace [...]. I've also tried to get this all working in a virtual environment.
I wanted to get this working on my system and was willing to upgrade one of my GPUs if it would get past the memory issue, but it looks like maybe that won't help. Has anyone else dug into the code? A few of us in this ML Discord group have been trying to hack on this code, with minimal luck. https://discord.gg/2Qq259hE
I'm running this on an Ubuntu 20.04 system with a Titan V and a 3070 (on top of Lambda Stack) and run into the same issue described. This seems to be one of the biggest things holding this project back, and I wish I could help more. I'm getting ready to explore deeper to see if I can find some low-hanging fruit to reduce the amount of memory consumed at runtime. @jDavidnet that discord link is no longer active. Would you be so kind as to send a new one? I too am interested in trying to resolve this and more than happy to help where I can :)
https://discord.gg/9hGqmt88
My wife just gave birth to our second child. I’m happy to chat with someone knowledgeable about my findings or to do a screen share session in maybe a week or two when things settle down.
🥳 Congrats on your newborn! Definitely take some time and enjoy it with your family. And thanks for the Discord link. In the meantime, I tested @mebelz's item #1 (changed
Hello, we just redesigned the network to support high-resolution images. You are welcome to give it a try. You can run the code with these arguments:
I get the following error:
RuntimeError: CUDA out of memory. Tried to allocate 88.00 MiB (GPU 0; 5.80 GiB total capacity; 4.14 GiB already allocated; 154.56 MiB free; 4.24 GiB reserved in total by PyTorch)
Is there a way to allocate more memory? I do not understand why 4.14 GiB is already allocated.
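For what it's worth, the 4.14 GiB "already allocated" is memory held by live tensors inside PyTorch's caching allocator (not necessarily a leak), while the 4.24 GiB "reserved" also counts the allocator's cached blocks; the rest of the 5.80 GiB card is taken by the CUDA context and other processes, which is why only 154.56 MiB remained free. A small sketch that pulls those figures out of such a message, to compare runs (the regex and function name are mine, not part of PyTorch):

```python
import re

# Matches the figure-bearing part of PyTorch's CUDA OOM message.
OOM_RE = re.compile(
    r"Tried to allocate (?P<req>[\d.]+) (?P<req_u>[MG]iB).*?"
    r"(?P<total>[\d.]+) GiB total capacity; "
    r"(?P<alloc>[\d.]+) GiB already allocated; "
    r"(?P<free>[\d.]+) MiB free; "
    r"(?P<reserved>[\d.]+) GiB reserved"
)

def parse_oom(msg):
    """Extract the memory figures (all converted to GiB) from an OOM message."""
    m = OOM_RE.search(msg)
    if m is None:
        raise ValueError("not a recognised CUDA OOM message")
    requested = float(m["req"])
    if m["req_u"] == "MiB":
        requested /= 1024
    return {
        "requested": requested,
        "total": float(m["total"]),
        "allocated": float(m["alloc"]),
        "free": float(m["free"]) / 1024,
        "reserved": float(m["reserved"]),
    }
```

Here `reserved - allocated` is roughly what `torch.cuda.empty_cache()` could hand back to the driver, which for this message is only about 0.1 GiB.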