CUDA out of memory error #2

Open
svsambandam opened this issue Feb 8, 2022 · 3 comments

@svsambandam

Hello :)

When running `python experiment_scripts/train_sdf_ibr.py --config_filepath configs/nlrpp_dtu.txt` I run out of memory quickly. I am running on a GeForce RTX 2080 SUPER GPU with, if I understand correctly, about 6000 MiB available. Is there a way to reduce the batch size? I think the dataloader in `train_sdf_ibr.py` already uses a `batch_size` of 1. Thank you in advance!
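
As a quick sanity check of what PyTorch actually sees on the card (a generic PyTorch snippet, nothing specific to this repo):

```python
import torch

# Print the total memory of the first visible GPU as reported to PyTorch.
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**2:.0f} MiB total")
```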

@alexanderbergman7
Owner

Yes, 6 GB of VRAM is going to be quite low for MetaNLR++ (we trained on GPUs with either 24 GB or 48 GB). The simplest place to start is to decrease the size of the models, especially the image encoder/decoder (the MLP is not that large in comparison). Operating on lower-resolution images is also a way to significantly reduce the memory and compute overhead; you can change the `load_im_scale` parameter in the config to automatically load the DTU data at a lower resolution.
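
For example, a possible config change along those lines (the value below is only an illustration; check `configs/nlrpp_dtu.txt` for the exact option syntax and pick a scale that fits your memory budget):

```
# configs/nlrpp_dtu.txt (illustrative excerpt)
load_im_scale = 0.25   # load the DTU images at a quarter of their original resolution
```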

@alexanderbergman7
Owner

I'll leave it open in case anyone else has this same question and would like to see the discussion.

@ghost

ghost commented May 16, 2022

I have a question: I have eight GPUs with 32 GB of memory each, and I still get the error
`RuntimeError: CUDA out of memory. Tried to allocate 352.00 MiB (GPU 0; 31.75 GiB total capacity; 29.93 GiB already allocated; 83.69 MiB free; 30.19 GiB reserved in total by PyTorch)`
It looks like the other GPUs are not being used.
@alexanderbergman7
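
In case it is useful: the traceback above shows everything being allocated on GPU 0, which is PyTorch's default unless the model is explicitly distributed. Below is a minimal sketch of the generic `torch.nn.DataParallel` pattern for spreading a module across all visible GPUs (a standard PyTorch idiom, not necessarily something `train_sdf_ibr.py` supports out of the box):

```python
import torch
import torch.nn as nn

# Placeholder module; in practice this would be the model built by the training script.
model = nn.Linear(128, 128)

# Replicate the module across all visible GPUs; input batches are split along dim 0.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.cuda()

x = torch.randn(8, 128).cuda()
out = model(x)  # the forward pass now runs in parallel across the GPUs
```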
