CUDA out of memory error #2
Hello :) I have a question. When running

```
python experiment_scripts/train_sdf_ibr.py --config_filepath configs/nlrpp_dtu.txt
```

I run out of memory quickly. I am running on a GeForce RTX 2080 SUPER GPU with about 6000 MiB of VRAM available. Is there a way to reduce the batch size? If I understand correctly, the dataloader in train_sdf_ibr.py already uses a batch_size of 1. Thank you in advance!
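For reference, free VRAM on the device can be confirmed with a short snippet (generic PyTorch, not part of the MetaNLR++ codebase; torch.cuda.mem_get_info requires a reasonably recent PyTorch release):

```python
import torch

# Query the default CUDA device: name, plus free and total memory in bytes.
props = torch.cuda.get_device_properties(0)
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
print(f"{props.name}: {free_bytes / 1024**2:.0f} MiB free "
      f"of {total_bytes / 1024**2:.0f} MiB total")
```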
Yes, 6 GB of VRAM is going to be quite low for MetaNLR++ (we trained on GPUs with either 24 GB or 48 GB). The simplest place to start is to decrease the size of the models, especially the image encoder/decoder (the MLP is not that large in comparison). Operating on lower-resolution images is another way to significantly reduce the memory overhead; you can set the load_im_scale parameter in the config to automatically load the DTU dataset at a lower resolution.
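As a rough sketch of what that change looks like, only load_im_scale is taken from the comment above; the variant file name is hypothetical, and the exact key names and defaults should be checked against configs/nlrpp_dtu.txt in the repo:

```
# Hypothetical low-memory variant of configs/nlrpp_dtu.txt.
# load_im_scale rescales the DTU images at load time; 0.5 halves each
# image dimension, cutting per-image memory roughly by a factor of four.
load_im_scale = 0.5
```

Shrinking the image encoder/decoder, as suggested above, would similarly be done through whatever width/channel options the config exposes for those networks.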
I'll leave it open in case anyone else has this same question and would like to see the discussion.