Hi, thanks for sharing your great work!
I am trying to run the evaluation on scan114 only (I have not yet had the disk space to download the other datasets). However, I encounter a CUDA out-of-memory runtime error after running the following command:
`python run.py --type evaluate --cfg_file configs/enerf/dtu_pretrain.yaml enerf.cas_config.render_if False,True enerf.cas_config.volume_planes 48,8 enerf.eval_depth True`
```
load model: /home/ENeRF-master/trained_model/enerf/dtu_pretrain/latest.pth
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loading model from: /home/anaconda3/lib/python3.9/site-packages/lpips/weights/v0.1/vgg.pth
  0%|          | 0/4 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "/home/ENeRF-master/run.py", line 111, in <module>
    globals()['run_' + args.type]()
  File "/home/ENeRF-master/run.py", line 70, in run_evaluate
    output = network(batch)
  File "/home/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "lib/networks/enerf/network.py", line 96, in forward
    ret_i = self.batchify_rays(
  File "lib/networks/enerf/network.py", line 49, in batchify_rays
    ret = self.render_rays(rays[:, i:i + chunk], **kwargs)
  File "lib/networks/enerf/network.py", line 40, in render_rays
    net_output = nerf_model(vox_feat, img_feat_rgb_dir)
  File "/home/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ENeRF-master/lib/networks/enerf/nerf.py", line 40, in forward
    x = torch.cat((x, img_feat_rgb_dir), dim=-1)
RuntimeError: CUDA out of memory. Tried to allocate 774.00 MiB (GPU 0; 23.70 GiB total capacity; 1.13 GiB already allocated; 321.56 MiB free; 1.45 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
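From the traceback, the rays are already rendered in chunks (`batchify_rays` in `lib/networks/enerf/network.py` slices `rays[:, i:i + chunk]`), so lowering that chunk size is one common way to reduce peak memory. A minimal sketch of the chunking pattern, with plain lists instead of tensors (the function name and signature here are illustrative, not the repo's exact API):

```python
def batchify(fn, rays, chunk):
    """Run `fn` over `rays` in slices of size `chunk` and merge the results.

    A smaller `chunk` lowers peak memory per forward pass at the cost of
    more iterations; this mirrors the loop visible in the traceback.
    """
    out = []
    for i in range(0, len(rays), chunk):
        out.extend(fn(rays[i:i + chunk]))
    return out


# Toy usage with plain Python lists (no GPU needed):
doubled = batchify(lambda xs: [2 * x for x in xs], list(range(8)), chunk=3)
```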
I have tried adding `os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"` at the beginning of run.py, but I get the exact same error. Any suggestions on how to resolve this?
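In case the ordering matters: as far as I understand, `PYTORCH_CUDA_ALLOC_CONF` is only read when PyTorch initializes its CUDA caching allocator, so the assignment needs to run before any CUDA work happens. This is the placement I used at the very top of run.py:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read when PyTorch initializes the CUDA
# caching allocator; the safest placement is before torch is imported,
# since setting it after CUDA has been initialized has no effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# ... `import torch` and the rest of run.py follow below ...
```

An alternative that avoids ordering issues entirely is to export the variable in the shell before launching, e.g. `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 python run.py ...`.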
Thank you!