I was trying to train the model on the owl figure data with:

```
python train.py /home/mea303/Documents/Python/NeRF_project/unisurf-main/configs/DTU/scan0122.yaml
```

but ran into this error:
```
Traceback (most recent call last):
  File "/home/mea303/Documents/Python/NeRF_project/unisurf-main/train.py", line 112, in <module>
    loss_dict = trainer.train_step(batch, it)
  ...
  File "/home/mea303/Documents/Python/NeRF_project/unisurf-main/model/network.py", line 112, in gradient
    gradients = torch.autograd.grad(
  File "/home/mea303/anaconda3/envs/unisurf/lib/python3.10/site-packages/torch/autograd/__init__.py", line 275, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 7.79 GiB total capacity; 2.13 GiB already allocated; 23.94 MiB free; 2.20 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
I'm using an RTX 3060 Ti with CUDA 11.3 on Ubuntu 20.04.
Is that graphics card not good enough? How do I fix this problem?
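One mitigation the error message itself suggests is setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which caps how large the allocator's cached blocks can be and reduces fragmentation. A minimal sketch (128 MiB is only an illustrative value, not a recommendation from the UNISURF authors):

```shell
# Cap the CUDA caching allocator's split size to fight fragmentation,
# as suggested by the OOM error text. Tune the value for your GPU.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Confirm the setting before launching training in the same shell.
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Then run `python train.py .../configs/DTU/scan0122.yaml` in that same shell. If the OOM persists, the usual fix is lowering the per-iteration workload (e.g. the number of rays or samples per batch, if the YAML config exposes such a setting), since fragmentation tuning cannot recover memory the model genuinely needs.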