Crash at shape pre-training #28
Comments
Try reducing the MLP chunk size? That should solve your OOM problem. Also, consider trying out https://github.com/nerfstudio-project/nerfstudio as a drop-in replacement for NeRFactor's "Shape Pre-Training." nerfstudio will be much faster, and NeRFactor doesn't care who generated the surface points.
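For anyone unsure what "chunk size" means here: the idea is to run the MLP over the query points in fixed-size slices instead of all at once, so peak memory scales with the slice size rather than the total point count. A minimal sketch of the pattern; the names `mlp`, `points`, and `chunk_size` are placeholders, not NeRFactor's actual identifiers:

```python
import tensorflow as tf

def eval_in_chunks(mlp, points, chunk_size=65536):
    """Run `mlp` over `points` in fixed-size slices so that only
    `chunk_size` points are resident on the GPU at any one time."""
    outputs = []
    for i in range(0, int(points.shape[0]), chunk_size):
        outputs.append(mlp(points[i:i + chunk_size]))
    return tf.concat(outputs, axis=0)
```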
Why does the MLP chunk size affect CPU memory? It seems that the MLP chunk size only concerns GPU memory.
Hi! I have encountered the same problem. How did you solve it? @Woolseyyy
I set cache=False and shuffle_buffer_size=False for validation.
Do these settings affect the overall experiment?
Modify the code so that it only affects validation (see the sketch below).
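Something along these lines; `build_pipeline` and the mode strings are hypothetical helpers, not NeRFactor's actual function names, but the idea is the same: gate the memory-hungry tf.data ops on the mode.

```python
import tensorflow as tf

def build_pipeline(dataset, mode, shuffle_buffer_size=10000):
    """Apply cache/shuffle only in training; validation streams from disk."""
    if mode == 'train':
        dataset = dataset.cache()                       # keeps elements in RAM
        dataset = dataset.shuffle(shuffle_buffer_size)  # holds a buffer in RAM
    # 'vali' / 'test': no cache, no shuffle, so host memory stays flat
    return dataset.prefetch(tf.data.AUTOTUNE)
```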
I found no shuffle_buffer_size in shape.ini. Should I add shuffle_buffer_size = 0 there, or set no_shuffle = True by modifying base.py?
I am trying to reproduce the results but ran into a problem at "I. Shape Pre-Training". The script crashes during the validation step of shape pre-training. It looks like an OOM issue, because the log says "killed" and the crash goes away if I set shuffle_buffer_size=False in shape.ini. Any suggestions would help! I am using a machine with 4 RTX 3090 GPUs, 12 CPU cores, and 60 GB of memory. My dataset has 100 training samples and 7 validation samples, and there are 120 test, 99 train, and 99 val entries in the surf_root directory.
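For context on why the shuffle buffer is a plausible culprit despite ample GPU memory: tf.data's shuffle() materializes buffer_size fully decoded elements in host (CPU) memory before yielding any of them, so with per-element tensors of a few megabytes the buffer alone can approach this machine's 60 GB. A back-of-envelope with purely illustrative numbers, not measured from NeRFactor:

```python
# Hypothetical sizes: shuffle(buffer_size) holds `buffer_size` decoded
# elements in CPU RAM, independent of GPU memory.
element_bytes = 512 * 512 * 3 * 4   # e.g., one float32 RGB map per element
buffer_size = 10_000                # illustrative buffer size
print(f"shuffle buffer alone holds ~{element_bytes * buffer_size / 2**30:.0f} GiB")
# -> shuffle buffer alone holds ~29 GiB
```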