NaN in training #17
Hi, I have encountered NaN during training in the following situations, and each can be avoided with the corresponding method:
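As a generic illustration of this kind of fix (not necessarily one of the cases referred to above), a common source of NaN in NeRF-style training is normalizing a vector whose norm can be exactly zero; clamping the norm with a small epsilon keeps both the forward and backward pass finite. A minimal PyTorch sketch with an illustrative function name:

```python
import torch

def safe_normalize(v: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Normalize vectors along the last dimension without producing NaN.

    A plain v / v.norm() returns NaN when the norm is exactly zero; clamping
    the norm from below keeps the division and its gradient finite.
    """
    return v / v.norm(dim=-1, keepdim=True).clamp_min(eps)
```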
Solved! Thank you for your advice!
Hi, what kind of data did you use? Is it similar to ZJU-MoCap (a multi-view dynamic dataset, including mask information)?
I use 4 cameras to record and extract images (1280x720, padded to 1280x1280 and resized to 1024x1024). I also use a segmentation model to get the mask information. I set the training input views in the yaml and the length of the source views in enerf.py to avoid out-of-range indices. At first I thought the calibration from EasyMocap was incorrect, but I double-checked it with Matlab and the two results are close.
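For reference, a minimal sketch of that padding/resize step, assuming OpenCV, a BGR frame, and a single-channel mask; the function name and file paths are illustrative:

```python
import cv2

def pad_and_resize(img_path, mask_path, out_size=1024):
    """Pad a 1280x720 frame to a square 1280x1280 canvas, then resize to 1024x1024.
    The mask from the segmentation model gets the same treatment so it stays aligned."""
    img = cv2.imread(img_path)                          # (720, 1280, 3)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)  # (720, 1280)

    h, w = img.shape[:2]
    side = max(h, w)                                    # 1280
    top = (side - h) // 2
    bottom = side - h - top
    left = (side - w) // 2
    right = side - w - left

    # Zero-pad to a square canvas; the same offsets apply to image and mask.
    img_sq = cv2.copyMakeBorder(img, top, bottom, left, right,
                                cv2.BORDER_CONSTANT, value=(0, 0, 0))
    mask_sq = cv2.copyMakeBorder(mask, top, bottom, left, right,
                                 cv2.BORDER_CONSTANT, value=0)

    # Nearest-neighbor for the mask so it stays binary after resizing.
    img_out = cv2.resize(img_sq, (out_size, out_size), interpolation=cv2.INTER_AREA)
    mask_out = cv2.resize(mask_sq, (out_size, out_size), interpolation=cv2.INTER_NEAREST)
    return img_out, mask_out
```

If the images are padded and resized this way, it may also be worth confirming that the camera intrinsics (principal point and focal length) are shifted and scaled by the same offsets and factor, since a mismatch between the calibration and the resized frames can also hurt PSNR.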
Try to use more source views (one input view is not enough). I also have some other suggestions:
I followed your advice and tried it, but the result seems the same: both the PSNR and the visualization are still bad.
Thank you so much for your help these days!
Hi, when I trained on my own dataset, an error occurred as below:
I set 'shuffle' to False to check whether some particular images in my dataset cause this error, but it still occurs randomly (mostly in the first epoch, though it once occurred in the second epoch while the first epoch seemed fine).
Do you have any idea? Thank you for your help!
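A minimal sketch of one way to localize the problem under these conditions, assuming a standard PyTorch training loop; `net`, `dataset`, and `compute_loss` are placeholders for the project's actual modules:

```python
import torch
from torch.utils.data import DataLoader

def find_first_nonfinite_batch(net, dataset, compute_loss, lr=5e-4):
    """Run one deterministic pass (shuffle=False) and stop at the first
    non-finite loss, so the offending sample index is reproducible."""
    torch.autograd.set_detect_anomaly(True)  # report the op that produced NaN/Inf in backward
    loader = DataLoader(dataset, batch_size=1, shuffle=False)
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)

    for step, batch in enumerate(loader):
        optimizer.zero_grad()
        loss = compute_loss(net(batch), batch)

        if not torch.isfinite(loss):
            # With shuffle=False and batch_size=1, `step` identifies the sample.
            print(f"non-finite loss at batch {step}: {loss.item()}")
            return step

        loss.backward()
        # Optional: clip gradients so one bad batch cannot blow up later iterations.
        torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=1.0)
        optimizer.step()
    return None
```

Running with `batch_size=1` and `shuffle=False` makes the failing index reproducible, so the corresponding image/mask pair can then be inspected directly.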