Models turn to swiss cheese after >5000 iters #14
Training should already use a shuffled order of the training images, unless you have changed that in your fork. If it looks good up to 5k iterations, the problem seems to be in the second phase of training (where we switch from volumetric texturing with an MLP to standard 2D textures, and only optimize vertex positions, not topology). If you see divergence after the first part of training, one option may be to lower the learning rate in the second pass. Change from

Also, if your GPU memory allows, I would recommend running with batch size 4 or 8 to get less noisy gradients and better convergence.
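Both suggestions are config-level changes; a hedged sketch of the relevant fragment, assuming the JSON keys commonly seen in nvdiffrec configs (`batch`, `learning_rate`) — exact key names and accepted value forms may differ between versions and forks:

```json
{
    "batch": 8,
    "learning_rate": 0.01
}
```

The values here are illustrative, not recommendations from the maintainer's comment beyond "batch size 4 or 8" and "lower the learning rate".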
There seems to be an issue with the dataset. There are just 81 elements in the Note that disabling the assert means that the poses will be mapped to the first 81 image/mask files in the order enumerated by glob.glob(). If you have removed any other images (and I don't see any obvious problems in the last 7 images), the poses will be out of sync, causing corruption. I note that the error convergence is quite erratic and cyclical, which could indicate bad poses.

There are no pose <-> image path links in the LLFF format, so there is unfortunately a direct ordering dependency between the image files and the .npy file.
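Because LLFF pairs poses with images purely by position, a quick sanity check before training can catch the mismatch described above. A minimal sketch (hypothetical helper, assuming the usual layout of a `poses_bounds.npy` next to an `images/` directory; not part of nvdiffrec itself):

```python
import glob
import os

import numpy as np


def check_llff_alignment(scene_dir):
    """Verify poses_bounds.npy has exactly one pose per image file.

    LLFF has no pose <-> filename links, so deleting frames after the pose
    file was written silently shifts every remaining pose onto the wrong
    image. This check only catches count mismatches, not reordering.
    """
    poses = np.load(os.path.join(scene_dir, "poses_bounds.npy"))
    # Sort explicitly: glob.glob() order is filesystem-dependent.
    images = sorted(glob.glob(os.path.join(scene_dir, "images", "*")))
    assert poses.shape[0] == len(images), (
        f"{poses.shape[0]} poses but {len(images)} images -- "
        "re-run the pose estimation step after deleting frames"
    )
    return list(zip(images, poses))
```

If you cull blurry frames, re-running the pose step (or rebuilding `poses_bounds.npy`) is the safe fix; deleting images alone breaks the positional pairing.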
That would be precisely what caused it, matching this merge: Fyusion/LLFF#60
sdf_regularizer seems like it can be increased far higher than I expected, although I think blown-out or near-white textures cause issues (the same places where COLMAP can't map features seem to be where the mesh fails). A higher laplace_scale and lock_pos: true help, with lock_pos preventing deviation (or occasional destruction of the model at lower loss) between the dmtet pass and the final pass.
What loss is supposed to control regularity? I've been focusing on this relatively simple model to see what values work well for training on handheld video. The left-hand outer image looks like it fits the input images until 5k iterations (batch size 1) into training, but the actual model looks very irregular and finally collapses near the end of the training tests.
The COLMAP tracking (both exhaustive and sequential) looks like it maps accurately. The paper describes a loss meant to solve this issue, but I don't know whether the total loss must be reduced to account for batch size, or whether the loss for model regularity must be increased (and I don't know the config line for that).
30% of the dataset images were manually removed due to motion blur or being too far off the edges, which helped early training and slightly reduced early collapses.
I have included the dataset, the key iterations where collapse starts (0-1000 iters, 4000-6000 iters), and the final models in the zip below (updated with more compressed iter images):
https://files.catbox.moe/3v79c6.zip
Is the training scheme running through the dataset sequentially, so that the final iterations fail because of the images at the end? Both passes seem to fail near the end of the session. If so, randomizing the image order (when captured from video) would spread the bad frames out instead of destroying the model at the end of training.
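The fix being suggested here is a per-epoch random permutation of frame indices. A minimal sketch of that idea (illustrative only, not nvdiffrec's actual sampler; the maintainer's reply above says shuffling should already be the default):

```python
import numpy as np


def shuffled_batches(num_images, batch_size, seed=0):
    """Yield batches of image indices in a fresh random order each epoch.

    If frames are consumed sequentially, blurry frames clustered at the end
    of a video clip all hit the model late in training; reshuffling every
    epoch interleaves them with good frames throughout the run.
    """
    rng = np.random.default_rng(seed)
    while True:  # each pass through the inner loop is one epoch
        order = rng.permutation(num_images)
        for start in range(0, num_images, batch_size):
            yield order[start:start + batch_size]
```

With this scheme, every image is still visited exactly once per epoch, so no frame is over- or under-sampled; only the order changes.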