Loss did not drop when training the blender dataset #75
Some blender scenes struggle to converge from the beginning because of the issue described in bmild/nerf#29.
Thank you! That is really helpful! Could you provide your training settings for the blender scenes with a lot of white space, such as …
In my experiments, I managed to train only after many trials, until I got a lucky initialization that converges; i.e. if the initialization is bad and the loss doesn't decrease, I just stop and train again and again. Since I only wanted to test my code, I trained on a few scenes, not all of them. More specifically, I only trained …
@kwea123 Yes, softplus makes training stable. However, several scenes, like drums, mic, and ficus, still cannot converge. I think the reason might be the small foreground area. Could you validate this observation? Thank you!
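For context, the change I mean is the one discussed in bmild/nerf#29: applying softplus instead of ReLU to the raw density output. A minimal sketch of the idea (the function name is mine, not this repo's code):

```python
# Sketch of the density activation change from bmild/nerf#29:
# softplus keeps a non-zero gradient for negative raw values, so a bad
# initialization (all densities pushed negative) can still recover,
# whereas relu zeroes both the value and the gradient.
import torch
import torch.nn.functional as F

def raw_sigma_to_density(raw_sigma: torch.Tensor, use_softplus: bool = True) -> torch.Tensor:
    """Convert the network's raw sigma output to a non-negative density."""
    if use_softplus:
        return F.softplus(raw_sigma)
    return F.relu(raw_sigma)
```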
Hi,
I am using your provided Colab code to train my own data.
At first, I used LLFF to extract the camera poses of the images. Your code produced a wonderful result!
Then, I tried to use Blender to generate the ground-truth poses as transforms.json. I split my dataset into a train set (200 images) and a val set (100 images) with transforms_train.json and transforms_val.json. However, this time your Colab code did not work. I thought maybe I had generated a wrong transforms.json file, but when I tested with the mic dataset from nerf_synthetic, it still did not work.
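For reference, I generated the files to follow (as far as I understand it) the standard nerf_synthetic blender format. A trimmed Python sketch of what I write, with placeholder values; real files have one entry per image and the exact 4x4 camera-to-world matrices exported from Blender:

```python
# Sketch of a blender-style transforms file (placeholder values).
import json

transforms = {
    "camera_angle_x": 0.6911,           # horizontal field of view in radians
    "frames": [
        {
            "file_path": "./train/r_0",  # image path without extension
            "transform_matrix": [        # 4x4 camera-to-world matrix
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 4.0],
                [0.0, 0.0, 0.0, 1.0],
            ],
        },
    ],
}

with open("transforms_train.json", "w") as f:
    json.dump(transforms, f, indent=4)
```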
Your Colab code only covers the 360 inward-facing scene and the forward-facing scene, so I added a new block of code so that it can run the blender scenes (a sketch of that cell is shown below).
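Concretely, the cell I added looks roughly like this, written here as a Python sketch equivalent to running train.py from the Colab. The root_dir path and exp_name are placeholders, and the flag values are my best guess based on the repo README's training command for the synthetic scenes, so please correct me if the blender settings should be different:

```python
# Hypothetical Colab cell: launch train.py on a blender-style scene.
# Flag names/values are assumed from the repo README, not verified here.
import subprocess

cmd = [
    "python", "train.py",
    "--dataset_name", "blender",                   # blender-style dataset loader
    "--root_dir", "/content/nerf_synthetic/mic",   # placeholder scene path
    "--N_importance", "64",
    "--img_wh", "400", "400",
    "--noise_std", "0",
    "--num_epochs", "16",
    "--batch_size", "1024",
    "--optimizer", "adam", "--lr", "5e-4",
    "--lr_scheduler", "steplr",
    "--decay_step", "2", "4", "8",
    "--decay_gamma", "0.5",
    "--exp_name", "mic",                           # placeholder experiment name
]
subprocess.run(cmd, check=True)
```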
Part of the training log for the mic scene is shown below:
Epoch 1: 100% 3907/3915 [11:59<00:01, 5.43it/s, loss=0.093, train_psnr=12.5, v_num=2]
Validating: 0it [00:00, ?it/s]
From the log you can see both loss and val_loss are not decreasing, and I cannot extract any mesh from the model.
As you said you managed to train on the provided blender scenes, could you tell me where I went wrong?
Many thanks!