
Does your code work on NERF synthetic dataset ? #6

Closed
phongnhhn92 opened this issue Apr 14, 2020 · 10 comments

@phongnhhn92

Hi, I have noticed that the original NeRF paper trained and tested their method on synthetic datasets (cars, drums, ...), so I wonder if your code works on those datasets?

@yenchenlin
Owner

yenchenlin commented Apr 14, 2020

Hello, for now the answer is yes and no. I am able to get it to work on ship, hotdog, and materials by copying the config for lego. Results attached:

[GIFs: rendered results for ship, hotdog, and materials]

However, it currently doesn't work for drums. I think we could debug further if the authors released the training configs for all the scenes; right now we only have access to the configs for lego and fern.

@phongnhhn92
Author

I didn't run your code on drums, but I suspect the images in the drums set have a transparent background. If you check images from the other sets, they all have a white background. Why don't you use this command to convert those drums images to a white background:

convert image.png -background white -alpha remove -alpha off white.png

I hope it works ^^

@bmild

bmild commented Apr 15, 2020

I've just added the original configs for each dataset type here:
https://github.com/bmild/nerf/tree/master/paper_configs
We used the same settings across all scenes within one dataset type.

@yenchenlin
Owner

Thank you @bmild, I will find time to re-run these experiments.

@yenchenlin
Owner

yenchenlin commented Apr 15, 2020

BTW, @bmild can you comment on what @phongnhhn92 just said? My intuition is that as long as white_bkgd = True is set, no further data pre-processing (e.g., changing the images' background) is needed.

Additionally, what's the intuition behind no_batching = True for the synthetic dataset? My understanding is that batching should give us better gradients with less variance, since each data point in a batch is more independent.

@bmild

bmild commented Apr 15, 2020

All the blender datasets are stored as four-channel RGBA images, so the white_bkgd = True flag should be sufficient, yes (it takes care of changing the background).
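For reference, here is a minimal numpy sketch (not the repository's actual code) of the compositing that a flag like white_bkgd performs at load time; it is also what the ImageMagick conversion suggested above would bake into the files. The array shapes and values are made up for illustration:

```python
import numpy as np

# Hypothetical RGBA image batch: (N, H, W, 4), float values in [0, 1].
rgba = np.zeros((2, 4, 4, 4), dtype=np.float32)
rgba[..., 3] = 0.0  # fully transparent background pixels

rgb, alpha = rgba[..., :3], rgba[..., 3:]
# Composite onto a white background: where alpha is 0, the pixel becomes 1.0.
white_composited = rgb * alpha + (1.0 - alpha)
```

With this compositing applied when the images are loaded, a transparent background renders as white, so no separate preprocessing of the PNGs should be needed.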

no_batching here is just a matter of memory consumption: the batching implementation is very simple, instantiating the camera ray for every pixel in the dataset in one very large array and then shuffling. In practice we found that our learning rate was small enough that it didn't make a noticeable difference.
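To make the memory point concrete, here is a rough back-of-the-envelope sketch. The numbers are assumptions for illustration (100 training images at 800x800, and 9 float32 values per ray for origin, direction, and target RGB), not taken from the repo:

```python
import numpy as np

# Assumed dataset size: 100 training images at 800x800.
n_images, H, W = 100, 800, 800

# Precomputing (origin, direction, RGB) per pixel costs ~9 float32 per ray.
n_rays = n_images * H * W
bytes_needed = n_rays * 9 * 4  # float32 = 4 bytes
print(f"{bytes_needed / 1e9:.1f} GB for the full ray array")  # ~2.3 GB

# The flatten-and-shuffle itself, on a tiny stand-in array:
rays = np.arange(12, dtype=np.float32).reshape(12, 1)  # placeholder "rays"
rng = np.random.default_rng(0)
shuffled = rays[rng.permutation(len(rays))]  # one global shuffle up front
```

With no_batching, rays are instead drawn on the fly each iteration, so this large array never needs to exist.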

@yenchenlin
Owner

@bmild thanks for the answer!

@bmild

bmild commented Apr 16, 2020

Quick note: if you are seeing problems with the training diverging to only render a white background in the early stages (within 1000 iters), see my comments in this issue.

@yenchenlin
Owner

yenchenlin commented Apr 17, 2020

Hello @bmild , thanks for the update. I tried it and it did solve the "only rendering white background" issue at 1000 iters. Will test it out further.

@yenchenlin
Owner

@phongnhhn92 @bmild the following are results trained for 10k iterations; I will close the issue.

[GIFs: rendered results after 10k iterations]

HinataAoki pushed a commit to HinataAoki/nerf-with-heightdata that referenced this issue Aug 7, 2023