How to perform training with our own dataset? Will it produce the same results with drone-captured images? #31

Closed
piyushsingh2k7 opened this issue May 19, 2022 · 1 comment

Comments

@piyushsingh2k7

No description provided.

@jmunkberg (Collaborator)

We need accurate camera poses and good foreground segmentation masks for high-quality results. Avoid motion blur, defocus blur, and changing lighting conditions (we assume constant lighting and optimize a single environment map from the images).

To prepare your own dataset, you can mostly follow https://github.com/bmild/nerf#generating-poses-for-your-own-scenes, but please note that we need the segmentation masks stored in the alpha channel of the input images.
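A minimal sketch of how such alpha-channel masks could be prepared, assuming the segmentation masks already exist as separate grayscale images; the directory names, file-naming convention, and the use of Pillow/NumPy are illustrative assumptions, not part of the project's tooling:

```python
# Sketch: pack per-image foreground masks into the alpha channel of RGBA PNGs.
# Assumes RGB photos in "images/" and matching grayscale masks in "masks/"
# with the same file stems; adjust paths/naming to your own capture setup.
import os
import numpy as np
from PIL import Image

image_dir = "images"      # RGB photos (hypothetical path)
mask_dir = "masks"        # binary/grayscale foreground masks (hypothetical path)
out_dir = "images_rgba"   # output RGBA images with mask in the alpha channel
os.makedirs(out_dir, exist_ok=True)

for name in sorted(os.listdir(image_dir)):
    stem, _ = os.path.splitext(name)
    rgb = Image.open(os.path.join(image_dir, name)).convert("RGB")
    mask = Image.open(os.path.join(mask_dir, stem + ".png")).convert("L")
    mask = mask.resize(rgb.size, Image.NEAREST)  # ensure matching resolution

    # Binarize so the alpha channel is a clean 0/255 foreground mask
    alpha = (np.array(mask) > 127).astype(np.uint8) * 255

    rgba = np.dstack([np.array(rgb), alpha])
    Image.fromarray(rgba, mode="RGBA").save(os.path.join(out_dir, stem + ".png"))
```

The resulting RGBA images can then go through the pose-generation steps from the NeRF instructions linked above.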
