about preprocess #56
Comments
Hi, we also simply resize the monocular outputs to 1200x1200 with padding for DTU images at 1200x1600. You could check it here: https://github.com/autonomousvision/monosdf/blob/main/preprocess/paded_dtu.py.
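For illustration only: a minimal sketch of centering an image on a padded canvas, which is the general idea behind the padding approach mentioned above. The exact logic lives in `preprocess/paded_dtu.py` and may differ; `pad_to_square` is a hypothetical helper, not part of the repo.

```python
# Hypothetical helper: compute the offsets needed to center a
# width x height image on a square canvas (side = max dimension),
# so a monocular-cue map and the original image share one canvas.

def pad_to_square(width: int, height: int):
    """Return (pad_left, pad_top, side) for center-padding."""
    side = max(width, height)
    pad_left = (side - width) // 2
    pad_top = (side - height) // 2
    return pad_left, pad_top, side

# DTU images are 1600 wide and 1200 high:
print(pad_to_square(1600, 1200))  # -> (0, 200, 1600)
```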
I still have a question: is my method more convenient than the approach in paded_dtu.py, since there is no need to modify the camera parameters?
You could just try it out.
How many experiments are averaged for the CD value on the DTU dataset reported in the paper? |
It's averaged over 15 scenes.
Hello! I'd like to ask a question.
Yes, as long as the width and height are multiples of 384.
Thank you very much for your reply!
Hello! I've got a question here.
Hi, Omnidata is not trained on high-resolution images, so it's not clear whether it can generalise in this case, and the reconstruction results might vary scene by scene.
Hi,
Thanks for the great work!
Since 384 is the input size of the Omnidata model and the DTU image size is 1200x1600: if I want to use monocular cues at the original size, can I first resize 1200x1600 -> 1152x1536, get the monocular cues, and then upsample them to 1200x1600?
Looking forward to your reply!
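The proposal above (shrink each dimension to the nearest multiple of 384, predict the cues at that resolution, then upsample back) can be sketched as follows. This is a minimal illustration; `nearest_multiple_down` and `cue_resolution` are hypothetical helpers, and the actual Omnidata inference call is not shown.

```python
# Sketch of the resize-to-multiple-of-384 idea: choose the largest
# per-dimension multiple of 384 that fits inside the original image,
# run the monocular-cue network there, then upsample the cues back
# to the original resolution.

def nearest_multiple_down(x: int, base: int = 384) -> int:
    """Largest multiple of `base` that is <= x."""
    return (x // base) * base

def cue_resolution(width: int, height: int):
    """Resolution at which to run the monocular-cue network."""
    return nearest_multiple_down(width), nearest_multiple_down(height)

# DTU: 1600 wide, 1200 high -> run cues at 1536 x 1152,
# then upsample the predicted maps back to 1600 x 1200.
print(cue_resolution(1600, 1200))  # -> (1536, 1152)
```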