How long will the inference of one out-of-distribution scene take? #11
Comments
Hi,
For an 800x800-resolution image, it takes approximately 2 minutes to render a novel view on an NVIDIA RTX 3090 GPU. This is for the default setting, which uses 9 input images.
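As a rough illustration of how one might measure this per-view latency (a minimal sketch only; `render_view` is a hypothetical stand-in, not the repository's actual API, and the real call would run the network on the GPU):

```python
import time

def render_view(height, width, n_source_views):
    # Hypothetical stand-in for the model's novel-view renderer.
    # In practice this would invoke the network on the GPU with
    # `n_source_views` input images and return an HxW image.
    return [[0.0] * width for _ in range(height)]

def time_render(height=800, width=800, n_source_views=9):
    # Time a single novel-view render with a monotonic clock.
    start = time.perf_counter()
    image = render_view(height, width, n_source_views)
    elapsed = time.perf_counter() - start
    return image, elapsed

image, elapsed = time_render()
print(f"Rendered a {len(image)}x{len(image[0])} view in {elapsed:.4f} s")
```

When timing real GPU code, the render call should be followed by a device synchronization (e.g. `torch.cuda.synchronize()` in PyTorch) before reading the clock, since CUDA kernels launch asynchronously.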
Hello @MohammadJohari,
1- Does Table 1 in your paper show results for 9 input source images for all the methods (MVSNeRF, IBRNet)?
2- Is there a way to achieve faster inference?
Hello,
Kind regards,
Thanks for the quick response. Does reducing the number of source images significantly affect the quality of the rendered output?
I would refer you to Section 4.2 and Table 2 of our paper to get some insights.
Besides issue #9, I wonder how long the inference of one out-of-distribution scene will take.