
How long will the inference of one out-of-distribution scene take #11

Closed
elenacliu opened this issue Aug 7, 2023 · 5 comments

Comments

@elenacliu

Besides issue #9, I wonder how long the inference of one out-of-distribution scene will take.

@MohammadJohari

Hi,

For an 800x800 resolution image, it takes approximately 2 minutes to render a novel view on an NVIDIA RTX 3090 GPU. This is with the default settings, which use 9 input images.

@basit-7

basit-7 commented Oct 29, 2023

Hello @MohammadJohari,

1- Does Table 1 in your paper show results for 9 input source images for all the methods (MVSNeRF, IBRNet)?

2- Is there a way to achieve faster inference?

@MohammadJohari

Hello,

  1. No, Table 1 shows the results from their original papers/code with their publicly available pre-trained models.
  2. The simplest hack to reduce the inference time is to reduce the number of input sources from 9 to something like 5 or 6. Most of the computation time is spent in the attention blocks, whose cost correlates strongly with the number of input sources (see the sketch after this comment).

Kind regards,
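
For illustration, here is a minimal PyTorch sketch of the scaling described above: self-attention over per-view feature tokens, where the sequence length equals the number of input source views. The shapes, dimensions, and function names are hypothetical placeholders, not GeoNeRF's actual implementation; the point is only that the attention matrix is V x V for V views, so its cost drops roughly quadratically when going from 9 views to 5 or 6.

```python
# Hypothetical sketch (not GeoNeRF's code): self-attention cost vs. number
# of source views V. One feature token per view for each ray sample, so the
# token sequence length is V and the attention matrix is V x V.
import time
import torch

def attention_over_views(num_views, num_rays=4096, feat_dim=64, num_heads=4):
    # Placeholder dimensions; real per-view feature sizes will differ.
    attn = torch.nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
    tokens = torch.randn(num_rays, num_views, feat_dim)  # (batch, seq, dim)
    start = time.perf_counter()
    with torch.no_grad():
        out, _ = attn(tokens, tokens, tokens)  # self-attention across views
    return time.perf_counter() - start

for v in (9, 6, 5):
    print(f"{v} views: {attention_over_views(v) * 1000:.1f} ms")
```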

@basit-7

basit-7 commented Oct 29, 2023

Thanks for the quick response.

Does reducing the number of source images significantly affect the quality of the rendered output?

@MohammadJohari

I would refer you to Section 4.2 and Table 2 of our paper to get some insights.
