
Insufficient hash resolution harming performance #23

Closed
xmk2222 opened this issue Mar 11, 2022 · 6 comments


@xmk2222

xmk2222 commented Mar 11, 2022

I noticed that the desired_resolution of hash encoding is fixed in torch-ngp:

self.encoder, self.in_dim = get_encoder(encoding)

num_levels=16, level_dim=2, base_resolution=16, per_level_scale=1.3819, log2_hashmap_size=19, desired_resolution=2048,

While in instant-ngp, it is a parameter controlled by aabb_scale. When aabb_scale == 1, the desired_resolution = 2048, and when aabb_scale increases to 16, the desired_resolution increases to 32768. (The per_level_scale changes, too.)
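For reference, both values of per_level_scale follow from the per-level growth factor in the instant-ngp paper, b = exp((ln N_max − ln N_min) / (L − 1)). A small illustrative sketch (the function below is mine, not code from either repo; parameter names mirror torch-ngp's get_encoder):

```python
import math

def per_level_scale(desired_resolution, base_resolution=16, num_levels=16):
    # Growth factor b between hash-grid levels (instant-ngp paper, Eq. 3):
    # b = exp((ln N_max - ln N_min) / (L - 1))
    return math.exp(
        math.log(desired_resolution / base_resolution) / (num_levels - 1)
    )

# The fixed desired_resolution = 2048 reproduces the hard-coded 1.3819:
print(round(per_level_scale(2048), 4))    # 1.3819

# With aabb_scale = 16, desired_resolution grows to 32768, and so does b:
print(round(per_level_scale(32768), 4))   # 1.6625
```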

In my trial on a scene considerably larger than lego, this made a huge difference in performance (and memory as well).

@ashawkey
Owner

Thanks for reporting this. According to this line, desired_resolution should be scaled linearly with aabb_scale (bound in this implementation).
Another problem is that we don't have a multi-scale density grid, and exponential ray marching is not supported, which may lead to worse performance as well.
However, I currently don't have a dataset large enough to test these changes. Could you share the dataset's name if it is openly available?

@aoliao12138

NHR provides a multiview dataset for humans.

@ashawkey
Owner

@xmk2222 I have added the scaling in 086b15c, but on a naive test with lego (bound=16, scale=10), the results are still blurry. Maybe you could test on your dataset again?

@xmk2222
Author

xmk2222 commented Mar 14, 2022

@ashawkey Thanks! That is exactly what I meant. I will test it later.

However, I think there are still some differences:
In instant-ngp, the condition is:

  • aabb_scale: 16
  • desired_resolution: 32768
  • bounding_box: [-7.5, 8.5]

But in torch-ngp:

  • bound: 16
  • desired_resolution: 32768 (same as in i-ngp)
  • bounding_box: [-16, 16], which differs from the box used in i-ngp.

The corresponding code in instant-ngp:

https://github.com/NVlabs/instant-ngp/blob/692ce14f4f154c03c5784a944c625da94b08e292/src/testbed_nerf.cu#L2293:L2294

In my trial, the hash encoding has some tolerance to scaling, but maybe it would be better to strictly align with i-ngp, at least in the scaling?
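The two bounding-box conventions listed above can be written out explicitly. instant-ngp keeps the scene centered at 0.5 on each axis (hence [-7.5, 8.5] for aabb_scale = 16), while torch-ngp's box is symmetric about the origin. The helper functions below are purely illustrative, not code from either repo:

```python
def ingp_box(aabb_scale):
    # instant-ngp: scene centered at 0.5 per axis, extending
    # aabb_scale / 2 on either side, e.g. 16 -> (-7.5, 8.5)
    return (0.5 - aabb_scale / 2, 0.5 + aabb_scale / 2)

def torch_ngp_box(bound):
    # torch-ngp: box symmetric about the origin, e.g. bound = 16 -> (-16, 16)
    return (-bound, bound)

print(ingp_box(16))       # (-7.5, 8.5)
print(torch_ngp_box(16))  # (-16, 16)
```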

@ashawkey
Owner

@xmk2222 You are right, this is a difference from instant-ngp. However, although we assume the scene is twice as large in each dimension, we also double the minimum step size in ray marching here. So it shouldn't matter much as long as bound is the same as aabb_scale.
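A minimal sketch of the compensation described above (not the actual ray-marching code in either repo): if the minimum step size scales linearly with bound, a ray crossing a twice-larger box still takes the same number of minimum-sized steps, so the sample budget per ray is unchanged.

```python
import math

def min_step_size(bound, num_steps=1024):
    # Illustrative: diagonal of a cube with side 2 * bound, divided by a
    # fixed per-ray step budget. Doubling bound doubles the step size.
    return 2 * bound * math.sqrt(3) / num_steps

# A ray spanning the box diagonal takes the same step count either way:
steps_small = 2 * 8 * math.sqrt(3) / min_step_size(8)
steps_large = 2 * 16 * math.sqrt(3) / min_step_size(16)
assert steps_small == steps_large == 1024
```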

@ashawkey
Owner

As the cascaded density grid has been implemented, I'll close this for now.
