
Reduce precision conversion when packing #124

Conversation

@thomasw21 (Contributor) commented Nov 24, 2022

Assume that:

  • ray_indices is always Long (int64). Indices are always int64 in torch, which makes indexing much easier.
  • packed_info is always Int (int32).

Depending on whether you agree, we could enforce this convention in the CUDA kernels as well; a sketch of what it looks like on the Python side is below.
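
For illustration, a minimal sketch of that convention with made-up tensor values (the `(start, count)` layout for packed_info is an assumption here, not taken from this PR):

```python
import torch

# Hypothetical packed layout: 6 samples spread across 3 rays.
# ray_indices maps each sample to its source ray; torch's advanced
# indexing expects int64 ("Long") indices, so keeping it Long avoids
# a dtype conversion on every lookup.
ray_indices = torch.tensor([0, 0, 1, 2, 2, 2], dtype=torch.long)

# packed_info holds per-ray bookkeeping (assumed here to be
# (start, count) pairs); it is metadata only, so int32 ("Int")
# suffices under this convention.
packed_info = torch.tensor([[0, 2], [2, 1], [3, 3]], dtype=torch.int32)

# Long indices can be used directly, e.g. to broadcast per-ray data
# onto the per-sample layout:
ray_origins = torch.randn(3, 3)
sample_origins = ray_origins[ray_indices]  # shape (6, 3), no cast needed
```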

@liruilong940607 (Collaborator)

Thanks Thomas!! I've always wanted to clean this up but never found the chance to do it.

I think it makes sense to enforce this convention inside the CUDA kernels, but I'm also fine with your current approach in Python.

The test failed because of a formatting issue. You can fix it with `python scripts/run_dev_checks.py`.

@thomasw21 (Contributor, Author)

Cool, I updated the kernels to support that convention. I guess we could also have a conversation about whether int32 is safe for packed_info? It feels like the number of samples can get quite large, over 2B ...
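
For context, int32 tops out just above 2.1B, and integer overflow in torch wraps silently rather than raising; a quick illustration (not code from this PR):

```python
import torch

# The largest total sample count an int32 offset can represent:
print(torch.iinfo(torch.int32).max)  # 2147483647 (~2.1B)

# Overflow wraps around silently, so a cumulative sample offset that
# passes 2**31 - 1 would turn negative without any warning.
x = torch.tensor(2**31 - 1, dtype=torch.int32)
print(x + 1)  # tensor(-2147483648, dtype=torch.int32)
```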

@thomasw21 (Contributor, Author)

Also, a few tests are broken due to the removal of ray_pdf_query in 55acb15.

@liruilong940607 (Collaborator)

Sorry for the late reply. Got distracted by traveling.

Thanks for the update. You raised a good point about the number of samples. In the example code I was using up to 2^20 samples, which is far from the GPU memory limit. I didn't test it, but 2^32 is likely feasible with a small network. In any case, I agree int64 is safer for packed_info.
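
To put rough numbers on that (my own back-of-the-envelope arithmetic with an assumed per-sample layout, not figures from this thread):

```python
# 2**20 samples at, say, a float32 t-value plus an int64 ray index
# each is only ~12 MiB of bookkeeping, far from any GPU memory limit.
n_samples = 2**20
bytes_per_sample = 4 + 8  # assumed: float32 t + int64 ray index
print(n_samples * bytes_per_sample / 2**20, "MiB")  # 12.0 MiB

# packed_info is per-ray rather than per-sample, so widening it from
# int32 to int64 costs almost nothing even for a large ray batch:
n_rays = 2**16
extra = n_rays * 2 * (8 - 4)  # two entries per ray, 4 extra bytes each
print(extra / 2**10, "KiB extra")  # 512.0 KiB
```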

I'm going to merge this PR now; feel free to open a new one if you want.

@liruilong940607 merged commit eecfd44 into nerfstudio-project:master on Dec 8, 2022