
Does all sampling positions need be fed into the NeRF model after using visual hull? #4

Closed
DRosemei opened this issue Jan 10, 2022 · 5 comments


@DRosemei

Thanks for your great work. I want to know whether you eliminate the positions that fall outside the visual hull before feeding them into the NeRF model. Eliminating them reduces the computation, but it also means a different number of positions per ray. How do you solve this problem?

@DRosemei
Author


More specifically, what is the use of this line of code?

@naruya
Owner

naruya commented Jan 11, 2022

Thanks for your question!

In VaxNeRF, we basically reduce the number of sampling points as much as possible.
But doing this naively with Jax does not work: as you know, the total number of sampling points in each mini-batch keeps changing, so Jax needs to recompile every time, and that soon causes memory errors.

So, in VaxNeRF, we automatically find a number of samples (len_inpc / len_inpf) that is just slightly larger than the total number of in-hull samples in any mini-batch (that's the code you shared). These numbers are found automatically during the first 500 training steps in this and this and this.

len_c = jnp.sum(mask)
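The idea above can be sketched in a few lines of JAX. This is a minimal, hypothetical illustration (not the actual VaxNeRF code): in-hull points are gathered into a fixed-length buffer, with out-of-hull points reused as dummy padding, so the jitted MLP always sees the same input shape and never triggers recompilation. Here `len_inp` plays the role of `len_inpc` / `len_inpf`.

```python
import jax.numpy as jnp

def gather_fixed(points, mask, len_inp):
    # Stable argsort puts in-hull indices (mask == True) first,
    # followed by out-of-hull indices that serve as dummies.
    idx = jnp.argsort(~mask)[:len_inp]
    return points[idx], idx

points = jnp.arange(10.0).reshape(5, 2)             # 5 sample points (x, y)
mask = jnp.array([True, False, True, True, False])  # in-hull flags
inp, idx = gather_fixed(points, mask, len_inp=4)    # fixed shape every batch
```

Since `inp` always has shape `(len_inp, 2)` regardless of how many points the hull contains, a `jax.jit`-compiled MLP compiles exactly once.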

@DRosemei
Author


Thanks for your reply! Now I want to know what "len_inpc", "len_inpf", "len_c", "len_f", "ind_inp" and "ind_bak" mean. This is why I could not fully get your idea. I would appreciate it if you could give some explanation :)

@naruya
Owner

naruya commented Jan 11, 2022

Sorry for the confusion. Yes, sure.

  • ind_inp: Indices of the sampling points of the mini-batch that are actually fed into the MLP, including dummy sampling points to adjust the number of sampling points.
  • ind_bak: Indices of the sampling points of the mini-batch that are not fed into the MLP.

I used ind_inp and ind_bak to restore the output of the MLP to the size and the order of the original mini-batch.

  • len_inpc, len_inpf: Number of sampling points actually fed into the MLP. These are just slightly larger than the total number of points inside of the voxel (bounding volume) in any mini-batch. Since the sampling strategy is different for coarse and fine, we need to prepare these two variables.

  • len_c, len_f: Number of points inside of the voxel out of all sample points in the mini-batch. These are only used to decide len_inpc and len_inpf.
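The restore step described above can be sketched as follows, using hypothetical values (not the actual VaxNeRF code): the MLP outputs for the points listed in `ind_inp` are scattered back into a buffer of the original mini-batch size, while the `ind_bak` positions were never evaluated and keep a default density of zero (empty space).

```python
import jax.numpy as jnp

N = 6                                 # total sample points in the mini-batch
ind_inp = jnp.array([0, 2, 5])        # points actually fed into the MLP
ind_bak = jnp.array([1, 3, 4])        # points skipped (outside the hull)
mlp_out = jnp.array([0.9, 0.7, 0.3])  # one MLP output per fed point

# Scatter the outputs back to the size and order of the original batch.
restored = jnp.zeros(N).at[ind_inp].set(mlp_out)
```

After this, `restored` lines up with the original ray samples, so the standard volume-rendering accumulation can proceed unchanged.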

@DRosemei
Author


Thanks for your detailed explanation!

@naruya naruya closed this as completed Feb 16, 2022
@naruya naruya pinned this issue Feb 16, 2022
@naruya naruya unpinned this issue Aug 4, 2023