Hi there, I loved your work and am currently trying to add it to some neural field research. I have a question about the batch size of a ray in your formulation:
1/ Suppose a batched ray tensor of size (batch, num_rays, 3). How would we construct a patch P? I'm currently iterating over each (num_rays, 3) slice batch times and averaging the results. However, I noticed that the loss varies with batch size (larger batch -> higher loss). Is this consistent with your formulation?
2/ Have you tested with patch sizes other than 64x64? Does a bigger patch size have any effect on performance?
We use the final RGB pixels (batch, 3), not the intermediate RGB points (batch, num_rays, 3), to build a virtual patch. For example, (4096, 3) --> (64, 64, 3).
We use 64 x 64 because the original NeRF methods (DVGO, TensoRF) use 4096 rays as the default batch size. Given the repeat times in our method, S3IM already has enough randomness, so a bigger patch size may not be that helpful.
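To make the reshaping step concrete, here is a minimal sketch of building a virtual patch from a flat batch of rendered pixels. The helper name `make_virtual_patch` is hypothetical (not from the S3IM codebase); the only operation the answer describes is the reshape itself:

```python
import numpy as np

def make_virtual_patch(rgb, patch_size=64):
    """Hypothetical helper: reshape a flat batch of final rendered RGB
    pixels (batch, 3) into a virtual square patch (patch_size, patch_size, 3).

    The rays in the batch are sampled randomly, so the resulting "patch"
    has no real spatial layout; it only provides the 2-D arrangement that
    an SSIM-style loss needs to operate on.
    """
    batch = rgb.shape[0]
    assert batch == patch_size * patch_size, "batch must equal patch_size ** 2"
    return rgb.reshape(patch_size, patch_size, 3)

# Example matching the answer above: 4096 pixels -> one 64x64 virtual patch.
pixels = np.random.rand(4096, 3)   # final rendered colors, one per ray
patch = make_virtual_patch(pixels)
print(patch.shape)  # (64, 64, 3)
```

Note that this also clarifies question 1/: the intermediate (batch, num_rays, 3) points are first composited into final colors per ray, and only then reshaped, so there is no per-batch iteration or averaging involved.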