This repository has been archived by the owner on Oct 31, 2023. It is now read-only.
Hi. Thanks a lot for providing the code for this work!
I'm a little confused about the canonical volume: is there only one canonical volume modeled for an entire sequence of images?
NR-NeRF doesn't seem to be able to model large non-rigid motions, such as a scene where a person is dancing.
Hi,

yes, that's correct: there is only one canonical volume. This enforces correspondences across the entire sequence, which empirically limits the amount of deformation that can be handled, because it requires correct long-term correspondences, which is an extremely difficult problem. The upside is that the novel-view camera trajectory can deviate much further from the input camera trajectory. It's a trade-off between (A) long-term correspondences and interesting novel views and (B) the size of deformations that can be modeled. If (B) is more important, concurrent works like Neural Scene Flow Fields (for example) might be better suited.
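The single-canonical-volume design above can be sketched in a few lines. This is a toy illustration, not the repository's code: the canonical field and the deformation are made-up closed-form stand-ins for what NR-NeRF models with MLPs, and all names are hypothetical.

```python
import numpy as np

def canonical_density(x):
    # One shared canonical volume: here, a toy Gaussian blob at the
    # origin. (In NR-NeRF this is a NeRF MLP, not a closed form.)
    return np.exp(-np.sum(x**2, axis=-1))

def deformation(x, t):
    # Toy per-frame deformation: a translation growing with time t.
    # (In NR-NeRF this is a learned ray-bending MLP.) Large t stands
    # in for large non-rigid motion the canonical volume must absorb.
    return x + np.array([0.1 * t, 0.0, 0.0])

def query_density(x, t):
    # Every frame queries the SAME canonical volume through its own
    # deformation -- this is what enforces long-term correspondences.
    return canonical_density(deformation(x, t))

pts = np.zeros((1, 3))
d_early = query_density(pts, t=0)  # point sits at the blob center
d_late = query_density(pts, t=5)   # deformed away, density drops
```

The further the deformation carries a point from where the canonical volume expects it, the worse the reconstruction: that is the intuition behind the trade-off between (A) and (B) above.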