However, I do have a few questions and concerns I would like to discuss with you. You point out in the paper that NeuralUDF has a limitation: its performance may degrade on textureless objects. Looking at the examples in the paper, most of them are quite colorful, so I was wondering how NeuralUDF performs on pure-colored objects and whether this is a known issue.
Additionally, I would like to understand the reason behind this limitation. Is it due to the extra computation required by the visibility indicator function, or some other difference from the SDF-based methods? Specifically, could the higher degrees of freedom in NeuralUDF cause difficulties in regions where the depth changes drastically, such as the collar region, or lead NeuralUDF to assume the existence of holes in a continuous surface?
Thank you!
Thanks for your interest in our work.
Regarding texture-less objects: you are right, it is difficult for NeuralUDF to reconstruct pure-colored objects, since NeuralUDF relies on feature correspondences across views to recover the geometry.
SDF-based methods also struggle to reconstruct texture-less objects, but because the SDF representation carries a strong assumption (every point is either inside or outside), they can still fall back on a smooth, continuous, closed surface.
The UDF is more flexible and less constrained, so without reliable multi-view consistency from texture, NeuralUDF cannot produce reliable surfaces.
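To make the inside/outside point concrete, here is a minimal 1D sketch (illustrative only, not from the NeuralUDF codebase): a signed field must flip sign at the surface, which forces a closed, watertight boundary, whereas an unsigned field only touches zero, so it can also represent open sheets and holes. That extra freedom is exactly what needs extra cues (such as texture) to pin down.

```python
import numpy as np

# Sample a 1D line; 400 points avoids landing exactly on the zero set.
xs = np.linspace(-2.0, 2.0, 400)

# Signed distance to the closed interval [-1, 1]:
# negative inside, positive outside, so the sign flips at each surface point.
sdf = np.abs(xs) - 1.0
sign_flips_sdf = int(np.count_nonzero(np.diff(np.sign(sdf)) != 0))

# Unsigned distance to a single point at x = 0 (a zero-thickness "open sheet"):
# the field approaches zero there but never changes sign, so there is no
# inside/outside to anchor the surface.
udf = np.abs(xs)
sign_flips_udf = int(np.count_nonzero(np.diff(np.sign(udf)) != 0))

print(sign_flips_sdf)  # 2 flips: one per surface point of the closed shape
print(sign_flips_udf)  # 0 flips: a valid UDF surface with no sign change
```

In the signed case, the surface is located by a robust topological event (a sign change); in the unsigned case it must be recovered from the magnitude of a non-negative field alone, which is why a UDF admits many more candidate surfaces and, without strong multi-view photometric evidence, the optimization can drift.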