Learning to deform mesh #1778
pieterblok asked this question in Q&A (unanswered)
I have a rather straightforward question: how can we properly learn to reconstruct a complete mesh / point cloud from a partial point cloud?
I'm inspired by the example deform_source_mesh_to_target_mesh. Yet, as far as I understand, nothing is "learned" in this example: there is no neural network that captures the underlying deformation needed to reconstruct the complete / target point cloud (mesh).
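For reference, that example optimizes per-vertex offsets directly with an optimizer, so there is no model that could generalize to new inputs. A minimal sketch of that loop, reduced to just the chamfer term and assuming `src_mesh` and `trg_mesh` are `pytorch3d.structures.Meshes` objects loaded elsewhere:

```python
import torch
from pytorch3d.loss import chamfer_distance
from pytorch3d.ops import sample_points_from_meshes

# Per-vertex offsets are the only "parameters"; they are optimized directly,
# so nothing learned here transfers to a different source/target pair.
deform_verts = torch.full(src_mesh.verts_packed().shape, 0.0, requires_grad=True)
optimizer = torch.optim.SGD([deform_verts], lr=1.0, momentum=0.9)

for i in range(2000):
    optimizer.zero_grad()
    new_src_mesh = src_mesh.offset_verts(deform_verts)   # deform the source mesh
    sample_src = sample_points_from_meshes(new_src_mesh, 5000)
    sample_trg = sample_points_from_meshes(trg_mesh, 5000)
    loss, _ = chamfer_distance(sample_src, sample_trg)   # fit sampled points to the target
    loss.backward()
    optimizer.step()
```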
Background for this question is the DeepSDF network, which can learn to reconstruct a complete point cloud from a partial point cloud. I was able to build my own encoder-decoder approach using PyTorch3D's loss functions. The simple network compresses RGB-D frames from an Intel RealSense camera into a 128-dimensional latent vector (encoder), which is then used to reconstruct the complete 3D point cloud (decoder). This approach is similar to the one in #726.
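A minimal sketch of this setup is below; the layer sizes and the names `Encoder`, `Decoder`, and `train_step` are illustrative assumptions, not my exact code:

```python
import torch
import torch.nn as nn
from pytorch3d.loss import chamfer_distance

class Encoder(nn.Module):
    """Compress a 4-channel RGB-D frame into a 128-dimensional latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, rgbd):              # rgbd: (B, 4, H, W)
        return self.fc(self.conv(rgbd).flatten(1))

class Decoder(nn.Module):
    """Decode the latent vector into a fixed-size point cloud of shape (B, N, 3)."""
    def __init__(self, latent_dim=128, num_points=2048):
        super().__init__()
        self.num_points = num_points
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, num_points * 3),
        )

    def forward(self, z):
        return self.mlp(z).view(-1, self.num_points, 3)

encoder, decoder = Encoder(), Decoder()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4
)

def train_step(rgbd_batch, target_points):
    # rgbd_batch: (B, 4, H, W) partial RGB-D observations
    # target_points: (B, M, 3) complete target point clouds
    optimizer.zero_grad()
    pred_points = decoder(encoder(rgbd_batch))
    loss, _ = chamfer_distance(pred_points, target_points)
    loss.backward()
    optimizer.step()
    return loss.item()
```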
Although I can visually confirm that my proposed encoder-decoder partly reconstructs the target point cloud, it struggles to complete the actual shape: most predicted points cluster around the base of the target shape, and the far-left and far-right edges are never completed (i.e. the density of predicted points is very high in the center of the target shape and very low towards its extremes).
Currently, I'm using this very simple approach (pseudo code):
Is there a way to:
Thanks in advance!