
Boundary points of single view data? #34

Closed
heathentw opened this issue Jul 20, 2022 · 5 comments
Comments
@heathentw

As I understand it, the input "points" are sampled from the boundary of a mesh that was reconstructed from a complete scan of a real-world object. My question is: how do we get the sampled input points when we only have a point cloud from a single view (e.g., one depth camera)? Since we can't reconstruct the complete mesh, we can't sample the areas that have no depth points, right?

Thank you for your time.

@bharat-b7
Owner

Boundary sampling is done only at training time, because at inference you have to query the entire 256^3 grid anyway. For training with a single view, you will still have a complete shape available for supervision, and you can sample points from that.
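For reference, boundary sampling of this kind (sampling the mesh surface and perturbing with Gaussian noise, as in IF-Net/IPNet-style training) can be sketched as follows. This is a minimal NumPy sketch, not the repo's actual implementation; the function name, the noise scale `sigma`, and the toy tetrahedron are all illustrative assumptions.

```python
import numpy as np

def sample_boundary_points(vertices, faces, n_points=1000, sigma=0.015, rng=None):
    """Sample points near a mesh surface: pick faces weighted by area,
    draw uniform barycentric coordinates, then add Gaussian noise."""
    rng = np.random.default_rng(rng)
    tris = vertices[faces]                                   # (F, 3, 3)
    # face areas via the cross product of two edges
    areas = 0.5 * np.linalg.norm(
        np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]), axis=1)
    face_idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # uniform sampling inside each triangle via barycentric weights
    r1, r2 = rng.random(n_points), rng.random(n_points)
    s1 = np.sqrt(r1)
    w = np.stack([1.0 - s1, s1 * (1.0 - r2), s1 * r2], axis=1)  # (N, 3)
    surface = np.einsum('nk,nkd->nd', w, tris[face_idx])        # (N, 3)
    # perturb off the surface so the network sees near-boundary queries
    return surface + rng.normal(scale=sigma, size=surface.shape)

# toy example: a unit tetrahedron standing in for a scanned mesh
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
pts = sample_boundary_points(verts, faces, n_points=500, sigma=0.01)
```

At inference there is no such sampling: the occupancy network is simply queried at every cell of a regular 256^3 grid, which is why the sampling step only matters during training.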

@heathentw
Author

Just to make sure: is the training data all synthetic? So partial real scans are not involved during training?

@bharat-b7
Owner

Yes, you'll need the full shape to supervise. This is what teaches your network to complete the shape.

@heathentw
Author

I see. Thank you very much.

@heathentw
Author

@bharat-b7 May I ask: for training IPNetMANO, do you use the MANO parametric hand model to produce synthetic data? If so, is there a domain gap between real and synthetic data, and how do you handle it? Thanks.
