How could I render the result meshes? #12
Comments
Hi, thanks for your question. I think you first need to figure out which coordinate system the reconstructed meshes lie in. Once they are in camera space, you can do the rendering. I attach my visualization script here. You could try adapting it and running it on the ground-truth meshes (which should be in camera space) to check that it works; then you can render the reconstructed meshes. Hope it helps.
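The attached script itself is not reproduced in this thread. As a minimal sketch of the underlying idea, assuming a pinhole camera model and hypothetical intrinsics `fx`, `fy`, `cx`, `cy` (these names are illustrative, not taken from the script), camera-space vertices can be projected onto the image plane like this:

```python
import numpy as np

def project_to_image(verts_cam, fx, fy, cx, cy):
    """Project camera-space vertices (N, 3) to pixel coordinates (N, 2)
    with a pinhole model: u = fx * x / z + cx, v = fy * y / z + cy."""
    x, y, z = verts_cam[:, 0], verts_cam[:, 1], verts_cam[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

# Example: a vertex 0.5 m in front of the camera, on the optical axis,
# lands at the principal point of a 256x256 image.
verts = np.array([[0.0, 0.0, 0.5], [0.1, -0.05, 0.5]])
pix = project_to_image(verts, fx=240.0, fy=240.0, cx=128.0, cy=128.0)
print(pix)  # first row is [128., 128.]
```

If the projected points do not land on the subject in the input image, that is a strong sign the vertices are not yet in camera space.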
Thanks for your fast reply.
Sorry to bother you again. I found that your code fits the predicted hand mesh to the GT hand mesh through an ICP solver.
gSDF/playground/hsdf_osdf_2net_pa/recon.py
Lines 350 to 366 in 05101b5
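For context, the closed-form rigid-alignment step that an ICP solver repeats once correspondences are fixed can be written with the Kabsch algorithm. This is only an illustration of that step, not the code from `recon.py`:

```python
import numpy as np

def kabsch_align(src, dst):
    """Best-fit rotation R and translation t mapping src to dst (both (N, 3)),
    assuming point-to-point correspondences (the inner step of ICP)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Sanity check: recover a known rotation/translation applied to random points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.3])
R, t = kabsch_align(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

A full ICP loop alternates this step with re-estimating nearest-neighbor correspondences until the alignment converges.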
For your case, I currently don't have code available to address it. I think you could have a look at PyTorch3D; as I remember, it provides several options for rendering meshes.
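If PyTorch3D is not an option, the core of what a mesh renderer does (project triangles through the camera, then resolve occlusion with a depth buffer) can be sketched in plain NumPy. This toy rasterizer is only an illustration under assumed pinhole intrinsics, not code from this project or from PyTorch3D:

```python
import numpy as np

def render_depth(verts_cam, faces, fx, fy, cx, cy, size=256):
    """Toy z-buffer rasterizer: returns a (size, size) depth map of the mesh.
    verts_cam: (N, 3) camera-space vertices; faces: (F, 3) vertex indices."""
    depth = np.full((size, size), np.inf)
    # Project vertices to pixel coordinates with a pinhole model.
    uv = np.stack([fx * verts_cam[:, 0] / verts_cam[:, 2] + cx,
                   fy * verts_cam[:, 1] / verts_cam[:, 2] + cy], axis=1)
    for f in faces:
        tri, z = uv[f], verts_cam[f, 2]
        lo = np.clip(np.floor(tri.min(axis=0)).astype(int), 0, size - 1)
        hi = np.clip(np.ceil(tri.max(axis=0)).astype(int), 0, size - 1)
        a, b, c = tri
        den = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
        if abs(den) < 1e-12:
            continue  # degenerate triangle
        for py in range(lo[1], hi[1] + 1):
            for px in range(lo[0], hi[0] + 1):
                # Barycentric coordinates of the pixel center.
                w0 = ((b[1] - c[1]) * (px - c[0]) + (c[0] - b[0]) * (py - c[1])) / den
                w1 = ((c[1] - a[1]) * (px - c[0]) + (a[0] - c[0]) * (py - c[1])) / den
                w2 = 1.0 - w0 - w1
                if min(w0, w1, w2) < 0:
                    continue  # pixel outside the triangle
                zi = w0 * z[0] + w1 * z[1] + w2 * z[2]
                if zi < depth[py, px]:
                    depth[py, px] = zi  # keep the closest surface
    return depth

# One triangle facing the camera at z = 0.5 m.
verts = np.array([[-0.1, -0.1, 0.5], [0.1, -0.1, 0.5], [0.0, 0.1, 0.5]])
faces = np.array([[0, 1, 2]])
d = render_depth(verts, faces, fx=240.0, fy=240.0, cx=128.0, cy=128.0)
print(np.isfinite(d).sum() > 0)  # True: some pixels are covered, all at depth 0.5
```

PyTorch3D's `MeshRenderer` does the same projection and z-buffering on the GPU, with differentiable shading on top.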
Could you explain the meaning of the parameters in the script above?
Hi, thanks for sharing this interesting work.
I'm trying to render the reconstructed hand and object meshes onto a 256x256 input image, as in the figures reported in your paper.
However, I found that the hand and object vertices are not in camera space (they may be in the canonical grid space).
How could I render the results?
gSDF/playground/hsdf_osdf_2net_pa/recon.py
Line 233 in 05101b5