Hi Shunsukesaito, amazing work! I want to get the depth map from the predicted mesh (e.g. result_ryota.obj reconstructed from the sample images in the repo), but I'm having trouble reprojecting the mesh onto the 512×512 image plane. I use the Camera in lib.renderer.camera for projection but get an all-black image. Can you or anyone give me some tips on how to correctly set up the camera and reproject the reconstructed mesh into a depth map? Thanks a lot!
The reconstructed mesh is in normalized image coordinates [-1,1]. If you want to align it with image pixels, you may need to do something like v = 512 * (0.5 * v + 0.5)
assuming 512 is the input image resolution.
Yes. The released PIFu is trained with a weak perspective camera model (orthogonal projection + scale), so the equation above lets you align the reconstructed mesh with the input image in pixel coordinate space.
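In case it helps, here is a minimal sketch of the mapping above: vertices in normalized [-1, 1] coordinates are scaled to pixel space and point-splatted into a z-buffer. This is an illustrative assumption, not the repo's renderer; the function name is hypothetical, and the depth sign convention (larger z = closer) and y-axis orientation may need flipping for your setup.

```python
import numpy as np

def depth_map_from_vertices(vertices, res=512):
    """Point-splat a depth map from mesh vertices in normalized [-1, 1] coords.

    vertices: (N, 3) array with x, y (and z) in [-1, 1].
    Assumes larger z means closer to the camera; flip the comparison
    below if your convention differs.
    """
    # Map normalized coordinates to pixel space: v_px = res * (0.5 * v + 0.5)
    v = res * (0.5 * np.asarray(vertices, dtype=np.float32) + 0.5)
    x = np.clip(v[:, 0].astype(int), 0, res - 1)
    y = np.clip(v[:, 1].astype(int), 0, res - 1)
    z = v[:, 2]

    depth = np.zeros((res, res), dtype=np.float32)
    # Z-buffer: keep the nearest (largest z) sample per pixel
    for xi, yi, zi in zip(x, y, z):
        if zi > depth[yi, xi]:
            depth[yi, xi] = zi
    return depth
```

Vertex splatting leaves holes between samples; for a dense depth map you would rasterize triangles instead (e.g. with trimesh or pyrender), but the coordinate transform is the same.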