
[Question] From a single 2D image to a 3D mesh (with 3D supervision, but without differentiable rendering?) #463

Closed
XingxinHE opened this issue Nov 24, 2020 · 1 comment
XingxinHE commented Nov 24, 2020

Dear PyTorch3D team,

Thank you for developing this wonderful library and for your help with setting up the environment. I am a graduate student in Architecture and Urban Design at Politecnico di Milano. I am passionate about applying a mature AI framework in the AEC industry, and I am currently doing research on the 3D reconstruction of a 3D-printed pre-tensioned textile. I have a strong feeling that PyTorch3D could be the best library for this.

I am wondering whether you could give me some feedback or ideas on the following description. 😎

TASK:
3D reconstruction: from a single 2D image to a 3D mesh.

DATASET
I have a dataset of M objects, each a pair of (INPUT: a single image, OUTPUT: a mesh model, .obj). I enclose a screenshot for a peek: the left is the input, the right is a 3D view of the output.
[Screenshot: input image (left) and 3D view of the output mesh (right)]
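Concretely, I imagine the paired data could be loaded with something like the sketch below (the file layout, file names, and image format are just assumptions for illustration, not my actual pipeline):

```python
# Minimal sketch of a Dataset for (image, mesh) pairs.
# Assumes each sample is stored as <data_root>/<name>.png and <data_root>/<name>.obj;
# this layout and the names are hypothetical.
import os
from torch.utils.data import Dataset
from torchvision.io import read_image
from pytorch3d.io import load_obj

class ImageToMeshDataset(Dataset):
    def __init__(self, data_root, sample_names):
        self.data_root = data_root
        self.sample_names = sample_names

    def __len__(self):
        return len(self.sample_names)

    def __getitem__(self, idx):
        name = self.sample_names[idx]
        # Top-view image of the fabric before tension release.
        image = read_image(os.path.join(self.data_root, f"{name}.png")).float() / 255.0
        # Ground-truth mesh after tension release (from the FEA simulation).
        verts, faces, _ = load_obj(os.path.join(self.data_root, f"{name}.obj"))
        return image, verts, faces.verts_idx
```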

🔥🔥🔥❗❗ BOTTLENECK / DIFFERENCE FROM Mesh R-CNN or other 3D reconstruction work:
From the above image, you might notice that there is no 'ground-truth' view of the mesh. Since the objective is to predict the fabric after the tension is released, the shape/mesh changes from its initial state. I enclose a .gif to illustrate the deformation.
[GIF: deformation of the fabric after tension release]
You can also take a look at the following screenshot. The left is the PLA printed on the pre-tensioned textile, while the right shows it after being cut from the fabric (tension released).
[Screenshot: PLA printed on the pre-tensioned textile (left) and after release from the fabric (right)]

CONCLUSION
That said, the INPUT is the top view of the fabric BEFORE deformation. The OUTPUT is a 3D mesh AFTER deformation (tension released), produced by a physical simulation (FEA, finite element analysis), with each INPUT subjected to the SAME force field.

MY THOUGHT
Since I am not from the field of Computer Vision, correct me if I am wrong 😅.
If I understood the paper correctly, Mesh R-CNN accounts for the camera projection in its framework. For example, RoIAlign relates image regions to the 3D models through the viewing perspective, and VertAlign ties mesh vertices to image features. But in my case, such alignment is difficult because of the deformation. Is it possible to just decode the image and compute the voxel loss only? A rough sketch of what I have in mind is below.
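To make the idea concrete, here is a very rough sketch of "encode the image, supervise only in 3D": a CNN encoder maps the image to a latent code, a decoder predicts a voxel occupancy grid, and the loss is a binary cross-entropy against a voxelized ground-truth mesh. The architecture sizes, the voxel resolution, and the offline voxelization step are all assumptions for illustration, not something I have tested:

```python
# Sketch: image -> latent code -> voxel occupancy grid, supervised with a voxel BCE loss.
# No rendering and no 2D/3D alignment is used anywhere.
import torch
import torch.nn as nn

class ImageToVoxel(nn.Module):
    def __init__(self, voxel_size=32):
        super().__init__()
        self.voxel_size = voxel_size
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        self.decoder = nn.Linear(256, voxel_size ** 3)

    def forward(self, images):
        # images: (B, 3, H, W) -> voxel logits: (B, V, V, V)
        logits = self.decoder(self.encoder(images))
        return logits.view(-1, self.voxel_size, self.voxel_size, self.voxel_size)

model = ImageToVoxel()
criterion = nn.BCEWithLogitsLoss()
# gt_voxels: (B, 32, 32, 32) occupancy grids obtained by voxelizing the
# ground-truth .obj meshes offline (the voxelization tool is left open here).
# pred = model(images); loss = criterion(pred, gt_voxels)
```

Alternatively, instead of voxels, a decoder could predict vertex offsets of a template mesh and be supervised with pytorch3d.loss.chamfer_distance on points sampled via pytorch3d.ops.sample_points_from_meshes, which also avoids any image-to-mesh alignment.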

I don't know whether I have made the description compact enough. Looking forward to your reply and feedback.

Sincere thanks to the PyTorch3D team: Amitav Baruah, Steve Branson, Luya Gao, Georgia Gkioxari, Taylor Gordon, Justin Johnson, Patrick Labatut, Christoph Lassner, Wan-Yen Lo, David Novotny, Nikhila Ravi, Jeremy Reizenstein, Dave Schnizlein, Roman Shapovalov, Olivia Wiles.

Xingxin

@gkioxari gkioxari self-assigned this Nov 24, 2020
@gkioxari gkioxari added the how to How to use PyTorch3D in my project label Nov 24, 2020
gkioxari (Contributor) commented

hi @XingxinHE

Thank you for reaching out. It seems you have a research project at hand and are seeking guidance for it. Unfortunately, we can't consult on this one. The GitHub issues page is solely for reporting bugs in the library. I will tag your issue as call-for-collaboration in case other GitHub users want to chime in.

@gkioxari gkioxari added the call-for-collaboration User seeks collaborators for their project label Nov 24, 2020