Question about goal of project #8
Comments
This project proposes a solution to a common issue that arises when optimizing triangle meshes with differentiable rendering. The code in this repo works on synthetic examples: we load the source and target meshes, render the target mesh, and use the rendered images as the objective for the optimization. This lets us sidestep other challenges in object reconstruction that are orthogonal to our contribution (e.g. lighting estimation). Reconstructing real-world scenes is a much harder problem, but this method takes a step in that direction. Closing this as it is not a code issue. Feel free to reach out by email if you have other questions about the paper.
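The synthetic setup described above (render the target, then optimize the source until its renderings match) can be sketched with a toy analogue. This is not the repo's actual code: the "renderer" here is a hypothetical linear map standing in for a differentiable rasterizer, and the "mesh" is just a small vertex vector, so the whole image-as-objective loop fits in a few lines.

```python
import numpy as np

# Toy analogue of the synthetic pipeline: render the *target*
# geometry to images, then run gradient descent on the source
# geometry so its renderings match. R(v) = A @ v is a stand-in
# for a differentiable renderer; all names are illustrative.

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))          # fixed "camera" projections
v_target = rng.normal(size=4)        # target "mesh" (vertex vector)
images_target = A @ v_target         # rendered target images = the objective

v = np.zeros(4)                      # source "mesh", initialized flat
lr = 0.05
for _ in range(500):
    residual = A @ v - images_target  # image-space error
    grad = A.T @ residual             # gradient of 0.5 * ||residual||^2
    v -= lr * grad                    # gradient-descent step on geometry
```

Because the loss only compares renderings, no ground-truth geometry, lighting, or materials enter the objective; that is the property the synthetic examples exercise, and the part that becomes hard with real photographs.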
Hi there, I think the goal of the research is to handle the uncertainty and take a small step toward the final target. Real-world mesh reconstruction is a very hard problem: there is no ground truth for emission, and the surface materials and even the camera parameters are hard to estimate without bias. So maybe we can see this work as part of a bigger picture: if the method works well on synthetic data, the next question is how to handle real scenes. The whole area improves when many works each contribute a little, so it is worth focusing on the strengths of this work and learning from them.
I compiled and ran the project, watched the videos, and read the paper, but I still can't understand its purpose. Is it reconstruction from images, or does it solve one sub-problem of image-based reconstruction?
In the sources I see a source model and a target model, but no images. Does this method show how to turn a simple source model into the target model, rather than reconstructing from images?
Do I understand correctly that the project takes a target model, renders it from different viewpoints, and uses those images to reconstruct the shape?
If the method uses shading to reconstruct the normals and the mesh, how far is it from working with real-life photographs?