
multi_view_reconstruction with depth map #23

Closed
tharinduk90 opened this issue Jun 24, 2020 · 5 comments

Comments

@tharinduk90

tharinduk90 commented Jun 24, 2020

Hi! Amazing paper, and very clean code! Thanks so much for releasing this.

I am trying to use multi_view_reconstruction with depth maps for 3D reconstruction (ours_depth_mvs.yaml).

So far I can obtain results with multi_view_reconstruction without depth maps (ours_rgb.yaml).

When I use the depth maps with multi_view_reconstruction (ours_depth_mvs.yaml), I do not get any output, i.e. the output is empty.
The depth values are consistent with the camera projection matrices (both are in millimetres). My dataset is attached below.

scan155.zip

I referred to the issues below when generating the dataset.

#3
#16

q-1) Can you help me with this?
q-2) Does the depth map need to be highly accurate?

@tharinduk90 tharinduk90 changed the title from "multi_view_reconstruction with depthmap" to "multi_view_reconstruction with depth map" on Jun 24, 2020
@tharinduk90
Author

tharinduk90 commented Jun 29, 2020

Using https://github.com/intel-isl/Open3D/blob/master/examples/python/Basic/rgbd_image.ipynb, I visualized the depth maps.
The result for 000020.jpg, viewed from different angles, is attached below.

[image: 000020 – original image]

[image: depth_20_angle1 – point cloud viewed from angle 1]

This is the point cloud:
https://drive.google.com/file/d/13LZMxohl3a0IdkJvw5c3bUMcVne6hRY1/view?usp=sharing

The above point cloud file can be viewed with https://www.creators3d.com/online-viewer.
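
For reference, the visualization follows the same pattern as the linked notebook: load a colour/depth pair, build an RGBDImage, and back-project it with the camera intrinsics. A minimal sketch is below; the file names, depth_scale, and the default intrinsics are placeholders, not values taken from this dataset.

```python
import open3d as o3d

# Load a colour image and its corresponding depth map (file names are placeholders).
color = o3d.io.read_image("000020.jpg")
depth = o3d.io.read_image("000020_depth.png")

# Combine them into an RGB-D image; depth_scale=1000.0 assumes the depth is
# stored in millimetres and should be converted to metres.
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, convert_rgb_to_intensity=False)

# A default pinhole intrinsic is used here only for illustration; the real
# camera intrinsics of the dataset should be substituted.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

# Back-project the RGB-D image to a point cloud and display it.
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.visualization.draw_geometries([pcd])
```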

@m-niemeyer
Collaborator

Hi @tharinduk90, I have now been able to have a look at your data.

The depth you provide is very noisy / not accurate. The problem with your visualization above is that you only show the points predicted from a single image, and you show them from the same view as the image itself, so it will always look good! The real question is how good the point cloud looks when you use all the views and project them all into the world.

Here is a screenshot of the point cloud if you use all 22 views:
[image: point cloud fused from all 22 views]

And here is the respective ply file (zipped).

As you can see, the depth is so noisy that it does not lead to a consistent model. Hence, if you train with this depth information, it will worsen the results rather than improve them. If you want to improve your results, a good tool for obtaining camera parameters as well as depth maps for multi-view images is Colmap.
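
The consistency check itself is straightforward: back-project every depth map into world coordinates with its own camera parameters and concatenate the points from all views. A minimal sketch with a generic pinhole convention is below; the intrinsics K, the world-to-camera pose (R, t), and the depth units are assumptions, not necessarily the exact matrix convention used in this repository.

```python
import numpy as np

def backproject_view(depth, K, R, t):
    """Back-project one depth map into world coordinates.

    depth : (H, W) metric depth values, 0 marks invalid pixels
    K     : (3, 3) camera intrinsics
    R, t  : world-to-camera rotation (3, 3) and translation (3,)
    Returns an (N, 3) array of world-space points.
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    valid = depth > 0
    # Homogeneous pixel coordinates of all valid pixels, shape (3, N).
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())], axis=0)
    # Unproject into the camera frame and scale each ray by its depth.
    cam = (np.linalg.inv(K) @ pix) * depth[valid]
    # Transform from camera coordinates to world coordinates.
    world = R.T @ (cam - t[:, None])
    return world.T

# Fuse all views by concatenating the per-view clouds and inspect the result,
# e.g. in MeshLab or Open3D. With accurate depth, the views overlap into one
# consistent surface; with noisy depth, the fused cloud looks like the
# screenshot above.
# fused = np.concatenate([backproject_view(d, K, R, t) for d, K, R, t in views])
```

If the fused cloud does not form a single coherent surface, the depth maps and camera matrices are not consistent with each other.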

Good luck with your research!

@tharinduk90
Author

tharinduk90 commented Jun 30, 2020

@m-niemeyer, thank you very much for the clarification. Can you share the script used to create the point cloud, so I can verify my depth maps?

@m-niemeyer
Collaborator

Hi @tharinduk90, thanks a lot for your message.

I have now created a FAQ file describing the camera matrices, and also an example script showing how pixels can be projected to the object-centric world space using the matrices and depth values: https://github.com/autonomousvision/differentiable_volumetric_rendering/blob/master/project_pixels_to_world_example.py

I hope this helps. Good luck!

@tharinduk90
Author

@m-niemeyer, thank you very much!
