
Image Undistortion and Proxy-geometry Reconstruction #24

Closed
Xiaoming-Zhao opened this issue May 4, 2021 · 1 comment
Xiaoming-Zhao commented May 4, 2021

Hi Gernot, thanks a lot for sharing the code of your great work. I have a follow-up question regarding the mesh reconstruction after #22. As you mentioned

> then use sparse_reconstruction_unknown_calib using only the images/poses that should be included in the source set. The rest remains the same.

If I understand correctly,

  1. you first call sparse_reconstruction_unknown_calib on all images to get the camera parameters of all images.
  2. you run the COLMAP pipeline again (sparse_reconstruction_unknown_calib, dense_reconstruction, and delaunay_meshing) only on training images.
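
To make those stages concrete, the standard COLMAP CLI calls behind such a sparse-then-dense-then-mesh pipeline typically look like the following. This is a hedged sketch of the plain COLMAP commands, not the exact code in co/colmap.py, and all paths are placeholders:

```python
# Sketch of the standard COLMAP CLI stages behind a sparse -> dense -> mesh
# pipeline. This only builds the command lists; it does not execute them.

def colmap_pipeline_cmds(image_path, workspace):
    db = f"{workspace}/database.db"
    sparse = f"{workspace}/sparse"
    dense = f"{workspace}/dense"
    return [
        # sparse reconstruction (SfM): features, matching, mapping
        ["colmap", "feature_extractor", "--database_path", db,
         "--image_path", image_path],
        ["colmap", "exhaustive_matcher", "--database_path", db],
        ["colmap", "mapper", "--database_path", db,
         "--image_path", image_path, "--output_path", sparse],
        # dense reconstruction (MVS): undistort, stereo, fusion
        ["colmap", "image_undistorter", "--image_path", image_path,
         "--input_path", f"{sparse}/0", "--output_path", dense],
        ["colmap", "patch_match_stereo", "--workspace_path", dense],
        ["colmap", "stereo_fusion", "--workspace_path", dense,
         "--output_path", f"{dense}/fused.ply"],
        # Delaunay meshing of the fused point cloud
        ["colmap", "delaunay_mesher", "--input_path", dense,
         "--output_path", f"{dense}/meshed-delaunay.ply"],
    ]

cmds = colmap_pipeline_cmds("/data/images", "/data/workspace")
# Each entry could then be run with subprocess.run(cmd, check=True).
```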

I have three questions about how to fit the setting mentioned in the paper:

  1. I am wondering how you undistort all the images. Since image_undistorter is called within dense_reconstruction, does it mean that evaluation images will not be undistorted? If this is not the case, could you describe the pipeline you use to undistort all of them?
  2. Since you run the whole COLMAP pipeline from scratch only on training images, even if you first run sparse_reconstruction_unknown_calib on all images, the camera parameters will not be the same across two runs due to bundle adjustment? I am confused about this and hope you can provide some guidance or explanation.
  3. In the processed dataset you provide, we have depth maps for all images, no matter whether they are for training or evaluation. In the paper, you mention the depth maps come from MVS. However, if we only run the COLMAP pipeline on training images, how can we get such high-quality depth maps for the evaluation images from MVS?

Thanks a lot in advance.

https://github.com/intel-isl/FreeViewSynthesis/blob/33a31ee214a77a2fa074d3a10cedc09803ec2ceb/co/colmap.py#L864-L867

@griegler (Contributor)

> you run the COLMAP pipeline again (sparse_reconstruction_unknown_calib, dense_reconstruction, and delaunay_meshing) only on training images.

Instead of sparse_reconstruction_unknown_calib I am running sparse_reconstruction_known_calib (https://github.com/intel-isl/FreeViewSynthesis/blob/master/co/colmap.py#L851); otherwise the cameras would no longer align.
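
For reference, one common way to keep the calibration fixed during a second run with the plain COLMAP CLI is to disable intrinsic refinement in the mapper's bundle adjustment. This is a sketch of standard COLMAP mapper options, not necessarily how sparse_reconstruction_known_calib is implemented; paths are placeholders:

```python
# Sketch: build a COLMAP mapper invocation with intrinsics frozen during
# bundle adjustment, so the cameras stay aligned with the first
# reconstruction. The command is built but not executed here.

def known_calib_mapper_cmd(db, image_path, out):
    return [
        "colmap", "mapper",
        "--database_path", db,
        "--image_path", image_path,
        "--output_path", out,
        # freeze the intrinsics estimated in the first (unknown-calib) run
        "--Mapper.ba_refine_focal_length", "0",
        "--Mapper.ba_refine_principal_point", "0",
        "--Mapper.ba_refine_extra_params", "0",
    ]

cmd = known_calib_mapper_cmd("/ws/database.db", "/data/train_images",
                             "/ws/sparse_known")
```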

> does it mean that evaluation images will not be undistorted?

No, I also undistorted the evaluation images.
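
For intuition on what undistortion does per point: COLMAP's SIMPLE_RADIAL model scales normalized image coordinates by a factor (1 + k·r²), and undistortion inverts that mapping. The following is only a minimal numpy sketch of the per-point inversion; COLMAP's image_undistorter additionally resamples the whole image:

```python
import numpy as np

def distort(xy, k):
    """Apply SIMPLE_RADIAL distortion in normalized camera coordinates."""
    r2 = np.sum(xy ** 2)
    return xy * (1.0 + k * r2)

def undistort(xy_d, k, iters=20):
    """Invert the radial distortion by fixed-point iteration."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy ** 2)
        xy = xy_d / (1.0 + k * r2)
    return xy

pt = np.array([0.3, -0.2])
pt_d = distort(pt, k=0.1)      # distorted observation
pt_u = undistort(pt_d, k=0.1)  # recovered undistorted point
```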

> the camera parameters will not be the same across two runs due to bundle adjustment?

See my answer above.

> we have depth maps for all images

The depth maps we are using are rendered from the 3D geometry. Specifically, I used pyrender (https://github.com/mmatl/pyrender). I used the MVS depth maps only to create the 3D geometry.
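
To illustrate the principle of rendering depth from geometry (pyrender does this with proper offscreen GL rendering; the following is just a tiny self-contained ray-casting sketch for a single triangle and a pinhole camera):

```python
import numpy as np

def render_depth(tri, f, cx, cy, h, w):
    """Ray-cast one triangle from a pinhole camera at the origin looking
    down +z; returns a z-depth map where 0 means 'no hit'."""
    v0, v1, v2 = (np.asarray(p, dtype=float) for p in tri)
    e1, e2 = v1 - v0, v2 - v0
    depth = np.zeros((h, w))
    for row in range(h):
        for col in range(w):
            d = np.array([(col - cx) / f, (row - cy) / f, 1.0])  # pixel ray
            # Moller-Trumbore ray/triangle test, ray origin = (0, 0, 0)
            pvec = np.cross(d, e2)
            det = e1 @ pvec
            if abs(det) < 1e-12:
                continue
            s = -v0
            b1 = (s @ pvec) / det
            qvec = np.cross(s, e1)
            b2 = (d @ qvec) / det
            t = (e2 @ qvec) / det
            if b1 >= 0 and b2 >= 0 and b1 + b2 <= 1 and t > 0:
                depth[row, col] = t * d[2]  # d[2] == 1, so t is the z-depth
    return depth

# A small triangle on the plane z = 2 in front of a 32x32 camera.
tri = [(-0.3, -0.3, 2.0), (0.3, -0.3, 2.0), (0.0, 0.3, 2.0)]
depth = render_depth(tri, f=50.0, cx=16.0, cy=16.0, h=32, w=32)
```

A real renderer like pyrender does the same projection on the GPU for full meshes, which is why the rendered depth maps can cover evaluation views without running MVS on them.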
