Hi Gernot, thanks a lot for sharing the code for your great work. I have a follow-up question regarding the mesh reconstruction, following up on #22. As you mentioned there:

> then use `sparse_reconstruction_unknown_calib` using only the images/poses that should be included in the source set. The rest remains the same.
If I understand correctly:

1. You first call `sparse_reconstruction_unknown_calib` on all images to obtain camera parameters for every image.
2. You then run the COLMAP pipeline again (`sparse_reconstruction_unknown_calib`, `dense_reconstruction`, and `delaunay_meshing`) on the training images only.
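The two-pass procedure above can be sketched in terms of the underlying COLMAP CLI steps. This is only an illustration: the repo's helpers in `co/colmap.py` wrap these commands with additional flags and bookkeeping, and the workspace/image directory names here are hypothetical.

```python
import os

def colmap_commands(workspace, image_path, dense=True):
    """Build COLMAP CLI command lines roughly corresponding to the repo's
    sparse_reconstruction_unknown_calib / dense_reconstruction /
    delaunay_meshing helpers. A sketch only, not the actual wrappers."""
    db = os.path.join(workspace, "database.db")
    sparse = os.path.join(workspace, "sparse")
    dense_dir = os.path.join(workspace, "dense")
    cmds = [
        # Sparse reconstruction with unknown calibration
        ["colmap", "feature_extractor", "--database_path", db, "--image_path", image_path],
        ["colmap", "exhaustive_matcher", "--database_path", db],
        ["colmap", "mapper", "--database_path", db, "--image_path", image_path,
         "--output_path", sparse],
    ]
    if dense:
        cmds += [
            # Dense reconstruction (note: image_undistorter runs here)
            ["colmap", "image_undistorter", "--image_path", image_path,
             "--input_path", os.path.join(sparse, "0"), "--output_path", dense_dir],
            ["colmap", "patch_match_stereo", "--workspace_path", dense_dir],
            ["colmap", "stereo_fusion", "--workspace_path", dense_dir,
             "--output_path", os.path.join(dense_dir, "fused.ply")],
            # Delaunay meshing on the dense workspace
            ["colmap", "delaunay_mesher", "--input_path", dense_dir,
             "--output_path", os.path.join(dense_dir, "meshed-delaunay.ply")],
        ]
    return cmds

# Pass 1: sparse reconstruction on all images (camera parameters for every image).
all_cmds = colmap_commands("work_all", "images_all", dense=False)
# Pass 2: full pipeline on the training images only.
train_cmds = colmap_commands("work_train", "images_train", dense=True)
```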
I have three questions about how this fits the setting described in the paper:

1. How do you undistort all images? Since `image_undistorter` is called inside `dense_reconstruction`, does that mean the evaluation images are never undistorted? If that is not the case, what is your pipeline for undistorting all of them?
2. Since you run the whole COLMAP pipeline from scratch on the training images only, won't the camera parameters differ between the two runs due to bundle adjustment, even though you first ran `sparse_reconstruction_unknown_calib` on all images? I am confused about this and hope you can provide some guidance or explanation.
3. The processed dataset you provide contains depth maps for all images, both training and evaluation. In the paper, you mention that the depth maps come from MVS. However, if the COLMAP pipeline is only run on the training images, how do you obtain high-quality MVS depth maps for the evaluation images?
> does it mean that evaluation images will not be undistorted?

No, I also undistorted the evaluation images.
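One way to undistort the evaluation images as well is to run COLMAP's `image_undistorter` against a sparse model in which those images are registered. The source does not state the exact invocation used, so the following is a hedged sketch with hypothetical paths:

```python
import os

def undistort_cmd(image_path, sparse_model, out_dir):
    """Command line for COLMAP's image_undistorter. Pointing it at a sparse
    model that contains the evaluation images undistorts them with the same
    estimated camera model as the training images. Paths are placeholders."""
    return ["colmap", "image_undistorter",
            "--image_path", image_path,
            "--input_path", sparse_model,
            "--output_path", out_dir,
            "--output_type", "COLMAP"]

cmd = undistort_cmd("images_all", os.path.join("work_all", "sparse", "0"), "undistorted_all")
```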
> the camera parameters will not be the same across two runs due to bundle adjustment?

See my answer above.
> we have depth maps for all images

The depth maps we are using are rendered from the 3D geometry. Specifically, I used pyrender (https://github.com/mmatl/pyrender). I used the MVS depth maps only to create the 3D geometry.
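pyrender is the tool actually used here; since it needs an OpenGL context, the core idea it implements (rasterizing mesh triangles into a per-pixel z-buffer to get a depth map) can be illustrated self-contained in numpy. Everything below (camera intrinsics, the toy triangle) is a hypothetical example, not the repo's setup:

```python
import numpy as np

def render_depth(verts_cam, faces, K, h, w):
    """Minimal z-buffer depth renderer: project mesh vertices given in
    camera coordinates with intrinsics K, then rasterize per-pixel depth.
    Illustrates what an offscreen renderer like pyrender does internally."""
    depth = np.full((h, w), np.inf)
    z = verts_cam[:, 2]
    # Perspective projection: pixel = K @ (X/Z, Y/Z, 1)
    uv = (K @ (verts_cam / z[:, None]).T).T[:, :2]
    for tri in faces:
        p, pz = uv[tri], z[tri]
        x0, y0 = np.maximum(np.floor(p.min(axis=0)).astype(int), 0)
        x1 = min(int(np.ceil(p[:, 0].max())), w - 1)
        y1 = min(int(np.ceil(p[:, 1].max())), h - 1)
        a, b, c = p
        det = (b[1]-c[1])*(a[0]-c[0]) + (c[0]-b[0])*(a[1]-c[1])
        if abs(det) < 1e-12:
            continue  # degenerate triangle
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                # Barycentric coordinates of the pixel center
                l0 = ((b[1]-c[1])*(x-c[0]) + (c[0]-b[0])*(y-c[1])) / det
                l1 = ((c[1]-a[1])*(x-c[0]) + (a[0]-c[0])*(y-c[1])) / det
                l2 = 1.0 - l0 - l1
                if l0 >= 0 and l1 >= 0 and l2 >= 0:
                    d = l0*pz[0] + l1*pz[1] + l2*pz[2]
                    if d < depth[y, x]:  # keep the closest surface
                        depth[y, x] = d
    return depth

# Toy usage: one triangle at depth 2 in front of a 16x16 camera.
K = np.array([[10.0, 0.0, 8.0], [0.0, 10.0, 8.0], [0.0, 0.0, 1.0]])
verts = np.array([[-1.0, -1.0, 2.0], [1.0, -1.0, 2.0], [0.0, 1.0, 2.0]])
faces = np.array([[0, 1, 2]])
d = render_depth(verts, faces, K, 16, 16)
```

Pixels covered by the triangle receive its interpolated depth (2.0 here); uncovered pixels stay at infinity, matching the background of a rendered depth map.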
https://github.com/intel-isl/FreeViewSynthesis/blob/33a31ee214a77a2fa074d3a10cedc09803ec2ceb/co/colmap.py#L864-L867