A complete pipeline for generating scene point clouds and meshes.
Before starting, make sure that COLMAP is installed and that the Python environments required by cascade-stereo / CasMVSNet_pl / ConvONet are configured.
1. Register Image Poses [COLMAP]
We use the My Desk data as an example.
Use imgs2poses.py (from LLFF) to call COLMAP and run structure-from-motion, which produces 6DoF image poses and near/far depth bounds for the scene.
python imgs2poses.py --scenedir xxx/xxx/xxx
Results will be saved in scenedir; the sparse model will be saved in scenedir/sparse/0.
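As a sketch of what imgs2poses.py produces: poses_bounds.npy follows the LLFF convention, with one 17-value row per image (a flattened 3x5 pose-plus-intrinsics matrix followed by the near/far depth bounds). A minimal loader, assuming that layout (the function name here is illustrative, not part of LLFF):

```python
import numpy as np

def load_poses_bounds(path):
    """Split an LLFF-style poses_bounds.npy into poses, bounds, and hwf."""
    data = np.load(path)                     # shape (N, 17)
    mats = data[:, :15].reshape(-1, 3, 5)    # 3x5 = [R | t | (h, w, f)]
    bounds = data[:, 15:17]                  # near/far depth bounds per image
    hwf = mats[0, :, 4]                      # image height, width, focal length
    return mats[:, :, :4], bounds, hwf       # 3x4 camera-to-world poses
```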
Then run sparse2dense.py to perform COLMAP's dense reconstruction on the registered images:
python sparse2dense.py --scenedir xxx/xxx/xxx
In this example, patch-match stereo took 6.6 minutes and stereo fusion took 0.02 minutes.
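For reference, sparse2dense.py presumably wraps COLMAP's standard dense pipeline; the equivalent raw COLMAP commands look roughly like the following (the scenedir paths are illustrative placeholders):

```shell
# Undistort the images using the sparse model
colmap image_undistorter \
    --image_path scenedir/images \
    --input_path scenedir/sparse/0 \
    --output_path scenedir/dense
# Patch-match stereo on the undistorted workspace
colmap patch_match_stereo --workspace_path scenedir/dense
# Fuse the per-view depth maps into a single point cloud
colmap stereo_fusion \
    --workspace_path scenedir/dense \
    --output_path scenedir/dense/fused.ply
```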
The results show that in weakly-textured regions, traditional depth estimation cannot achieve satisfactory results:
To use CasMVSNet, the data needs to be converted with colmap2mvsnet.py (from cascade-stereo/CasMVSNet):
python colmap2mvsnet.py --dense_folder xxx/dense --save_folder xxx/casmvsnet
Note: Before converting the COLMAP results, make sure that the images have been undistorted by COLMAP and saved in the xxx/dense folder.
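For orientation, colmap2mvsnet.py writes MVSNet-style camera files into the save folder's cams/ directory. A hedged sketch of a reader for that text format, assuming the standard MVSNet layout (an "extrinsic" block with a 4x4 world-to-camera matrix, an "intrinsic" block with a 3x3 matrix, then a depth-range line); read_cam_file is an illustrative name:

```python
import numpy as np

def read_cam_file(path):
    """Parse an MVSNet-style cams/*_cam.txt file."""
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    # lines[0] == "extrinsic"; lines[1:5] hold the 4x4 world-to-camera matrix
    extrinsic = np.array([list(map(float, lines[i].split())) for i in range(1, 5)])
    # lines[5] == "intrinsic"; lines[6:9] hold the 3x3 intrinsic matrix
    intrinsic = np.array([list(map(float, lines[i].split())) for i in range(6, 9)])
    # final line: depth_min, depth_interval (possibly followed by more fields)
    depth_params = list(map(float, lines[9].split()))
    return extrinsic, intrinsic, depth_params
```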
Use CasMVSNet to generate depth maps; download the pretrained model here.
python test.py --dataset=general_eval --batch_size=1 --testpath_single_scene=xxx/casmvsnet --loadckpt=xxx/casmvsnet.ckpt --testlist=all --outdir=xxx/mvs --interval_scale=1.06
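The --interval_scale flag rescales the per-view depth step read from the camera files before the plane-sweep depth hypotheses are laid out. A sketch of the arithmetic (names here are illustrative, not CasMVSNet's internal variable names):

```python
def depth_hypotheses(depth_min, depth_interval, num_depths, interval_scale=1.06):
    """Evenly spaced depth hypotheses, with the step rescaled by interval_scale."""
    step = depth_interval * interval_scale
    return [depth_min + i * step for i in range(num_depths)]
```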
Fusion takes only about 2 minutes to produce the fused point cloud.
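The fused result is a point cloud, typically stored as a PLY file. A minimal ASCII PLY writer for (x, y, z, r, g, b) points, to illustrate the format (a sketch only; the actual fusion code writes its own PLY output):

```python
def write_ply(path, points):
    """Write an ASCII PLY file from an iterable of (x, y, z, r, g, b) tuples."""
    points = list(points)
    header = [
        "ply", "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
```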
Alternatively, you can use CasMVSNet_pl to obtain the depth maps and fused point cloud. [TODO]
[TODO]
To run the whole pipeline end to end, just run the scripts:
# CasMVSNet depth prediction
bash scripts/run_genpoints_casmvsnet.sh
# COLMAP depth prediction
bash scripts/run_genpoints_colmap.sh
Thanks to the authors of COLMAP for their excellent work. Thanks to Xiaodong Gu for his excellent work Cascade-Stereo. Thanks to kwea123 for his excellent work CasMVSNet_pl. Thanks to Songyou Peng for his excellent work Convolutional Occupancy Networks. Finally, thanks to Yao Yao for MVSNet and its contribution to deep-learning-based multi-view reconstruction.