This is the PyTorch implementation of our work on depth completion.
S. Zhao, M. Gong, H. Fu and D. Tao. Adaptive Context-Aware Multi-Modal Network for Depth Completion. IEEE Transactions on Image Processing, 2021. arXiv (early version) / IEEE (final version).
- Python 3.6
- PyTorch 1.2.0
- CUDA 10.0
- Ubuntu 16.04
- opencv-python
- pointlib: install the bundled package with pip install pointlib/ (a quick environment check is sketched after this list)
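The script below is a minimal sanity check for the environment; the import name pointlib is assumed from the install command above and may differ in your build.

```python
# sanity_check.py -- minimal environment check.
# Assumption: the package installed via `pip install pointlib/` is importable
# as `pointlib`; adjust the import name if your build differs.
import torch
import cv2

print("PyTorch:", torch.__version__)            # tested with 1.2.0
print("CUDA available:", torch.cuda.is_available())
print("OpenCV:", cv2.__version__)

try:
    import pointlib  # import name assumed from the install command above
    print("pointlib imported successfully")
except ImportError as err:
    print("pointlib not found; re-run pip install pointlib/ ({})".format(err))
```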
Prepare the KITTI dataset according to the data lists (the *.txt files in datasets) so that the directory layout matches the tree below; a short layout check is sketched after the tree.
datasets
|----kitti
|    |----depth_selection
|    |    |----val_selection_cropped
|    |    |    |----...
|    |    |----test_depth_completion_anonymous
|    |    |    |----...
|    |----rgb
|    |    |----2011_09_26
|    |    |----...
|    |----train
|    |    |----2011_09_26_drive_0001_sync
|    |    |----...
|    |----val
|    |    |----2011_09_26_drive_0002_sync
|    |    |----...
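As a quick, optional check that the layout above is in place, a script like the following can be run from the repository root; the list of required directories is taken from the tree above and may need adjusting for your setup.

```python
# check_datasets.py -- verify the expected KITTI directory layout.
# The paths below are taken from the tree above; adjust them if your
# layout or drive names differ.
import os

expected_dirs = [
    "datasets/kitti/depth_selection/val_selection_cropped",
    "datasets/kitti/depth_selection/test_depth_completion_anonymous",
    "datasets/kitti/rgb/2011_09_26",
    "datasets/kitti/train/2011_09_26_drive_0001_sync",
    "datasets/kitti/val/2011_09_26_drive_0002_sync",
]

missing = [d for d in expected_dirs if not os.path.isdir(d)]
if missing:
    print("Missing directories:")
    for d in missing:
        print("  " + d)
else:
    print("All expected directories found.")
```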
To train, run
bash run_train.sh
To evaluate, run
bash run_eval.sh (sval.txt for the selected validation split, val for the full validation split), or run
bash run_test.sh
to generate predictions for benchmark submission. A short, illustrative sketch of the KITTI depth-map encoding follows below.
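For reference, the KITTI depth completion benchmark stores depth maps as 16-bit PNGs holding depth in metres multiplied by 256, with 0 marking pixels without a measurement. The sketch below only illustrates that encoding; it is not part of the provided scripts, and the file names are hypothetical.

```python
# kitti_depth_io.py -- read/write depth maps in the KITTI depth completion
# format (16-bit PNG, metres * 256, 0 = no measurement). Illustration only;
# run_eval.sh / run_test.sh handle their own I/O.
import cv2
import numpy as np

def read_depth(png_path):
    """Load a KITTI depth PNG and return depth in metres (0 where invalid)."""
    raw = cv2.imread(png_path, cv2.IMREAD_ANYDEPTH)  # uint16 image
    if raw is None:
        raise FileNotFoundError(png_path)
    return raw.astype(np.float32) / 256.0

def write_depth(png_path, depth_m):
    """Save a depth map given in metres as a KITTI-style 16-bit PNG."""
    encoded = np.clip(depth_m * 256.0, 0, 65535).astype(np.uint16)
    cv2.imwrite(png_path, encoded)

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    depth = read_depth("example_sparse_depth.png")
    print("valid pixels:", int((depth > 0).sum()))
    write_depth("example_prediction.png", depth)
```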
If you find our work useful in your research, please consider citing:
@article{zhao2021adaptive,
title={Adaptive context-aware multi-modal network for depth completion},
author={Zhao, Shanshan and Gong, Mingming and Fu, Huan and Tao, Dacheng},
journal={IEEE Transactions on Image Processing},
year={2021},
publisher={IEEE}
}
If you have any questions, please contact Shanshan Zhao: szha4333@uni.sydney.edu.au or sshan.zhao00@gmail.com