Official implementation of "Monocular Depth Estimation Network with Single-Pixel Depth Guidance" (Optics Letters, 2023).
- Python 3.9
- PyTorch 1.7.1
- CUDA 10.2
- PyTorch3D 0.6.2
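A minimal environment sketch, assuming conda is available. The repo does not specify install commands, so treat these as a starting point; PyTorch3D in particular may need a source build per its own install notes:

```bash
# Hypothetical setup (environment name is a placeholder) matching the versions above.
conda create -n spdg python=3.9
conda activate spdg
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.2 -c pytorch
pip install pytorch3d==0.6.2   # may require building from source; see the PyTorch3D install guide
```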
NYU-Depth-v2: Please follow the instructions in BTS to download the training and test sets.
RGB-SPAD: Please follow the instructions in single spad depth to download the test set.
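Where the data ends up is configured through the args files used below. The layout here is only an assumed example for illustration, not a structure mandated by the repo:

```
./data/
    nyu_depth_v2/   # BTS-style NYU-Depth-v2 training/test split
    rgb_spad/       # single spad depth test captures
```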
Pretrained models can be downloaded from here.
To reproduce the reported results in our paper, follow these steps:
Step 1: Download the trained models and put them in ./trained_models.
Step 2: Change the data and model paths in args_test_nyu.txt and args_test_real.txt (a hypothetical example is sketched after these steps).
Step 3: Run "python evaluate.py args_test_nyu.txt" for the NYU-Depth-v2 dataset, or "python evaluate_real.py args_test_real.txt" for the real RGB-SPAD dataset.
We follow the training strategy of AdaBins; a sketch of that optimization setup is given below.
The code is based on AdaBins and single spad depth.
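AdaBins trains with AdamW under a one-cycle learning-rate policy. The sketch below illustrates that setup in PyTorch; the hyperparameters and the stand-in model are assumptions for illustration, not values read from this repo's training code:

```python
import torch

# Stand-in network; the real model is this repo's depth estimation network.
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)

# AdaBins-style optimization: AdamW + one-cycle LR (values are illustrative).
optimizer = torch.optim.AdamW(model.parameters(), lr=3.5e-4, weight_decay=0.1)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=3.5e-4, epochs=25, steps_per_epoch=1000)

# In the training loop: loss.backward(); optimizer.step(); scheduler.step()
```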
If you find this work useful, please cite:

```bibtex
@article{Lee:Phomonet,
  author    = {Hongjae Lee and Jinbum Park and Wooseok Jeong and Seung-Won Jung},
  title     = {Monocular depth estimation network with single-pixel depth guidance},
  journal   = {Opt. Lett.},
  volume    = {48},
  number    = {3},
  pages     = {594--597},
  month     = {Feb},
  year      = {2023},
  publisher = {Optica Publishing Group},
  url       = {https://opg.optica.org/ol/abstract.cfm?URI=ol-48-3-594},
  doi       = {10.1364/OL.478375}
}
```