Learning Piecewise Planar Representation for RGB Guided Depth Super-Resolution, in IEEE Transactions on Computational Imaging (TCI), 2024. Ruikang Xu, Mingde Yao, Yuanshen Guan, Zhiwei Xiong.
- The NYU_v2 dataset can be downloaded from this link.
- The Middlebury dataset can be downloaded from this link.
- The Lu dataset can be downloaded from this link.
- The RGB-D-D dataset can be downloaded from this link.
- Training Set: We take the first 1000 pairs of the NYU_v2 dataset as the training set and use the same preprocessing as FDSR and DCTNet.
- Test Set: We use the remaining 449 pairs of NYU_v2, together with the Middlebury, Lu, and RGB-D-D datasets, as the test sets.
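The split above can be sketched as follows; this is a minimal illustration, not the repository's actual data-loading code, and the file-name pattern is an assumption (NYU_v2 contains 1449 RGB-D pairs in total).

```python
# Illustrative sketch of the NYU_v2 train/test split described above.
# The naming scheme "nyu_XXXX" is a placeholder, not the real file layout.
pairs = [f"nyu_{i:04d}" for i in range(1449)]  # NYU_v2 has 1449 RGB-D pairs

train_set = pairs[:1000]   # first 1000 pairs for training
test_set = pairs[1000:]    # remaining 449 pairs for testing

assert len(train_set) == 1000
assert len(test_set) == 449
```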
- Python 3.8.8, PyTorch 1.8.0, torchvision 0.9.0.
- NumPy 1.24.2, OpenCV 4.7.0, Tensorboardx 2.5.1, kornia, Pillow, Imageio.
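The environment above can be set up with pip; the commands below are a sketch, and the unpinned PyPI package names (`opencv-python`, `kornia`, `Pillow`, `imageio`) are assumptions rather than versions verified against this repository.

```shell
# Assumed pip commands matching the versions listed above.
pip install torch==1.8.0 torchvision==0.9.0
pip install numpy==1.24.2 tensorboardX==2.5.1
pip install opencv-python kornia Pillow imageio
```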
- Test: `cd ./src && python test.py`
- Train: `cd ./src && python train.py`
Any questions regarding this work can be addressed to xurk@mail.ustc.edu.cn.
If you find our work helpful, please cite the following paper.
@article{xu2024learning,
title={Learning Piecewise Planar Representation for RGB Guided Depth Super-Resolution},
author={Xu, Ruikang and Yao, Mingde and Guan, Yuanshen and Xiong, Zhiwei},
journal={IEEE Transactions on Computational Imaging},
year={2024},
publisher={IEEE}
}