# The Devil is in the Details: Boosting Guided Depth Super-Resolution via Rethinking Cross-Modal Alignment and Aggregation

Xin-Ni Jiang#, Zeng-Sheng Kuang#, Chun-Le Guo*, Rui-Xun Zhang, Lei Cai, Xiao Fan, Chong-Yi Li*

[Paper] [Project Page]

*(Figure: overview of the D2A2 model architecture.)*

Guided depth super-resolution (GDSR) restores missing depth details using a high-resolution RGB image of the same scene. Previous approaches have struggled with the heterogeneity and complementarity of the multi-modal inputs, and have neglected modal misalignment, geometric misalignment, and feature selection. In this study, we rethink several essential components of GDSR networks and propose a simple yet effective Dynamic Dual Alignment and Aggregation network (D2A2). D2A2 mainly consists of 1) a dynamic dual alignment module that alleviates modal misalignment via a learnable domain alignment block and geometrically aligns cross-modal features by learning offsets; and 2) a mask-to-pixel feature aggregation module that uses a gating mechanism and pixel attention to filter out irrelevant texture noise from the RGB features and combine the useful ones with the depth features. By combining the strengths of RGB and depth features while minimizing the disturbance introduced by the RGB image, our method, with a simple reuse and redesign of basic components, achieves state-of-the-art performance on multiple benchmark datasets.
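To make the two components concrete, here is a minimal PyTorch sketch of the ideas described above. It is an illustrative assumption, not the released code: every class name, layer choice, and channel width here is hypothetical, and the geometric alignment is shown with torchvision's `deform_conv2d` in place of the repo's compiled DCNv2 op.

```python
# Hypothetical sketch of D2A2's two ideas; see the actual repo code for details.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class DualAlignmentSketch(nn.Module):
    """Aligns RGB features to depth features: a learnable projection for
    modal (domain) alignment, then offset-based geometric alignment."""

    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.k = k
        self.domain_align = nn.Conv2d(channels, channels, 1)                  # modal alignment
        self.offset_head = nn.Conv2d(2 * channels, 2 * k * k, 3, padding=1)   # learn offsets
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)

    def forward(self, rgb_feat, depth_feat):
        rgb_feat = self.domain_align(rgb_feat)
        # Offsets conditioned on both modalities steer where RGB features are sampled.
        offsets = self.offset_head(torch.cat([rgb_feat, depth_feat], dim=1))
        return deform_conv2d(rgb_feat, offsets, self.weight, padding=self.k // 2)


class GatedAggregationSketch(nn.Module):
    """Gates out irrelevant RGB texture, reweights what remains with pixel
    attention, and injects the result into the depth features."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 3, padding=1), nn.Sigmoid())
        self.pixel_attn = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, rgb_feat, depth_feat):
        gated = rgb_feat * self.gate(torch.cat([rgb_feat, depth_feat], dim=1))
        return depth_feat + gated * self.pixel_attn(gated)


# Toy usage: fuse aligned RGB detail into depth features.
align, agg = DualAlignmentSketch(64), GatedAggregationSketch(64)
rgb, depth = torch.randn(1, 64, 48, 48), torch.randn(1, 64, 48, 48)
fused = agg(align(rgb, depth), depth)  # -> (1, 64, 48, 48)
```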

## Setup

### Dependencies

A conda environment with all required dependencies can be created by running:

```bash
conda env create -f environment.yml
conda activate GDSR-D2A2
cd models/Deformable_Convolution_V2
sh make.sh
```

### Datasets

The NYUv2 dataset can be downloaded here. Your folder structure should look like this:

```
NYUv2
└───Depth
│   │   0.npy
│   │   1.npy
│   │   2.npy
│   │   ...
│   │   1448.npy
└───RGB
│   │   0.jpg
│   │   1.jpg
│   │   2.jpg
│   │   ...
│   │   1448.jpg
```
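As a reference for this layout, a minimal loading sketch follows. `load_pair` is a hypothetical helper; the repo's actual dataset class may crop, normalize, or generate the low-resolution inputs differently.

```python
import numpy as np
from PIL import Image


def load_pair(root: str, idx: int):
    """Read one RGB/depth pair from the NYUv2 layout above (hypothetical helper)."""
    depth = np.load(f"{root}/Depth/{idx}.npy")           # H x W depth map
    rgb = np.array(Image.open(f"{root}/RGB/{idx}.jpg"))  # H x W x 3 guidance image
    return rgb, depth
```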

The Lu, Middlebury, and RGBDD datasets are used only for testing and can be downloaded here.

## Pretrained Model

Our pretrained model checkpoints for the ×4, ×8, and ×16 scales are available in `./pretrained/D2A2_x4`, `./pretrained/D2A2_x8`, and `./pretrained/D2A2_x16`, respectively.
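Loading a checkpoint follows the standard PyTorch pattern; the filename below is a placeholder (inspect the directory for the actual name):

```python
import torch

# 'model.pth' is a hypothetical filename; check ./pretrained/D2A2_x4 for the real one.
state_dict = torch.load('./pretrained/D2A2_x4/model.pth', map_location='cpu')
```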

## Training

Please set `--dataset_dir` in `train.sh`. More options are available in `option.py`.

```bash
sh train.sh
```

## Testing

Please set `--dataset_dir` in `test.sh`. More options are available in `option.py`.

```bash
sh test.sh
```

## Citation

```bibtex
@article{jiang2024the,
  title={The Devil is in the Details: Boosting Guided Depth Super-Resolution via Rethinking Cross-Modal Alignment and Aggregation},
  author={Jiang, Xinni and Kuang, Zengsheng and Guo, Chunle and Zhang, Ruixun and Cai, Lei and Fan, Xiao and Li, Chongyi},
  journal={arXiv preprint arXiv:2401.08123},
  year={2024}
}
```