This is the official PyTorch implementation of *Exploiting Spatial and Angular Correlations With Deep Efficient Transformers for Light Field Image Super-Resolution* (IEEE TMM 2023).
Following BasicLFSR, we use five datasets, namely EPFL, HCInew, HCIold, INRIA, and STFgantry, for training and testing. Please download them from the official BasicLFSR repository.
In addition, we use three datasets, UrbanLF, DLFD, and SLFD, to validate the effectiveness of LF-DET in addressing disparity variation in LF-SSR. Please first download these datasets via Baidu Drive (key: lv31).
- PyTorch 1.8.0 + torchvision 0.9.0 + CUDA 10.2 + Python 3.8.10
- MATLAB
Please refer to BasicLFSR for a detailed introduction.
- Run `python train.py`.
- The training configuration is specified in `config.py` and can be modified there.
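Since the exact option names in `config.py` are not shown here, the following is a minimal, hypothetical sketch of how such a configuration module is often structured; every field name below is an assumption for illustration, not this repo's actual API:

```python
# Hypothetical configuration sketch; the real options live in this repo's
# config.py and may use different names and defaults.
from types import SimpleNamespace

def get_config(scale_factor=2, batch_size=4, lr=2e-4, epochs=50):
    """Bundle training options into a single namespace (names are assumed)."""
    return SimpleNamespace(
        scale_factor=scale_factor,  # 2 for 2x SR, 4 for 4x SR
        batch_size=batch_size,      # mini-batch size per iteration
        lr=lr,                      # initial learning rate
        epochs=epochs,              # total training epochs
    )

cfg = get_config(scale_factor=4)
print(cfg.scale_factor)  # 4
```

In this pattern, `train.py` would import `get_config()` once and read all hyperparameters from the returned object, so changing a run only requires editing `config.py`.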
- Run `python test.py`.
- The test configuration is specified in `config.py` and can be modified there.
- The `pretrain` folder contains our pre-trained models with the default configurations for 2x SR and 4x SR.
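Following BasicLFSR, super-resolution results are typically evaluated with PSNR/SSIM (computed in MATLAB in that benchmark). As a rough reference only, not the repo's evaluation code, PSNR for pixel values in [0, 1] can be sketched in pure Python:

```python
import math

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    ref, est: flat iterables of pixel values in [0, peak].
    Returns PSNR in dB (infinity for identical inputs).
    """
    mse = sum((r - e) ** 2 for r, e in zip(ref, est)) / len(ref)
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)

# Example: an estimate off by 0.1 at every pixel -> MSE = 0.01 -> 20 dB
print(psnr([0.5, 0.5, 0.5, 0.5], [0.6, 0.4, 0.6, 0.4]))  # 20.0
```

For light field SR, PSNR is usually computed per sub-aperture view and then averaged over the angular dimension.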
Our work and implementation are inspired by and based on the following projects:
We sincerely thank the authors for sharing their code and amazing research work!
If you find this work helpful, please consider citing the following paper:
@article{cong2023lfdet,
  title={Exploiting Spatial and Angular Correlations With Deep Efficient Transformers for Light Field Image Super-Resolution},
  author={Cong, Ruixuan and Sheng, Hao and Yang, Da and Cui, Zhenglong and Chen, Rongshan},
  journal={IEEE Transactions on Multimedia},
  year={2023},
  publisher={IEEE}
}
If you have any questions regarding this work, please contact congrx@buaa.edu.cn.