Jiang He, Qiangqiang Yuan, Jie Li, Yi Xiao, Denghong Liu, Huanfeng Shen, and Liangpei Zhang, Wuhan University
---
Code for the paper "Spectral super-resolution meets deep learning: Achievements and challenges," published in Information Fusion.
---
A benchmark of deep learning-based spectral super-resolution algorithms, covering the workflows of spectral recovery, colorization, and spectral compressive imaging (SCI).
We provide some classical datasets for the three applications:
The dataset used in spectral recovery is a public hyperspectral image dataset named ARAD_1K, released in NTIRE 2022. We only provide the RGB images on GitHub; the labels of the training dataset can be downloaded from Zenodo. Please put the training dataset into `./2SR/dataset/` and unzip it as `./2SR/dataset/Train_Spec/`.
We used only part of the SUN dataset; details can be found in our paper. We uploaded the test images to GitHub, and the training dataset can be downloaded from Zenodo. Please put the training dataset into `./1colorization/` and name it `color_train.h5`.
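As a convenience, an HDF5 training file like this can be opened with `h5py`. This is only an illustrative sketch: the dataset keys used below (`"gray"`, `"rgb"`) are assumptions for demonstration, so check the actual keys of `color_train.h5` with `list(f.keys())` before relying on them.

```python
import h5py
import numpy as np

def load_h5_pairs(path):
    """Load paired training arrays from an HDF5 file.

    The keys "gray" (inputs) and "rgb" (color labels) are hypothetical
    placeholders; inspect the real file's keys first.
    """
    with h5py.File(path, "r") as f:
        print("available keys:", list(f.keys()))
        gray = np.asarray(f["gray"])  # grayscale inputs (assumed key)
        rgb = np.asarray(f["rgb"])    # color labels (assumed key)
    return gray, rgb
```
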
We designed our SCI procedure following TSA-Net. The file `./3SCI/mask.mat` is the mask used in the degradation. The training dataset can be downloaded from Zenodo. Please put the training dataset into `./3SCI/` and name it `26train_256_enhanced.h5`.
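For intuition, a CASSI-style degradation in the spirit of TSA-Net can be sketched as follows. This is a minimal NumPy illustration under assumptions: the 2-pixel shift per band and the mask shape are placeholders, and the real coded aperture is the one stored in `./3SCI/mask.mat` (loadable with `scipy.io.loadmat`).

```python
import numpy as np

def sci_degrade(cube, mask, step=2):
    """Simulate a coded-aperture snapshot measurement.

    cube: (H, W, B) hyperspectral cube
    mask: (H, W) binary coded aperture
    step: per-band spatial shift in pixels (assumed value)
    """
    h, w, b = cube.shape
    meas = np.zeros((h, w + step * (b - 1)), dtype=cube.dtype)
    for i in range(b):
        # modulate each band with the mask, then shift it along the width
        meas[:, i * step : i * step + w] += cube[:, :, i] * mask
    return meas
```

The shifted bands overlap on the sensor, which is what makes reconstruction from the single 2-D measurement ill-posed.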
We have collected some classical spectral super-resolution algorithms, including DenseUnet [42], CanNet [45], HSCNN+ [50], sRCNN [53], AWAN [60], FMNet [69], HRNet [70], HSRnet [71], HSACS [73], GDNet [77], and SSDCN [79].
If you want to run your own model with this benchmark, put your model file `xxxxx.py` into `./models/`. Then define your model in the specific application: for example, to run your model in spectral recovery, define it in `./2SR/model.py`.
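A custom model dropped into `./models/` is just an ordinary PyTorch `nn.Module`. The sketch below is a hypothetical example, not the benchmark's own architecture: the 3-channel RGB input and 31-band output follow the common NTIRE spectral-recovery setting, and the layer widths are placeholders.

```python
import torch
import torch.nn as nn

class MyNet(nn.Module):
    """Toy RGB-to-hyperspectral network (placeholder architecture)."""

    def __init__(self, in_channels=3, out_channels=31, feats=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, feats, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feats, feats, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feats, out_channels, 3, padding=1),
        )

    def forward(self, x):
        # x: (N, 3, H, W) RGB batch -> (N, 31, H, W) spectral estimate
        return self.body(x)
```

Once the class is defined, referencing it from the application's model file (e.g. `./2SR/model.py`) makes it selectable in that workflow.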
The code has been tested on PyTorch 1.6.
We improved our implementation, drawing inspiration from MST++.
Please cite:
```bibtex
@article{hj2023_DL4sSR,
  title={Spectral super-resolution meets deep learning: achievements and challenges},
  author={He, Jiang and Yuan, Qiangqiang and Li, Jie and Xiao, Yi and Liu, Denghong and Shen, Huanfeng and Zhang, Liangpei},
  journal={Information Fusion},
  volume={97},
  pages={101812},
  year={2023},
}
```