HiRN: Hierarchical Recurrent Neural Network for Video Super-Resolution (VSR) using Two-Stage Feature Evolution
This repository is the official PyTorch implementation of the paper published in Applied Soft Computing (Elsevier).
The aim of video super-resolution (VSR) is to generate high-resolution (HR) frames from their low-resolution (LR) counterparts. As one of the fundamental modules of VSR, the propagation process provides the path of the feature maps and specifies how they are leveraged. In recurrent propagation, latent features can be propagated and aggregated over time, so adopting a recurrent strategy overcomes the limitation of sliding-window-based local propagation. Recently, methods based on bi-directional recurrent propagation have achieved strong performance in VSR. However, existing bi-directional frameworks are structured as a combination of separate forward and backward branches; such structures cannot propagate and aggregate both the previous and future latent features within the current branch. In this study, we propose the hierarchical recurrent neural network (HiRN) based on feature evolution. HiRN is built on hierarchical recurrent propagation and a residual-block-based backbone with a temporal wavelet attention (TWA) module. The hierarchical recurrent propagation consists of two stages that combine the advantages of low frame-rate-based forward and backward schemes with a multi-frame-rate-based bi-directional access structure. The proposed method is compared with state-of-the-art (SOTA) methods on benchmark datasets. Experiments show that it achieves superior performance: in particular, HiRN outperforms all compared methods in terms of SSIM on the Vid4 benchmark, and surpasses the existing GBR-WNN by a significant 3.03 dB in PSNR on the REDS4 benchmark with fewer parameters.
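To make the two-stage idea concrete, here is a highly simplified, framework-free sketch of the propagation pattern described above: a forward and a backward recurrent pass (stage 1), followed by a second stage in which every frame aggregates hidden states from both directions and from its neighbours. All names and the toy mixing arithmetic are illustrative assumptions; the actual HiRN operates on feature maps with residual blocks and the TWA module.

```python
def propagate(frames, step):
    """One recurrent pass over scalar 'features': each hidden state mixes
    the current frame with the previous hidden state along one direction."""
    order = range(len(frames)) if step == 1 else range(len(frames) - 1, -1, -1)
    hidden, out = 0.0, [0.0] * len(frames)
    for t in order:
        hidden = 0.5 * frames[t] + 0.5 * hidden  # toy "feature evolution"
        out[t] = hidden
    return out

def hierarchical_propagation(frames):
    # Stage 1: forward and backward recurrent passes (the two branches).
    fwd = propagate(frames, +1)
    bwd = propagate(frames, -1)
    # Stage 2: each frame aggregates both directions plus neighbouring
    # hidden states, mimicking bi-directional multi-frame access.
    n = len(frames)
    fused = []
    for t in range(n):
        neigh = [fwd[max(t - 1, 0)], bwd[min(t + 1, n - 1)]]
        fused.append((fwd[t] + bwd[t] + sum(neigh) / len(neigh)) / 3.0)
    return fused
```

The point of the sketch is only the access pattern: unlike a plain bi-directional scheme, the fused output at frame `t` can draw on both past and future latent states at once.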
## Dependencies

- Anaconda3
- Python == 3.7
- PyTorch 1.12.1 (>= 1.7) with CUDA 10.2 (used for training)

## Installation

Create a conda environment:

```
conda create --name hirn python=3.7
```

Run in `./`:

```
pip install -r requirements.txt
BASICSR_EXT=True python setup.py develop
```
## Dataset Preparation

We use the REDS dataset for training, and the Vid4 and REDS4 datasets for testing.

### REDS and REDS4

- Please refer to Dataset.md in our Deep-Video-Super-Resolution repository for more details.
- Download the dataset from the official website.
- Put the dataset in `./datasets/`

### Vid4

- Please refer to Dataset.md in our Deep-Video-Super-Resolution repository for more details.
- Download the dataset from here.
- Put the dataset in `./datasets/`
- Generate the LR data. Run in `./scripts/data_preparation/`:

```
python generate_LR_Vid4.py
```
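For reference, `generate_LR_Vid4.py` presumably produces the LR frames by ×4 downscaling of the HR frames (bicubic downsampling is the standard VSR protocol). The sketch below substitutes simple 4×4 block averaging for bicubic interpolation so that it stays dependency-free; it is an illustration of the LR-generation step, not the script's actual resampling kernel.

```python
def downsample_x4(img):
    """Naive x4 downscaling of a single-channel image (nested lists of
    floats) by 4x4 block averaging. The real script likely uses bicubic
    interpolation instead; only the overall HR -> LR step is the same."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - h % 4, 4):
        row = []
        for x in range(0, w - w % 4, 4):
            block = [img[y + dy][x + dx] for dy in range(4) for dx in range(4)]
            row.append(sum(block) / 16.0)
        out.append(row)
    return out
```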
## Training

The pre-trained model is available at the link below.

Run in `./`:

- Using a single GPU:

```
python basicsr/train.py -opt options/train/HiRN/train_HiRN_REDS.yml
```

- Using multiple GPUs (for example, with 4 GPUs):

```
CUDA_VISIBLE_DEVICES=0,1,2,3 ./scripts/dist_train.sh 4 options/train/HiRN/train_HiRN_REDS.yml
```
## Testing

Run in `./`:

- Using a single GPU:

```
python basicsr/test.py -opt options/test/HiRN/test_HiRN_REDS.yml
python basicsr/test.py -opt options/test/HiRN/test_HiRN_Vid4.yml
```

- Using multiple GPUs (for example, with 4 GPUs):

```
CUDA_VISIBLE_DEVICES=0,1,2,3 ./scripts/dist_test.sh 4 options/test/HiRN/test_HiRN_REDS.yml
CUDA_VISIBLE_DEVICES=0,1,2,3 ./scripts/dist_test.sh 4 options/test/HiRN/test_HiRN_Vid4.yml
```
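The reported results are in PSNR and SSIM (e.g., the 3.03 dB PSNR gain over GBR-WNN on REDS4). For readers unfamiliar with the metric, PSNR between a reference frame and a restored frame is computed as below; this is the standard definition, shown here on flat pixel lists rather than the toolbox's actual tensors.

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized frames,
    given as flat lists of pixel values in [0, peak]."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)
```

Note that BasicSR's built-in evaluation (configured in the `.yml` option files) handles details such as border cropping and Y-channel conversion, which this sketch omits.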
## Citation

```
@article{choi2023hirn,
  title={HiRN: Hierarchical recurrent neural network for video super-resolution (VSR) using two-stage feature evolution},
  author={Choi, Young-Ju and Kim, Byung-Gyu},
  journal={Applied Soft Computing},
  pages={110422},
  year={2023},
  publisher={Elsevier}
}
```
## Acknowledgement

The code is heavily based on BasicSR. Thanks for their awesome work.

```
@misc{basicsr,
  author = {Xintao Wang and Liangbin Xie and Ke Yu and Kelvin C.K. Chan and Chen Change Loy and Chao Dong},
  title = {{BasicSR}: Open Source Image and Video Restoration Toolbox},
  howpublished = {\url{https://github.com/XPixelGroup/BasicSR}},
  year = {2022}
}
```