s-LWSR: Super Lightweight Super-Resolution Network

This repository contains the code for the following paper:

Biao Li, Bo Wang, Jiabin Liu, Zhiquan Qi, and Yong Shi, "s-LWSR: Super Lightweight Super-Resolution Network", [arXiv] [Accepted by IEEE Transactions on Image Processing]

The code is built on EDSR (PyTorch) and RCAN (PyTorch), and tested on Ubuntu 18.04 (Python 3.7, PyTorch 1.0) with a Titan Xp GPU.


  1. Introduction
  2. Train
  3. Test
  4. Results
  5. Citation
  6. Acknowledgements


Introduction

Deep learning (DL) architectures for super-resolution (SR) normally contain a tremendous number of parameters, which has been regarded as crucial for obtaining satisfying performance. However, with the widespread use of mobile phones for taking and retouching photos, this characteristic greatly hampers the deployment of DL-SR models on mobile devices. To address this problem, we propose a super lightweight SR network: s-LWSR. Our work makes three main contributions. First, in order to efficiently abstract features from the low-resolution image, we build an information pool that mixes multi-level information from the first half of the pipeline. The information pool then feeds the second half with a combination of hierarchical features from the previous layers. Second, we employ a compression module to further decrease the number of parameters. Intensive analysis confirms its ability to trade off model complexity against accuracy. Third, by revealing the specific role of activation in deep models, we remove several activation layers from our SR model to retain more information and improve performance. Extensive experiments show that our s-LWSR, with limited parameters and operations, achieves performance similar to other, much larger DL-SR methods.
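The "information pool" idea above can be sketched as a small PyTorch module: outputs of several early blocks are collected, concatenated, and compressed back to the base channel width. This is a toy illustration only; the layer counts, widths, and 1x1 fusion here are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class InfoPoolSketch(nn.Module):
    """Toy sketch of an information pool feeding multi-level features forward."""

    def __init__(self, n_feats: int = 32, n_blocks: int = 4):
        super().__init__()
        self.head = nn.Conv2d(3, n_feats, 3, padding=1)
        # First-half blocks whose intermediate outputs feed the pool.
        self.blocks = nn.ModuleList(
            nn.Conv2d(n_feats, n_feats, 3, padding=1) for _ in range(n_blocks)
        )
        # A 1x1 conv compresses the pooled multi-level features back to n_feats.
        self.pool_fuse = nn.Conv2d(n_feats * n_blocks, n_feats, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.head(x)
        pool = []
        for block in self.blocks:
            x = block(x)      # no activation here, echoing the paper's third point
            pool.append(x)    # every level's features go into the pool
        return self.pool_fuse(torch.cat(pool, dim=1))
```

For a 48x48 RGB input, the module returns a 32-channel feature map of the same spatial size, which a second-half pipeline could then consume.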


Train

Prepare training data

Our training setup is similar to RCAN:

  1. Download the DIV2K training data (800 training + 100 validation images) from the DIV2K dataset and place it in the 'DIV2K' folder.

  2. Carefully check the directories of the HR and LR images against the option file. Note that '--ext' is set to 'sep_reset', which first converts the .png files to .npy. Once all the training images (.png) have been converted to .npy files, set '--ext sep' to skip the conversion.

For more information, please refer to EDSR (PyTorch) and RCAN (PyTorch).
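What the 'sep_reset' caching step does can be sketched roughly as follows: each decoded image is saved once as a .npy file, so later epochs load raw arrays instead of re-decoding PNGs. A dummy array stands in for a decoded PNG here; the real loader in the EDSR/RCAN code reads the DIV2K images.

```python
import os
import tempfile
import numpy as np

def cache_image(arr: np.ndarray, stem: str) -> str:
    """Save a decoded image array as a .npy cache file (the 'sep_reset' step)."""
    np.save(stem, arr)            # np.save appends '.npy' to the stem
    return stem + ".npy"

def load_cached(npy_path: str) -> np.ndarray:
    """Load the cached array directly (the fast path used by '--ext sep')."""
    return np.load(npy_path)

# Demo: a dummy 48x48 RGB patch standing in for a decoded PNG.
tmp_dir = tempfile.mkdtemp()
hr = np.random.randint(0, 256, (48, 48, 3), dtype=np.uint8)
npy_path = cache_image(hr, os.path.join(tmp_dir, "0001x2"))
assert np.array_equal(load_cached(npy_path), hr)  # lossless round trip
```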

Begin to train

  1. Cd to 'Train/code' and run the following scripts to train the models.

    You can use the scripts in the 'Train' folder to train the models as in the paper. If you want to know more about our model settings, check the model folder.

    BI, scale 2, 3, 4, 8
    #s-LWSR_BIX2_P48, input=48x48, output=96x96
    CUDA_VISIBLE_DEVICES=0 python main.py --model LWSR --save s-LWSR_BIX2_P48 --scale 2 --n_feats 32 --reset --chop --save_results --print_model --patch_size 96 2>&1 | tee $LOG
    #s-LWSR_BIX3_P48, input=48x48, output=144x144
    CUDA_VISIBLE_DEVICES=0 python main.py --model LWSR --save s-LWSR_BIX3_P48 --scale 3 --n_feats 32 --reset --chop --save_results --print_model --patch_size 144 2>&1 | tee $LOG
    #s-LWSR_BIX4_P48, input=48x48, output=192x192
    CUDA_VISIBLE_DEVICES=0 python main.py --model LWSR --save s-LWSR_BIX4_P48 --scale 4 --n_feats 32 --reset --chop --save_results --print_model --patch_size 192 2>&1 | tee $LOG
    #s-LWSR_BIX8_P48, input=48x48, output=384x384
    CUDA_VISIBLE_DEVICES=0 python main.py --model LWSR --save s-LWSR_BIX8_P48 --scale 8 --n_feats 32 --reset --chop --save_results --print_model --patch_size 384 2>&1 | tee $LOG
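As the comments in the scripts above indicate, '--patch_size' specifies the HR (output) patch side, so the LR input patch is patch_size // scale, which is 48x48 in every configuration. A quick sanity check:

```python
# scale -> --patch_size value used in the training scripts above
configs = {2: 96, 3: 144, 4: 192, 8: 384}
for scale, patch_size in configs.items():
    # HR patch divided by scale gives the LR input patch side
    assert patch_size // scale == 48
```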


Test

Quick start

  1. Download our pre-trained models s-LWSR (PyTorch) and place them in '/Test/model'. Please make sure that the code and its corresponding pre-trained model are consistent, because our files contain several different settings.

    We have only trained our model on the X4 task; more information will be released soon.

  2. Cd to '/Test/code' and run the following scripts.

    You can use the scripts in the 'Test' folder to reproduce the results in our paper.

    # BI degradation model, X2, X3, X4, X8
    # No self-ensemble: LWSR
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 2 --model LWSR --n_feats 32 --pre_train ../model/ --test_only --save_results --chop --save 'LWSR' --testpath /home/li/桌面/s-LWSR/Test/LR/LRBI --testset Set5
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 3 --model LWSR --n_feats 32 --pre_train ../model/ --test_only --save_results --chop --save 'LWSR' --testpath /home/li/桌面/s-LWSR/Test/LR/LRBI --testset Set5
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 4 --model LWSR --n_feats 32 --pre_train ../model/ --test_only --save_results --chop --save 'LWSR' --testpath /home/li/桌面/s-LWSR/Test/LR/LRBI --testset Set5
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 8 --model LWSR --n_feats 32 --pre_train ../model/ --test_only --save_results --chop --save 'LWSR' --testpath /home/li/桌面/s-LWSR/Test/LR/LRBI --testset Set5
    # With self-ensemble: LWSRplus
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 2 --model LWSR --n_feats 32 --pre_train ../model/ --test_only --save_results --chop --self_ensemble --save 'LWSRplus' --testpath /home/li/桌面/s-LWSR/Test/LR/LRBI --testset Set5
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 3 --model LWSR --n_feats 32 --pre_train ../model/ --test_only --save_results --chop --self_ensemble --save 'LWSRplus' --testpath /home/li/桌面/s-LWSR/Test/LR/LRBI --testset Set5
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 4 --model LWSR --n_feats 32 --pre_train ../model/ --test_only --save_results --chop --self_ensemble --save 'LWSRplus' --testpath /home/li/桌面/s-LWSR/Test/LR/LRBI --testset Set5
    CUDA_VISIBLE_DEVICES=0 python main.py --data_test MyImage --scale 8 --model LWSR --n_feats 32 --pre_train ../model/ --test_only --save_results --chop --self_ensemble --save 'LWSRplus' --testpath /home/li/桌面/s-LWSR/Test/LR/LRBI --testset Set5
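The '--self_ensemble' flag behind the 'LWSRplus' results applies the usual geometric self-ensemble: the model is run on the 8 flip/rotation variants of the input and the un-transformed outputs are averaged. A minimal numpy sketch of the idea (the real implementation lives in the EDSR/RCAN code):

```python
import numpy as np

def self_ensemble(model, lr: np.ndarray) -> np.ndarray:
    """Average the model's outputs over the 8 flip/rotation variants.

    lr: HxWxC array; model: a callable mapping an HxWxC array to an
    upscaled (s*H)x(s*W)xC array.
    """
    outputs = []
    for rot in range(4):               # 0, 90, 180, 270 degree rotations
        for flip in (False, True):     # with/without horizontal flip
            x = np.rot90(lr, rot)
            if flip:
                x = np.fliplr(x)
            y = model(x.copy())
            if flip:                   # undo the transforms on the output
                y = np.fliplr(y)
            y = np.rot90(y, -rot)
            outputs.append(y)
    return np.mean(outputs, axis=0)
```

With a model that commutes with flips and rotations (e.g. nearest-neighbor upscaling), the ensemble output equals the plain output; for a learned SR network the averaging typically gives a small PSNR gain at 8x the inference cost.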

The whole test pipeline

  1. Prepare test data.

    Download the standard test sets (our model uses Set5, Set14, BSD100, and Urban100; all test sets are available from GoogleDrive or Baidu) and place them in 'OriginalTestData'.

  2. Conduct image SR.

    See Quick start

  3. Evaluate the PSNR and SSIM of the results.

    Run 'Evaluate_PSNR_SSIM.m' to obtain the PSNR/SSIM values reported in the paper.
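For readers without MATLAB, the PSNR half of that script can be approximated in Python using the common SR evaluation convention: compare on the Y channel and shave a border of 'scale' pixels. The exact settings of 'Evaluate_PSNR_SSIM.m' are an assumption here.

```python
import numpy as np

def rgb_to_y(img: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 luma (Y) from an RGB image with values in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr(sr: np.ndarray, hr: np.ndarray, scale: int) -> float:
    """PSNR on the Y channel with a 'scale'-pixel border shaved off."""
    y_sr = rgb_to_y(sr.astype(np.float64))[scale:-scale, scale:-scale]
    y_hr = rgb_to_y(hr.astype(np.float64))[scale:-scale, scale:-scale]
    mse = np.mean((y_sr - y_hr) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

Note that MATLAB and Python PSNR values can still differ slightly because of rounding and color-conversion details, so the MATLAB script remains the reference for the numbers in the paper.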


Results

Quantitative Results

[Figure] Quantitative comparison results. All images are chosen from the four test datasets mentioned above.

Model structure

[Figure] The structure of our proposed s-LWSR.


Citation

If you find the code helpful in your research or work, please cite the following papers.

@InProceedings{Lim_2017_CVPR_Workshops,
  author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
  title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {July},
  year = {2017}
}

@inproceedings{zhang2018rcan,
  title = {Image Super-Resolution Using Very Deep Residual Channel Attention Networks},
  author = {Zhang, Yulun and Li, Kunpeng and Li, Kai and Wang, Lichen and Zhong, Bineng and Fu, Yun},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  year = {2018}
}

@article{li2019slwsr,
  title = {s-LWSR: Super Lightweight Super-Resolution Network},
  author = {Li, Biao and Liu, Jiabin and Wang, Bo and Qi, Zhiquan and Shi, Yong},
  journal = {arXiv preprint arXiv:1909.10774},
  year = {2019}
}

Acknowledgements

This code is built on EDSR (PyTorch) and RCAN (PyTorch). We greatly thank the authors for sharing their code for the EDSR Torch version and RCAN (PyTorch).

