
Lightweight Feature Fusion Network for Single Image Super-Resolution

This repository contains the TensorFlow code for our proposed LFFN. (However, the data loading part is based on PyTorch.)

The code is based on DCSCN and BasicSR, and was tested on Ubuntu 16.04 (Python 3.6, TensorFlow 1.4, PyTorch 0.4.0, CUDA 8.0) with a 1080Ti GPU.
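
To double-check that your environment matches the versions above before running anything, a minimal version-check sketch such as the following can help (not part of the repository; it only assumes the standard tensorflow and torch packages are installed):

    # Minimal environment check (illustrative sketch, not part of the original code).
    import sys
    import tensorflow as tf
    import torch

    print("Python    :", sys.version.split()[0])      # expected 3.6.x
    print("TensorFlow:", tf.__version__)              # expected 1.4.x
    print("PyTorch   :", torch.__version__)           # expected 0.4.0
    print("GPU found :", tf.test.is_gpu_available())  # should be True on the 1080Ti setup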

The architecture of our proposed lightweight feature fusion network (LFFN). The details about our proposed LFFN can be found in our main paper.
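
For readers who want a concrete picture of what a "feature fusion" step looks like in code, below is a purely illustrative TensorFlow 1.x sketch of a generic concatenate-and-1x1-convolution fusion. It is not the LFFN module itself; the actual block design is specified in the paper.

    # Illustrative only: a generic concat + 1x1-conv feature fusion step in
    # TensorFlow 1.x. This is NOT the LFFN module from the paper; it merely
    # shows the general idea of merging several feature maps.
    import tensorflow as tf

    def fuse_features(feature_maps, out_channels, name="fusion"):
        """Fuse a list of same-resolution feature maps into `out_channels` channels."""
        with tf.variable_scope(name):
            x = tf.concat(feature_maps, axis=-1)          # stack along the channel axis
            x = tf.layers.conv2d(x, out_channels, 1,      # 1x1 conv mixes the channels
                                 padding="same", activation=tf.nn.relu)
        return x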

If you find our work useful in your research or publications, please consider citing:

@article{yang2019lightweight,
  title={Lightweight Feature Fusion Network for Single Image Super-Resolution},
  author={Yang, Wenming and Wang, Wei and Zhang, Xuechen and Sun, Shuifa and Liao, Qingmin},
  journal={IEEE Signal Processing Letters},
  volume={26},
  number={4},
  pages={538--542},
  year={2019},
  publisher={IEEE}
}

@article{yang2019lightweight,
  title={Lightweight Feature Fusion Network for Single Image Super-Resolution},
  author={Yang, Wenming and Wang, Wei and Zhang, Xuechen and Sun, Shuifa and Liao, Qingmin},
  journal={arXiv preprint arXiv:1902.05694},
  year={2019}
}

Contents

  1. Test
  2. Results
  3. Acknowledgements

Test

  1. Clone this repository:

    git clone https://github.com/qibao77/LFFN-master.git
  2. Download our trained models from BaiduYun (code: en3h) and place them in ./models. We provide three small models (LFFN_x2_B4M4_depth_div2k/LFFN_x3_B4M4_depth_div2k/LFFN_x4_B4M4_depth_div2k) and the corresponding results in this repository.

  3. Place the SR benchmark datasets (Set5, BSD100, Urban100 and Manga109) or other test images in ./data/test_data/*.

  4. Edit ./helper/args.py to match your own setup if needed.

  5. Then run the following command for evaluation:

    python evaluate.py
  6. Finally, the SR results and PSNR/SSIM values for the test data are saved to ./models/*. (The PSNR/SSIM values in our paper were obtained using MATLAB; a rough Python equivalent is sketched below this list.)
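
For reference, the following sketch approximates the common SR evaluation protocol (PSNR on the Y channel with a scale-sized border crop) in plain NumPy. It is only an illustration under those assumptions, not the MATLAB evaluation behind the paper's numbers, so the resulting values may differ slightly.

    # A rough Python approximation of the usual SR evaluation protocol (NOT the
    # MATLAB code used for the numbers in the paper, so results may differ slightly).
    # Assumes 8-bit RGB images of identical size; PSNR is computed on the Y channel
    # after cropping `scale` pixels from each border, as is common practice.
    import numpy as np

    def rgb_to_y(img):
        """ITU-R BT.601 luma for an 8-bit RGB image (values in [16, 235])."""
        img = img.astype(np.float64)
        return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]) / 255.0

    def psnr_y(sr, hr, scale):
        """PSNR on the Y channel with a `scale`-pixel border crop."""
        sr_y = rgb_to_y(sr)[scale:-scale, scale:-scale]
        hr_y = rgb_to_y(hr)[scale:-scale, scale:-scale]
        mse = np.mean((sr_y - hr_y) ** 2)
        return 10.0 * np.log10(255.0 ** 2 / mse)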

Results

Quantitative Results

Benchmark SISR results: average PSNR/SSIM for scale factors ×2, ×3 and ×4 on the Set5, Manga109, BSD100 and Urban100 datasets.

Visual Results

Visual comparison for ×3 SR on “img013”, “img062” and “img085” from the Urban100 dataset.

Acknowledgements

  • Thanks to DCSCN, from which our code structure is derived.
  • Thanks to BasicSR, which provides many useful codes that facilitated our work.
