
Fine-grained Attention and Feature-sharing Generative Adversarial Networks for Single Image Super-Resolution

This paper has been accepted for publication in IEEE Transactions on Multimedia.

Contents

  1. Proposed Methods
  2. Results and code
  3. Experiment comparisons

1. Proposed Methods

In this paper, we propose two novel GAN-based techniques for producing photo-realistic images in single image super-resolution.

FASRGAN

Instead of producing a single score to discriminate between real and fake images, we propose a variant, the Fine-grained Attention Generative Adversarial Network for image super-resolution (FASRGAN), whose discriminator classifies each pixel as real or fake.
[Figure: the U-Net-based fine-grained attention discriminator]
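
The following is a minimal PyTorch sketch of this idea, not the paper's architecture: a small U-Net-style discriminator whose output is a score map with one entry per pixel. All layer sizes and names (e.g. `PixelDiscriminator`) are illustrative placeholders; the actual model definitions live in /codes/models.

```python
# Minimal sketch (not the released model): a U-Net-style discriminator
# that returns one real/fake score per pixel instead of a single scalar.
import torch
import torch.nn as nn

class PixelDiscriminator(nn.Module):
    def __init__(self, in_ch=3, nf=64):
        super().__init__()
        # Encoder: extract features, then downsample by 2.
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, nf, 3, 1, 1),
                                  nn.LeakyReLU(0.2, inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(nf, nf * 2, 4, 2, 1),
                                  nn.LeakyReLU(0.2, inplace=True))
        # Decoder: upsample back to the input resolution.
        self.dec = nn.Sequential(nn.ConvTranspose2d(nf * 2, nf, 4, 2, 1),
                                 nn.LeakyReLU(0.2, inplace=True))
        # One output channel: a (B, 1, H, W) map of per-pixel logits.
        self.out = nn.Conv2d(nf * 2, 1, 3, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                       # (B, nf, H, W)
        e2 = self.enc2(e1)                      # (B, 2*nf, H/2, W/2)
        d = self.dec(e2)                        # (B, nf, H, W)
        # U-Net skip connection before the per-pixel prediction.
        return self.out(torch.cat([e1, d], 1))  # (B, 1, H, W)

score_map = PixelDiscriminator()(torch.randn(2, 3, 64, 64))
print(score_map.shape)  # torch.Size([2, 1, 64, 64])
```

The per-pixel score map lets the adversarial loss be applied at every spatial location, so the generator receives fine-grained feedback about which regions look fake rather than a single image-level signal.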

Fs-SRGAN

Instead of using two entirely separate networks for the generator and the discriminator in the SR problem, we propose a feature-sharing network (Fs-SRGAN), in which the generator and the discriminator share a low-level feature extractor.

[Figure: the shared feature extractor]
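
As a rough sketch of the sharing scheme (again illustrative, with made-up layer sizes rather than the released definitions), the same extractor instance can be passed to both networks so its weights are updated by both:

```python
# Minimal sketch (not the released model): one extractor instance whose
# weights are shared by the generator and the discriminator.
import torch
import torch.nn as nn

class SharedExtractor(nn.Module):
    """Low-level feature extractor reused by both networks."""
    def __init__(self, in_ch=3, nf=64):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(in_ch, nf, 3, 1, 1),
                                  nn.LeakyReLU(0.2, inplace=True))

    def forward(self, x):
        return self.body(x)

class Generator(nn.Module):
    def __init__(self, extractor, nf=64, scale=4):
        super().__init__()
        self.extractor = extractor  # shared instance, not a copy
        # Sub-pixel upsampling from LR features to the SR image.
        self.up = nn.Sequential(nn.Conv2d(nf, 3 * scale ** 2, 3, 1, 1),
                                nn.PixelShuffle(scale))

    def forward(self, lr):
        return self.up(self.extractor(lr))

class Discriminator(nn.Module):
    def __init__(self, extractor, nf=64):
        super().__init__()
        self.extractor = extractor  # the same instance as the generator's
        self.head = nn.Sequential(nn.Conv2d(nf, nf, 4, 2, 1),
                                  nn.LeakyReLU(0.2, inplace=True),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(nf, 1))

    def forward(self, img):
        return self.head(self.extractor(img))

shared = SharedExtractor()
G, D = Generator(shared), Discriminator(shared)
sr = G(torch.randn(1, 3, 32, 32))  # (1, 3, 128, 128)
print(D(sr).shape)                 # (1, 1)
```

Because the extractor is fully convolutional, the same module can consume both low-resolution inputs (in the generator) and full-resolution images (in the discriminator).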

2. Results and code

How to evaluate the results

We evaluate our methods on several benchmark datasets in terms of PSNR, SSIM, PI, and LPIPS: PSNR and SSIM measure the reconstruction accuracy of SR images, while PI and LPIPS measure perceptual quality. The Perceptual Index (PI) was used in the PIRM Challenge on Perceptual Super-Resolution, and the Learned Perceptual Image Patch Similarity (LPIPS) metric was proposed in "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric" to measure the distance between image patches. For both PI and LPIPS, lower values indicate better perceptual quality.
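
For reference, here is a minimal sketch of computing PSNR, SSIM, and LPIPS for one SR/ground-truth pair, assuming scikit-image >= 0.19, OpenCV, and the lpips package; the file names are placeholders, and PI is omitted because it is normally computed with the official PIRM MATLAB code. Note that SR papers often compute PSNR/SSIM on the Y channel after border cropping, a detail this sketch skips.

```python
# Minimal sketch: PSNR/SSIM via scikit-image, LPIPS via the `lpips` package.
# 'sr.png' / 'gt.png' are placeholder paths for one aligned image pair.
import cv2
import lpips
import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

sr = cv2.imread('sr.png')  # super-resolved image, BGR uint8
gt = cv2.imread('gt.png')  # ground truth, same spatial size

psnr = peak_signal_noise_ratio(gt, sr, data_range=255)
ssim = structural_similarity(gt, sr, channel_axis=2, data_range=255)

def to_lpips_tensor(im):
    """BGR uint8 (H, W, 3) -> RGB float tensor (1, 3, H, W) in [-1, 1]."""
    rgb = torch.from_numpy(im[:, :, ::-1].copy()).permute(2, 0, 1).float()
    return rgb.div(127.5).sub(1.0).unsqueeze(0)

loss_fn = lpips.LPIPS(net='alex')  # AlexNet backbone, as in the LPIPS paper
with torch.no_grad():
    dist = loss_fn(to_lpips_tensor(sr), to_lpips_tensor(gt)).item()

print(f'PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}  LPIPS: {dist:.4f}')
```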

You can use the scripts in Test_scripts to calculate PSNR/SSIM/PI/LPIPS:

  1. Download the results of our methods from Baidu Netdisk (code: 6q7p) or Google Drive.
  2. Move the downloaded results into the Test_scripts/Results folder.
  3. Run the scripts in Test_scripts to compute the metrics.

Source code and models

The model code is defined in /codes/models.

Pre-trained models can be downloaded from Baidu Netdisk (code: 723l) or Google Drive.

How to train

```
python train.py -opt /options/train/train_FASRGAN.json
python train.py -opt /options/train/train_FsSRGAN.json
```

How to test

```
python test.py -opt /options/test/test_FASRGAN.json
python test.py -opt /options/test/test_FsSRGAN.json
```

3. Experiment comparisons

Quantitative results with the bicubic degradation model. The best and second-best results are highlighted and underlined, respectively.


The trade-off between RMSE and LPIPS on Urban100 for our methods and the state-of-the-art methods at $4\times$ super-resolution.


Qualitative comparisons of FASRGAN with state-of-the-art methods on benchmark datasets.


Qualitative comparisons of Fs-SRGAN with state-of-the-art methods on benchmark datasets.


Object recognition performance with ResNet-50 on images super-resolved by our methods and state-of-the-art methods.


Citation

If you find this repository useful for your research, please cite:

```
@ARTICLE{9377002,
  author={Y. {Yan} and C. {Liu} and C. {Chen} and X. {Sun} and L. {Jin} and P. {Xinyi} and X. {Zhou}},
  journal={IEEE Transactions on Multimedia},
  title={Fine-grained Attention and Feature-sharing Generative Adversarial Networks for Single Image Super-Resolution},
  year={2021},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TMM.2021.3065731}
}
```

Acknowledgements

This repository is built on top of the BasicSR repository.
