TSAN: A Two-Stage Attentive Network for Single Image Super-Resolution

PyTorch code for the paper 'A Two-Stage Attentive Network for Single Image Super-Resolution'.

The code is built on EDSR (PyTorch) and tested in an Ubuntu 16.04 environment with 1080Ti/Titan Xp GPUs.

The paper can be downloaded from TSAN.

Contents

  1. Introduction
  2. Prerequisites
  3. Train
  4. Test
  5. Performance
  6. Citing
  7. To-do-list

Introduction

Recently, deep convolutional neural networks (CNNs) have been widely explored for single image super-resolution (SISR) and have contributed remarkable progress. However, most existing CNN-based SISR methods do not adequately explore contextual information in the feature-extraction stage and pay little attention to the final high-resolution (HR) image reconstruction step, which hinders the desired SR performance. To address these two issues, in this paper we propose a two-stage attentive network (TSAN) for accurate SISR in a coarse-to-fine manner. Specifically, a novel dilated residual block (DRB) is developed as a fundamental unit to extract contextual features efficiently. Based on the DRB, we further design a multi-context attentive block (MCAB) to make the network focus on more informative contextual features. Moreover, we present an essential refined attention block (RAB) that explores useful cues in HR space for reconstructing a fine-detailed HR image. Extensive evaluations on four benchmark datasets demonstrate the efficacy of the proposed TSAN in terms of quantitative metrics and visual effects.

Prerequisites

  1. Python 3.6
  2. PyTorch >= 0.4.0
  3. numpy
  4. skimage
  5. imageio
  6. matplotlib
  7. tqdm
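
The Python dependencies above can be installed with pip. A minimal sketch (versions are not pinned by this repository, so adjust as needed; note that skimage is distributed on PyPI as scikit-image):

```bash
# Install PyTorch first, choosing the build that matches your CUDA setup (see pytorch.org).
pip install torch

# Install the remaining dependencies listed above.
pip install numpy scikit-image imageio matplotlib tqdm
```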

Train

Prepare training data

We use the DIV2K dataset (images 1-800) to train our model. Please download it from here or from SNU_CVLab, then extract the files into Train/dataset.

Training

Use the --ext sep_reset argument on your first run.

On subsequent runs, you can skip the decoding step and use the saved binaries with the --ext sep argument.

If you have enough memory, use --ext bin.
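
A minimal training sketch, assuming the EDSR-style entry point this code is built on (the script name, model flag, and scale below are illustrative assumptions; see demo.sh for the exact commands):

```bash
# First run: decode the DIV2K images (expected under Train/dataset) and cache them as binaries.
python main.py --model TSAN --scale 2 --ext sep_reset

# Subsequent runs: reuse the cached binaries.
python main.py --model TSAN --scale 2 --ext sep

# If you have enough memory, load the dataset as a single binary.
python main.py --model TSAN --scale 2 --ext bin
```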

Test

The test datasets can be downloaded from here.

To test with the pre-trained models, all test datasets must first be preprocessed with Test/Prepare_TestData_HR_LR.m, and all pre-trained models should be placed in Test/model/.

You can apply the self-ensemble strategy to improve performance by adding --self_ensemble.
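
An illustrative test command in the same spirit; apart from --self_ensemble, the script name, flags, and model filename below are assumptions based on the EDSR (PyTorch) codebase, so consult demo.sh for the exact invocation:

```bash
# Evaluate a pre-trained x2 model (hypothetical filename) on the prepared benchmark sets
# with self-ensemble enabled; drop --self_ensemble for the plain results.
python main.py --model TSAN --scale 2 --pre_train Test/model/TSAN_x2.pt --test_only --self_ensemble
```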

More running instructions can be found in demo.sh.

Performance

Result figures for x2, x3, and x4 super-resolution are provided in the repository.

This implementation is for non-commercial research use only.

Citing

If you publish a paper where this work helped your research, please cite the following paper in your publications.

@article{zhang2021tsan,
 title={A Two-Stage Attentive Network for Single Image Super-Resolution},
 author={Zhang, Jiqing and Long, Chengjiang and Wang, Yuxin and Piao, Haiyin and Mei, Haiyang and Yang, Xin and Yin, Baocai},
 journal={IEEE Transactions on Circuits and Systems for Video Technology},
 year={2021},
 publisher={IEEE}
}
