PyTorch implementation of Frequency-based Enhancement Network for Efficient Super-Resolution (IEEE Access, 2022).
FENet-PyTorch

This repository is the official PyTorch implementation of the paper "Frequency-based Enhancement Network for Efficient Super-Resolution".

IEEE Access, 2022. [Paper]

Requirements

Contents

  1. Introduction
  2. Dataset
  3. Testing
  4. Training
  5. Results
  6. Citation

Introduction

Recently, deep convolutional neural networks (CNNs) have provided outstanding performance in single image super-resolution (SISR). Despite their remarkable performance, the lack of high-frequency information in the recovered images remains a core problem. Moreover, as the networks increase in depth and width, deep CNN-based SR methods face the challenge of computational complexity in practice. A promising and under-explored solution is to adapt the amount of compute to the different frequency bands of the input. To this end, we present a novel Frequency-based Enhancement Block (FEB) which explicitly enhances the information of high frequencies while forwarding low frequencies to the output. In particular, this block efficiently decomposes features into low- and high-frequency components and assigns more computation to the high-frequency ones. Thus, it can help the network generate more discriminative representations by explicitly recovering finer details. Our FEB design is simple and generic and can be used as a direct replacement for commonly used SR blocks with no need to change network architectures. We experimentally show that when replacing SR blocks with FEB, we consistently improve the reconstruction error while reducing the number of parameters in the model. Moreover, we propose a lightweight SR model --- Frequency-based Enhancement Network (FENet) --- based on FEB that matches the performance of larger models. Extensive experiments demonstrate that our proposal performs favorably against state-of-the-art SR algorithms in terms of visual quality, memory footprint, and inference time.
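The core decomposition idea can be illustrated with a minimal NumPy sketch: a cheap low-pass filter (here, average pooling followed by nearest-neighbour upsampling) extracts the low-frequency part, and the residual is the high-frequency part that FEB spends most of its computation on. This is a conceptual stand-in for illustration only, not the authors' exact FEB operator; see the paper and the model code for the real design.

```python
import numpy as np

def decompose(x, k=2):
    """Split a 2-D feature map into low- and high-frequency parts.

    k x k average pooling acts as a cheap low-pass filter; the residual
    (x - low) carries the high frequencies. In FEB, the expensive
    processing would be applied to the high-frequency part, while the
    low-frequency part is forwarded with little computation.
    """
    h, w = x.shape
    # Average-pool over non-overlapping k x k windows (crop to a multiple of k).
    pooled = x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    # Nearest-neighbour upsample back to (roughly) the input size.
    low = np.kron(pooled, np.ones((k, k)))
    # Edge-pad if the input size was not divisible by k.
    low = np.pad(low, ((0, h - low.shape[0]), (0, w - low.shape[1])), mode="edge")
    high = x - low  # residual = high-frequency detail
    return low, high
```

By construction, `low + high` reconstructs the input exactly, so forwarding `low` unchanged loses nothing while the network refines `high`.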

Dataset

We use the DIV2K dataset for training and the Set5, Set14, B100, and Urban100 datasets for benchmark testing. Follow these steps to prepare the datasets.

  1. Download DIV2K and unzip it into the dataset directory as below:
dataset
└── DIV2K
    ├── DIV2K_train_HR
    ├── DIV2K_train_LR_bicubic
    ├── DIV2K_valid_HR
    └── DIV2K_valid_LR_bicubic
  2. To accelerate training, we first convert the training images to h5 format as follows (the h5py module must be installed):
$ python div2h5.py
  3. Other benchmark datasets can be downloaded from Google Drive. As with DIV2K, place all the datasets in the dataset directory.
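Before running the conversion, it can be useful to verify that the directory layout above is in place. A small stdlib-only helper (my own addition, not part of this repo) that reports any missing DIV2K sub-directories:

```python
from pathlib import Path

# Expected sub-directories under dataset/DIV2K, matching the tree above.
EXPECTED_DIRS = [
    "DIV2K_train_HR",
    "DIV2K_train_LR_bicubic",
    "DIV2K_valid_HR",
    "DIV2K_valid_LR_bicubic",
]

def missing_div2k_dirs(root="dataset/DIV2K"):
    """Return the expected DIV2K sub-directories that are absent under root."""
    root = Path(root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]
```

If the returned list is empty, the layout matches what div2h5.py expects.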

Testing

We provide pretrained models in the checkpoint directory. All visual results of FENet for scale factors x2, x3, and x4 (BI) can be downloaded here. To test FENet on the benchmark datasets:

# Scale factor x2
$ python sample.py --test_data_dir dataset/<dataset> --scale 2 --ckpt_path ./checkpoint/<path>.pth --sample_dir <sample_dir>

# Scale factor x3
$ python sample.py --test_data_dir dataset/<dataset> --scale 3 --ckpt_path ./checkpoint/<path>.pth --sample_dir <sample_dir>

# Scale factor x4
$ python sample.py --test_data_dir dataset/<dataset> --scale 4 --ckpt_path ./checkpoint/<path>.pth --sample_dir <sample_dir>
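The flags in the commands above could be handled with an argparse setup along these lines. The flag names are taken from the commands; the types, defaults, and file names in the usage example are assumptions for illustration, so check sample.py for the actual definitions.

```python
import argparse

def build_parser():
    # Mirrors the flags shown in the sample.py commands above.
    # required/choices/defaults here are illustrative assumptions.
    p = argparse.ArgumentParser(description="FENet inference (sketch)")
    p.add_argument("--test_data_dir", type=str, required=True,
                   help="Benchmark dataset directory, e.g. dataset/Set5")
    p.add_argument("--scale", type=int, choices=[2, 3, 4], default=2,
                   help="Upscaling factor")
    p.add_argument("--ckpt_path", type=str, required=True,
                   help="Path to a pretrained checkpoint (.pth)")
    p.add_argument("--sample_dir", type=str, default="samples",
                   help="Where to write the super-resolved outputs")
    return p

# Hypothetical invocation mirroring the x4 command above.
args = build_parser().parse_args(
    ["--test_data_dir", "dataset/Set5", "--scale", "4",
     "--ckpt_path", "./checkpoint/fenet_x4.pth", "--sample_dir", "results"]
)
```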

Training

Here are our settings for training FENet. Note: we use two GPUs to accommodate a large batch size; if an OOM error arises, please reduce the batch size.

# Scale factor x2
$ python train.py --patch_size 64 --batch_size 64 --max_steps 600000 --lr 0.001 --decay 150000 --scale 2  

# Scale factor x3
$ python train.py --patch_size 64 --batch_size 64 --max_steps 600000 --lr 0.001 --decay 150000 --scale 3  

# Scale factor x4
$ python train.py --patch_size 64 --batch_size 64 --max_steps 600000 --lr 0.001 --decay 150000 --scale 4                 
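The --lr and --decay flags above suggest a step-decay schedule. A common convention in SR codebases of this kind is to halve the learning rate every decay steps; that halving factor is an assumption on my part, so consult train.py for the exact rule used here.

```python
def learning_rate(step, base_lr=1e-3, decay=150_000):
    """Step-decay schedule implied by --lr 0.001 --decay 150000.

    Halving every `decay` steps is an assumed convention, not confirmed
    from this repository's train.py.
    """
    return base_lr * (0.5 ** (step // decay))
```

Under this assumption, a 600k-step run would see the rate halved four times, ending at 6.25e-5.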

Results

We achieve state-of-the-art performance on lightweight image SR, denoising, and deblurring. All visual results (BI, BD, and DN) of FENet can be downloaded here.

Lightweight Single Image Super-Resolution

Image denoising and deblurring

Citation

@article{behjati2022frequency,
  title={Frequency-Based Enhancement Network for Efficient Super-Resolution},
  author={Behjati, Parichehr and Rodriguez, Pau and Tena, Carles Fern{\'a}ndez and Mehri, Armin and Roca, F Xavier and Ozawa, Seiichi and Gonz{\`a}lez, Jordi},
  journal={IEEE Access},
  volume={10},
  pages={57383--57397},
  year={2022},
  publisher={IEEE}
}

Please also see our other works:

  • Single Image Super-Resolution Based on Directional Variance Attention Network - Pattern Recognition, 2022. [Paper] [Code]

  • OverNet: Lightweight Multi-Scale Super-Resolution with Overscaling Network - WACV, 2022. [Paper] [Code]

  • Hierarchical Residual Attention Network for Single Image Super-Resolution [arXiv]