MFRAN-PyTorch

Image super-resolution with multi-scale fractal residual attention network (https://github.com/vanbou/MFRAN), Xiaogang Song, Wanbo Liu, Li Liang, Weiwei Shi, Guo Xie, Xiaofeng Lu, Xinhong Hei

Graphical abstract

Introduction

  • src/data contains the dataset processing code

  • src/loss stores the loss functions

  • src/model stores the proposed model and the tool classes for counting model parameters

  • src/main.py is the main entry point

  • src/option.py sets the training/testing parameters

  • src/template.py provides training/testing templates

  • src/trainer.py is the training code

  • src/utility.py is a utility class

  • src/videotester.py is used to process video

The scripts in test/ compute PSNR and SSIM for SR images and can also be used for dataset generation; they run in MATLAB.

Dependencies

  • Python 3.6

  • PyTorch >= 1.7

  • numpy

  • skimage

  • imageio

  • matplotlib

  • tqdm

  • cv2

  • torchstat (model params statistics)
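
The PyPI package names differ slightly from the import names above; a one-line install for the non-PyTorch dependencies might look like this (assuming skimage is provided by scikit-image and cv2 by opencv-python):

pip install numpy scikit-image imageio matplotlib tqdm opencv-python torchstat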

Preparation

Clone this repository into any place you want.

git clone https://github.com/vanbou/MFRAN

You can evaluate your models with widely used benchmark datasets.

For these datasets, we first convert the result images to the YCbCr color space and evaluate PSNR on the Y channel only. You can download the benchmark datasets (250 MB) from Baidu Netdisk: https://pan.baidu.com/s/1iX46n5fdNix3J0ANN0FItg (extract code: 49mx) or from Google Drive: https://drive.google.com/file/d/1-A7mMAr9chY3-aM9UlwHKszhtCtWmeqM/view?usp=sharing
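
As an illustration of this protocol, a minimal Python sketch of Y-channel PSNR using skimage might look like the following (this is not the repository's evaluation code, and 'sr.png'/'hr.png' are placeholder paths):

import imageio
from skimage.color import rgb2ycbcr
from skimage.metrics import peak_signal_noise_ratio

# Placeholder paths: substitute an SR result and its ground-truth HR image.
sr = imageio.imread('sr.png')
hr = imageio.imread('hr.png')

# Convert RGB to YCbCr and keep only the luminance (Y) channel.
sr_y = rgb2ycbcr(sr)[..., 0]
hr_y = rgb2ycbcr(hr)[..., 0]

# rgb2ycbcr returns Y on a [16, 235] scale, so data_range=255 matches 8-bit images.
# Benchmark protocols often also crop a border of `scale` pixels; omitted here for brevity.
print('PSNR (Y):', peak_signal_noise_ratio(hr_y, sr_y, data_range=255))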

Set --dir_demo <where_benchmark_folder_located> to evaluate MFRAN on the benchmarks.

We used the DIV2K dataset to train our model. Please download it (7.1 GB).

Unpack the tar file to any place you want. Then, change the dir_data argument in src/option.py to the place where DIV2K images are located.

We recommend pre-processing the images before training. This step decodes all PNG files and saves them as binaries. Use the --ext sep_reset argument on your first run. You can skip the decoding step and use the saved binaries with the --ext sep argument.

If you have enough RAM (>= 32 GB), you can use the --ext bin argument to pack all DIV2K images into one binary file.
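
For example, the first and subsequent training runs might look like this (a sketch; combine with the full training options shown under How To Train below):

python main.py --model MFRAN --scale 2 --ext sep_reset   # first run: decode PNGs and cache binaries
python main.py --model MFRAN --scale 2 --ext sep         # later runs: reuse the cached binaries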

After downloading the training and test sets, set --dir_data in option.py to the absolute path of DIV2K.

Meanwhile, set --dir_demo in option.py to the absolute path of the benchmark folder.

How To Train

cd src   

Run the command:

python main.py --scale 2 --save MFRAN_x2 --model MFRAN --epoch 1000 --batch_size 16 --patch_size 96

For different scale factors, change --scale (2, 3, 4) and --patch_size (96, 144, 192) accordingly; patch_size = 48 * (scale factor).
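
For instance, the x3 and x4 equivalents of the command above would be:

python main.py --scale 3 --save MFRAN_x3 --model MFRAN --epoch 1000 --batch_size 16 --patch_size 144
python main.py --scale 4 --save MFRAN_x4 --model MFRAN --epoch 1000 --batch_size 16 --patch_size 192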


How To Test

You can download the pretrained models from

Baidu Netdisk: https://pan.baidu.com/s/1AdqwG6y1CCi0rJk8_zjrTQ (extract code: mjnp) or Google Drive: https://drive.google.com/file/d/1-CcEonXdh5-c3M5JYcVJsNwJilRCKM-B/view?usp=sharing

You can test our super-resolution algorithm with your own images. Place your images in the test folder (like test/<your_image>). We support PNG and JPEG files.

cd src    
python main.py --template MFRAN_test --data_test Set5+Set14+B100+Urban100+Manga109 --save MFRAN_x2_result --pre_train weight/MFRAN-2x.pt

or

python main.py --data_test Set5+Set14+B100+Urban100+Manga109  --scale 4 --pre_train 'pretrain model path' --test_only  --chop

If you want to test on DIV2K, add --data_range 801-900; to use the self-ensemble method, add --self_ensemble.
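
For example, to evaluate the x4 model on the DIV2K validation split with self-ensemble (assuming DIV2K is a valid --data_test value, as in the EDSR codebase this repository builds on):

python main.py --data_test DIV2K --data_range 801-900 --scale 4 --pre_train 'pretrain model path' --test_only --chop --self_ensemble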

You can find the result images in the experiment/ folder.

Meanwhile, you can evaluate your results with another method, which is based on RCAN:

cd test

run Evaluate_PSNR_SSIM.m

If you want to calculate the number of model parameters, execute the following command:

cd src/model
python cal_params.py

To check the number of MFRAN parameters for different scale factors, change --scale in option.py.
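
If you only need a quick total, a generic PyTorch count works for any model; the sketch below is not the repository's cal_params.py, and the MFRAN import in the comment is hypothetical:

import torch.nn as nn

def count_params(model: nn.Module) -> int:
    # Sum the element counts of all learnable tensors in the model.
    return sum(p.numel() for p in model.parameters())

# Hypothetical usage; adjust the import to match the module layout in src/model:
# from model.mfran import MFRAN
# print(count_params(MFRAN(args)), 'parameters')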

Results

We have published our test records; see results/ for details.

You can download our results from:

Baidu Netdisk: https://pan.baidu.com/s/1sjf0TnQh-IwY33L5tSijxw Extract code:40y9

Google Drive: https://drive.google.com/file/d/1-FV5AE7yGiqcPbvjk5iGLLEwM6zKwtUd/view?usp=sharing

Acknowledgements

Our work is based on EDSR and RCAN (https://github.com/yulunzhang/RCAN). Thank you for your contributions.
