Under construction...
If you have any questions or suggestions, feel free to email me here.
This repository is a PyTorch implementation of the paper
"Attention in Attention Network for Image Super-Resolution" [arXiv]
Visual results in the paper are available at Google Drive or Baidu Netdisk (password: 7t74).
Unofficial TensorFlow implementation: https://github.com/Anuj040/superres
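As a quick orientation before the code: the network stacks "attention in attention" blocks in which an attention branch and a non-attention branch are fused by dynamic weights predicted from the input feature itself. Below is a minimal sketch of that idea in PyTorch, assuming a pixel-attention branch and a small pooled MLP for the branch weights; the module name, channel width, and layer sizes are illustrative assumptions and do not match the repository code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class A2BSketch(nn.Module):
    """Simplified attention-in-attention block (illustrative only)."""
    def __init__(self, channels=40, reduction=4):
        super().__init__()
        self.non_att = nn.Conv2d(channels, channels, 3, padding=1)   # non-attention branch (1x1 conv in A2N-M)
        self.att_feat = nn.Conv2d(channels, channels, 3, padding=1)  # attention branch: feature conv
        self.att_mask = nn.Conv2d(channels, channels, 1)             # attention branch: pixel-attention mask
        self.fc1 = nn.Linear(channels, channels // reduction)        # small MLP that predicts the two
        self.fc2 = nn.Linear(channels // reduction, 2)               # dynamic branch weights

    def forward(self, x):
        n, c = x.shape[0], x.shape[1]
        pooled = F.adaptive_avg_pool2d(x, 1).view(n, c)
        w = F.softmax(self.fc2(F.relu(self.fc1(pooled))), dim=1)     # (n, 2), weights sum to 1
        non_att = self.non_att(x)                                    # non-attention branch output
        att = self.att_feat(x) * torch.sigmoid(self.att_mask(x))     # pixel-attention branch output
        out = w[:, 0].view(n, 1, 1, 1) * non_att + w[:, 1].view(n, 1, 1, 1) * att
        return x + out                                               # residual connection

if __name__ == '__main__':
    y = A2BSketch()(torch.randn(2, 40, 48, 48))
    print(y.shape)  # torch.Size([2, 40, 48, 48])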
Dependencies: PyTorch==0.4.1 (will be updated to support PyTorch>1.0 in the future)
You can download the test sets from Google Drive. Put the test data in ../Data/benchmark/.
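The exact folder layout is determined by the EDSR (PyTorch) data loaders this code builds on; for a test set such as Set5 it typically looks like the following (an assumption based on EDSR conventions, adjust if the data loader expects something different):

../Data/benchmark/Set5/HR/
../Data/benchmark/Set5/LR_bicubic/X2/
../Data/benchmark/Set5/LR_bicubic/X3/
../Data/benchmark/Set5/LR_bicubic/X4/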
python main.py --scale 4 --data_test Set5 --pre_train ./experiment/model/aan_x4.pt --chop --test_only
If you run on CPU, please add "--cpu".
- Download DIV2K training data from DIV2K dataset or SNU_CVLab.
- Specify '--dir_data' in option.py based on the data path.
For more information, please refer to EDSR (PyTorch).
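The '--dir_data' entry mentioned above is an ordinary argparse option; in EDSR-style code it is typically a single line in option.py similar to the one below (the default path here is just a placeholder, set it to the directory that contains your DIV2K folder):

parser.add_argument('--dir_data', type=str, default='../Data', help='dataset directory')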
# SR x2
python main.py --scale 2 --patch_size 128 --reset --chop --batch_size 32 --lr 5e-4
# SR x3
python main.py --scale 3 --patch_size 192 --reset --chop --batch_size 32 --lr 5e-4
# SR x4
python main.py --scale 4 --patch_size 256 --reset --chop --batch_size 32 --lr 5e-4
For A2N-M, use a 1x1 conv instead of a 3x3 conv in the non-attention branch; the code is here.
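As a rough illustration of the difference (the variable names and channel width are hypothetical; the actual change is in the linked code), the swap only affects the convolution used in the non-attention branch:

import torch.nn as nn

non_attention_a2n  = nn.Conv2d(40, 40, kernel_size=3, padding=1)  # A2N: 3x3 conv
non_attention_a2nm = nn.Conv2d(40, 40, kernel_size=1)             # A2N-M: cheaper 1x1 conv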
Left: The most enhanced attention maps. Right: The most suppressed attention maps.
If you find our work helpful in your research or work, please cite the following paper.
@misc{chen2021attention,
    title={Attention in Attention Network for Image Super-Resolution},
    author={Haoyu Chen and Jinjin Gu and Zhi Zhang},
    year={2021},
    eprint={2104.09497},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
This code is built on EDSR (PyTorch) and PAN. We thank the authors for sharing their code.