This project is the official implementation of 'Coarse-to-Fine Embedded PatchMatch and Multi-Scale Dynamic Aggregation for Reference-based Super-Resolution', AAAI2022


Coarse-to-Fine Embedded PatchMatch and Multi-Scale Dynamic Aggregation for Reference-based Super-Resolution (AAAI2022)

The code framework is mainly modified from BasicSR and MMSR (now reorganized as MMEditing). Please refer to the original repositories for more usage information and documentation.


[Paper] [Project Page]

Overview

Dependencies and Installation

  • Python >= 3.7
  • PyTorch == 1.4
  • CUDA 10.0
  • GCC 5.4.0
  1. Install Dependencies

    cd AMSA
    conda install pytorch=1.4.0 torchvision cudatoolkit=10.0 -c pytorch
    pip install mmcv==0.4.4
    pip install -r requirements.txt
  2. Install MMSR and DCNv2

    python setup.py develop
    cd mmsr/models/archs/DCNv2
    python setup.py build develop

Dataset Preparation

Please refer to Datasets.md for pre-processing and more details.

Get Started

Pretrained Models

Download the pretrained models from GoogleDrive and put them under the experiments/pretrained_models folder.

Test

We provide quick test code with the pretrained model.

  1. Modify the paths to the dataset and pretrained model in the following YAML configuration files.

    ./options/test/test_AMSA.yml
    ./options/test/test_AMSA_mse.yml
  2. Run test code for models trained using GAN loss.

    python mmsr/test.py -opt "options/test/test_AMSA.yml"

    Check out the results in ./results.

  3. Run test code for models trained using only reconstruction loss.

    python mmsr/test.py -opt "options/test/test_AMSA_mse.yml"

    Check out the results in ./results.
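Before launching a test run, it can help to verify that the paths edited into the YAML files actually exist on disk. The sketch below assumes a typical BasicSR-style option file with path-like keys such as `dataroot` and `pretrain_model`; the exact key names in test_AMSA.yml may differ, so treat this as a hypothetical helper:

```python
# Hedged sketch: walk an options YAML/dict and report path-like values that
# do not exist on disk. The key-name hints are assumptions, not repo code.
import os

PATH_HINTS = ("dataroot", "pretrain_model", "root")

def missing_paths(opt, prefix=""):
    """Recursively collect (key, value) pairs whose path does not exist."""
    missing = []
    if isinstance(opt, dict):
        items = opt.items()
    elif isinstance(opt, list):
        items = enumerate(opt)
    else:
        return missing
    for key, value in items:
        if isinstance(value, (dict, list)):
            missing += missing_paths(value, f"{prefix}{key}.")
        elif isinstance(value, str) and any(h in str(key) for h in PATH_HINTS):
            if not os.path.exists(value):
                missing.append((f"{prefix}{key}", value))
    return missing

def check_option_file(path):
    """Load a YAML option file and return any missing paths it references."""
    import yaml  # PyYAML, installed via requirements.txt
    with open(path) as f:
        opt = yaml.safe_load(f)
    return missing_paths(opt)
```

Calling `check_option_file("options/test/test_AMSA.yml")` before running mmsr/test.py surfaces typos in dataset or model paths early instead of partway through a run.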

Train

Download the pretrained feature extraction model from the C2-Matching link and put "feature_extraction.pth" under the experiments/pretrained_models folder.

All files generated during training, e.g., log messages, checkpoints, and snapshots, will be saved to the ./experiments and ./tb_logger directories.

  1. Train restoration network.
# add the path to *pretrain_model_feature_extractor* in the following yaml
# the path to *pretrain_model_feature_extractor* is the model obtained in C2-Matching
./options/train/stage3_restoration_gan.yml
python mmsr/train.py -opt "options/train/stage3_restoration_gan.yml"

# if you wish to train the restoration network with only mse loss
# prepare the dataset path and pretrained model path in the following yaml
./options/train/stage3_restoration_mse.yml
python mmsr/train.py -opt "options/train/stage3_restoration_mse.yml"
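Both entry points above take a single `-opt` argument pointing at a YAML option file. A minimal argparse sketch of how such a CLI is typically wired (an illustration of the usage pattern only, not the actual mmsr code):

```python
# Minimal sketch of the "-opt <yaml>" command-line interface used above.
# mmsr's real train.py/test.py do much more; this mirrors only the CLI shape.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Train or test with a YAML option file.")
    parser.add_argument("-opt", type=str, required=True,
                        help="Path to the YAML configuration file.")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print("Using options file:", args.opt)
```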

Results

Citation

If you find our repo useful for your research, please consider citing our paper:

@inproceedings{xia2022coarse,
title={Coarse-to-Fine Embedded PatchMatch and Multi-Scale Dynamic Aggregation for Reference-based Super-Resolution},
author={Xia, Bin and Tian, Yapeng and Hang, Yucheng and Yang, Wenming and Liao, Qingmin and Zhou, Jie},
booktitle={AAAI},
year={2022}
}
