JNNNNYao/LINF
Local Implicit Normalizing Flow for Arbitrary-Scale Image Super-Resolution [CVPR 2023]

This is the official repository of the following paper:

Local Implicit Normalizing Flow for Arbitrary-Scale Image Super-Resolution (LINF)
Jie-En Yao*, Li-Yuan Tsao*, Yi-Chen Lo, Roy Tseng, Chia-Che Chang, Chun-Yi Lee

[arXiv] [Video]

If you are interested in our work, please visit ElsaLab for more information, and feel free to contact us.


Setup & Preparation

Environment setup

pip install torch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt

Data preparation

  1. SR benchmark datasets (Set5, Set14, BSD100, Urban100): the download links can be found in the repo jbhuang0604/SelfExSR.
  2. DIV2K: download DIV2K/DIV2K_train_HR and DIV2K/DIV2K_valid_HR, together with their corresponding downsampled (LR) versions, for training and validation.
  3. Flickr2K: combined with DIV2K to form the DF2K training set used by the RRDB-LINF model.
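The exact paths are set by root_path in the config files; a hypothetical on-disk layout (the directory names below are assumptions for illustration, not dictated by the repo) could look like:

```
datasets/
  DIV2K/
    DIV2K_train_HR/
    DIV2K_valid_HR/
    DIV2K_train_LR_bicubic/
    DIV2K_valid_LR_bicubic/
  Flickr2K/
  benchmark/
    Set5/
    Set14/
    BSD100/
    Urban100/
```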

Checkpoints

Model                   Download
EDSR-baseline-LINF      Google Drive
RDN-LINF                Google Drive
SwinIR-LINF             Google Drive
RRDB-LINF (3x3 patch)   Google Drive

Training

Preliminary

  1. You should modify the root_path in the config files to the path of the datasets.
  2. For stage 1, you can use the config files located in configs/train-div2k. For stage 2, the config files are located in configs/fine-tune.
  3. If you want to resume training, specify the path of your checkpoint in the resume argument of the config file.
  4. The checkpoints will be automatically saved in ./save/<EXP_NAME>.
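As a hedged illustration of the two config edits mentioned above (the surrounding key structure and all values here are assumptions; check the actual files in configs/train-div2k and configs/fine-tune):

```yaml
# Illustrative excerpt only -- not the repo's real config layout.
train_dataset:
  dataset:
    args:
      root_path: /path/to/datasets/DIV2K/DIV2K_train_HR   # point this at your data
resume: ./save/edsr/epoch-last.pth                         # optional: resume from a checkpoint
```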

Launch your experiments

To launch your experiments, you can use the following command:

python train.py --config <CONFIG_PATH> --gpu <GPU_ID(s)> --name <EXP_NAME> --patch <PATCH_SIZE>

Stage 1

# EDSR
python train.py --config configs/train-div2k/train_edsr-flow.yaml --gpu 0 --name edsr
# RDN
python train.py --config configs/train-div2k/train_rdn-flow.yaml --gpu 0 --name rdn
# SwinIR
python train.py --config configs/train-div2k/train_swinir-flow.yaml --gpu 0 --name swinir
# RRDB patch DF2K
python train.py --config configs/train-div2k/train_rrdb-flow-DF2K.yaml --gpu 0 --patch 3 --name rrdb

Stage 2 (Fine-tuning)

# EDSR
python train.py --config configs/fine-tune/fine-tune_edsr-flow.yaml --gpu 0 --name edsr_finetune
# RDN
python train.py --config configs/fine-tune/fine-tune_rdn-flow.yaml --gpu 0 --name rdn_finetune
# SwinIR
python train.py --config configs/fine-tune/fine-tune_swinir-flow.yaml --gpu 0 --name swinir_finetune
# RRDB patch DF2K
python train.py --config configs/fine-tune/fine-tune_rrdb-flow-DF2K.yaml --gpu 0 --patch 3 --name rrdb_finetune

Evaluation

Preliminary

  1. You should modify the root_path in the config files to the path of the datasets.
  2. You can store visualization results using the arguments --sample <NUM_SAMPLES> and --name <VIS_RESULTS_NAME>. The generated images will be automatically saved in ./sample/<VIS_RESULTS_NAME>.
  3. Other arguments
    --patch: Whether the model is a patch-based model (patch size > 1).
    --detail: Print the results of SSIM and LPIPS.
    --randomness: Run five experiments and report the mean results.
    --temperature: Set the sampling temperature.
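In flow-based SR models, the sampling temperature scales the standard deviation of the latent Gaussian that the flow samples from: temperature 0 collapses sampling to the mean (a deterministic prediction), while higher temperatures give more diverse outputs. A minimal sketch of this idea (not the repo's actual sampling code; `sample_latent` is a hypothetical helper):

```python
import random

def sample_latent(mean, std, temperature, rng):
    """Draw z = mean + temperature * std * eps, with eps ~ N(0, 1).

    temperature = 0 collapses the latent to its mean (deterministic output);
    larger temperatures trade fidelity for sample diversity.
    """
    eps = rng.gauss(0.0, 1.0)
    return mean + temperature * std * eps

rng = random.Random(0)
z_det = sample_latent(0.0, 1.0, 0.0, rng)   # deterministic: always the mean
z_rand = sample_latent(0.0, 1.0, 0.8, rng)  # stochastic sample around the mean
```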

Launch the evaluation

To launch your evaluation, you can use the following command:

  1. For benchmark datasets (EDSR-baseline-LINF, RDN-LINF, SwinIR-LINF)
# benchmark deterministic
sh scripts/test-benchmark-ours-t0.sh <MODEL_PATH> <GPU_ID>
# benchmark random sample
sh scripts/test-benchmark-ours-t.sh <MODEL_PATH> <GPU_ID> <TEMPERATURE>
  2. For the DIV2K dataset (RRDB-LINF, 3x3 patch)
# RRDB patch div2k deterministic
python test.py --config configs/test/test-fast-div2k-4.yaml --model <MODEL_PATH> --gpu <GPU_ID> --detail --temperature 0.0 --patch
# RRDB patch div2k random sample
python test.py --config configs/test/test-fast-div2k-4.yaml --model <MODEL_PATH> --gpu <GPU_ID> --detail --randomness --temperature <TEMPERATURE> --patch

Note

  1. For generative SR evaluation, we did not use the same border-shaving method as in the arbitrary-scale SR evaluation, since prior generative SR works did not evaluate with it. To reproduce the scores reported in our paper, modify line 142 of utils.py from shave = scale + 6 to shave = scale (this makes only a negligible difference in PSNR).
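For clarity, border shaving simply crops a margin of pixels from each side of the image before computing PSNR/SSIM, so boundary artifacts do not affect the score. A hedged sketch of what the two settings above do (`shave_border` is a hypothetical helper, not the repo's utils.py):

```python
# Sketch of border shaving before metric computation; assumes a [H][W] image
# represented as nested lists. The repo's actual utils.py differs.
def shave_border(img, shave):
    """Return the center crop of img with `shave` pixels removed from each side."""
    if shave == 0:
        return img
    return [row[shave:-shave] for row in img[shave:-shave]]

scale = 4
img = [[float(r * 32 + c) for c in range(32)] for r in range(32)]

# arbitrary-scale SR convention in this repo: shave = scale + 6
crop_default = shave_border(img, scale + 6)
# setting used for the paper's generative SR scores: shave = scale
crop_paper = shave_border(img, scale)
```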

Citation

If you find our work helpful for your research, please consider citing it with the BibTeX entry below and sharing it with the community. Thank you for supporting our research.

@inproceedings{yao2023local,
      title     = {Local Implicit Normalizing Flow for Arbitrary-Scale Image Super-Resolution},
      author    = {Jie-En Yao and Li-Yuan Tsao and Yi-Chen Lo and Roy Tseng and Chia-Che Chang and Chun-Yi Lee},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      year      = {2023},
}

Acknowledgements

Our code was built on LIIF and LTE. We would like to express our gratitude to the authors for generously sharing their code and contributing to the community.
