Dreamzz5/Simple-Align

Navigating Beyond Dropout: An Intriguing Solution towards Generalizable Image Super-Resolution (CVPR 2024)

Abstract

Deep learning has led to a dramatic leap in Single Image Super-Resolution (SISR) performance in recent years. While most existing work assumes a simple and fixed degradation model (e.g., bicubic downsampling), research on Blind SR seeks to improve model generalization under unknown degradations. Recently, Kong et al. pioneered the investigation of a more suitable training strategy for Blind SR using Dropout. Although this method indeed brings substantial generalization improvements by mitigating overfitting, we argue that Dropout simultaneously introduces an undesirable side effect that compromises the model's capacity to faithfully reconstruct fine details. We present both theoretical and experimental analyses in our paper and, furthermore, propose another simple yet effective training strategy that enhances the generalization ability of the model by modulating its first- and second-order feature statistics. Experimental results show that our method serves as a model-agnostic regularization and outperforms Dropout on seven benchmark datasets covering both synthetic and real-world scenarios.
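As a rough illustration of the idea (not the authors' exact formulation), modulating first- and second-order feature statistics can be sketched as aligning the channel-wise means and variances of intermediate features across two differently degraded views of the same image; all names below are hypothetical:

```python
import torch
import torch.nn.functional as F

def stats_alignment_loss(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between channel-wise means (first-order) and
    variances (second-order) of two feature maps of shape (N, C, H, W)."""
    mu_a, mu_b = feat_a.mean(dim=(2, 3)), feat_b.mean(dim=(2, 3))
    var_a, var_b = feat_a.var(dim=(2, 3)), feat_b.var(dim=(2, 3))
    return F.l1_loss(mu_a, mu_b) + F.l1_loss(var_a, var_b)

# Hypothetical usage: features from the same SR backbone applied to two
# degraded views of one HR image; the regularizer is added to the pixel loss.
feat_a, feat_b = torch.randn(4, 64, 32, 32), torch.randn(4, 64, 32, 32)
reg = stats_alignment_loss(feat_a, feat_b)
```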

Installation

  1. Install the dependent packages.
  2. Download the testing datasets (Set5, Set14, B100, Manga109, Urban100) and move them to ./dataset/benchmark. They are available from Google Drive or Baidu Drive (password: basr).

  3. Add degradations to the testing datasets:

```
cd ./dataset
python add_degradations.py
```
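The degradations used in the paper are defined by add_degradations.py in this repository; purely as a hypothetical illustration of the classical degradation types common in Blind SR (blur, noise, JPEG compression), a minimal sketch:

```python
import cv2
import numpy as np

def degrade(img: np.ndarray, blur_sigma: float = 2.0,
            noise_sigma: float = 15.0, jpeg_quality: int = 50) -> np.ndarray:
    """Hypothetical pipeline: Gaussian blur -> Gaussian noise -> JPEG (uint8 BGR)."""
    img = cv2.GaussianBlur(img, (0, 0), blur_sigma)  # kernel size derived from sigma
    noise = np.random.normal(0.0, noise_sigma, img.shape)
    img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    _, buf = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```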
  4. Download the pretrained models and move them to the ./pretrained_models/ folder.

     To retain the setting of Real-ESRGAN, we use GT USM (unsharp-mask sharpening) as in the paper. We also provide models trained without USM; the improvement is basically the same. A rough sketch of USM follows this list.

  5. Run the testing commands:

```
CUDA_VISIBLE_DEVICES=1 python realesrgan/test.py -opt options/test/test_realsrresnet.yml
CUDA_VISIBLE_DEVICES=1 python realesrgan/test.py -opt options/test/test_realsrresnet_reg.yml
```
  6. The output results will be stored in ./results.
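For reference, unsharp masking (USM) sharpens the ground truth by adding back a high-frequency residual. Below is a simplified stand-in, not Real-ESRGAN's exact implementation (which additionally masks small residuals):

```python
import cv2
import numpy as np

def usm_sharpen(img: np.ndarray, weight: float = 0.5, sigma: float = 3.0) -> np.ndarray:
    """Simplified unsharp masking: img + weight * (img - blurred)."""
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    residual = img.astype(np.float32) - blurred.astype(np.float32)
    return np.clip(img.astype(np.float32) + weight * residual, 0, 255).astype(np.uint8)
```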

How to train Real-SRResNet (w/ or w/o reg)

Some steps require substituting your own local paths.

  1. Move to the experiment directory:

```
cd Real-train
```
  2. Download the training dataset (DIV2K) and move it to ./dataset, and the validation dataset (Set5) and move it to ./dataset/benchmark.

  3. Run the training commands:

```
cd codes
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=4321 realesrgan/train.py -opt options/train/train_realsrresnet.yml --launcher pytorch --auto_resume
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=4321 realesrgan/train.py -opt options/train/train_realsrresnet_reg.yml --launcher pytorch --auto_resume
```
  4. The experiment outputs will be stored in ./experiments.
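To sanity-check saved outputs against ground-truth images, a quick hypothetical PSNR computation (file paths and naming are illustrative, not the repository's layout):

```python
import cv2
import numpy as np

def psnr(img1: np.ndarray, img2: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two uint8 images of equal size."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

# Hypothetical file pair; adjust paths to your local layout.
sr = cv2.imread('./results/baby_SR.png')
gt = cv2.imread('./dataset/benchmark/Set5/HR/baby.png')
print(f'PSNR: {psnr(sr, gt):.2f} dB')
```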

Acknowledgement

Many parts of this code are adapted from Real-ESRGAN.

We thank the authors for sharing the code for their great work.