
DRCT: Saving Image Super-resolution away from Information Bottleneck


Chih-Chung Hsu, Chia-Ming Lee, Yi-Shiuan Chou

Advanced Computer Vision LAB, National Cheng Kung University

Overview

Benchmark results (PSNR, dB) on SRx4 without x2 pretraining. Multi-Adds are calculated for a 64x64 input.

| Model  | Params | Multi-Adds | Forward | FLOPs  | Set5  | Set14 | BSD100 | Urban100 | Manga109 |
|--------|--------|------------|---------|--------|-------|-------|--------|----------|----------|
| HAT    | 9.62M  | 11.22G     | 2053M   | 42.18G | 33.04 | 29.23 | 28.00  | 27.97    | 32.48    |
| DRCT   | 10.44M | 5.92G      | 1857M   | 7.92G  | 33.11 | 29.35 | 28.18  | 28.06    | 32.59    |
| HAT-L  | 40.84M | 76.69G     | 5165M   | 79.60G | 33.30 | 29.47 | 28.09  | 28.60    | 33.09    |
| DRCT-L | 27.58M | 9.20G      | 4278M   | 11.07G | 33.37 | 29.54 | 28.16  | 28.70    | 33.14    |

Updates

  • ✅ 2024-03-31: Released the first version of the paper on arXiv.

  • ✅ 2024-04-14: DRCT was accepted by the NTIRE 2024 workshop at CVPR.

  • The pretrained model will be released.

  • MambaDRCT will be released.

  • Real_DRCT_GAN will be released.

Environment

Installation

# Clone the repository and enter it
git clone https://github.com/ming053l/DRCT.git
cd DRCT
# Create and activate a conda environment
conda create --name drct python=3.8 -y
conda activate drct
# Install PyTorch built against CUDA 11.6
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
# Install the remaining dependencies and register the package in development mode
pip install -r requirements.txt
python setup.py develop
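
To verify the environment, you can optionally check that PyTorch was installed with working CUDA support (a quick sanity check, not part of the official setup):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"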

How To Test

  • Refer to ./options/test for the configuration file of the model to be tested (a sketch of the option-file layout is given below), and prepare the testing data and the pretrained model.
  • Then run the following command (taking DRCT_SRx4_ImageNet-pretrain.pth as an example):
python drct/test.py -opt options/test/DRCT_SRx4_ImageNet-pretrain.yml

The testing results will be saved in the ./results folder.
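
For reference, BasicSR-style test option files generally follow the layout sketched below. This is a hypothetical sketch, not a copy of the shipped config: field names such as DRCTModel, dataroot_gt, and pretrain_network_g are assumptions, so consult the files under ./options/test for the authoritative keys.

# Hypothetical BasicSR-style test option sketch; key names are assumptions.
name: DRCT_SRx4_ImageNet-pretrain
model_type: DRCTModel            # assumed model class name
scale: 4

datasets:
  test_1:
    name: Set5
    type: PairedImageDataset
    dataroot_gt: ./datasets/Set5/GTmod4    # ground-truth images
    dataroot_lq: ./datasets/Set5/LRbicx4   # low-resolution inputs
    io_backend:
      type: disk

network_g:
  type: DRCT                     # architecture registered by this repo

path:
  pretrain_network_g: experiments/pretrained_models/DRCT_SRx4_ImageNet-pretrain.pth
  strict_load_g: true

val:
  save_img: true                 # write SR outputs to ./results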

  • Refer to ./options/test/DRCT_SRx4_ImageNet-LR.yml for inference without ground-truth images.

Note that a tile mode is also provided for testing with limited GPU memory. You can modify the tile settings in your custom testing option by referring to ./options/test/DRCT_tile_example.yml.
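
The authoritative tile parameters are in ./options/test/DRCT_tile_example.yml; the snippet below is only a sketch of how such settings commonly look in HAT-style repositories, and the key names are assumptions:

# Hypothetical tile settings; check DRCT_tile_example.yml for the real keys.
tile:
  tile_size: 256   # process the input in 256x256 patches
  tile_pad: 32     # overlap between neighboring tiles to avoid seam artifacts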

How To Train

  • Refer to ./options/train for the configuration file of the model to be trained.
  • Preparation of the training data is described on this page. The ImageNet dataset can be downloaded from the official website.
  • Validation data can be downloaded from this page.
  • The training command looks like the following (a single-GPU variant is sketched below):
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 drct/train.py -opt options/train/train_DRCT_SRx2_from_scratch.yml --launcher pytorch

The training logs and weights will be saved in the ./experiments folder.
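
For a single GPU, BasicSR-based repositories typically also accept a plain, non-distributed launch. Whether drct/train.py supports this is an assumption; if it does not, use the distributed command above with --nproc_per_node=1:

# Assumed single-GPU launch without the distributed launcher.
CUDA_VISIBLE_DEVICES=0 python drct/train.py -opt options/train/train_DRCT_SRx2_from_scratch.yml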

Citations

BibTeX

@misc{hsu2024drct,
  title={DRCT: Saving Image Super-resolution away from Information Bottleneck}, 
  author={Chih-Chung Hsu and Chia-Ming Lee and Yi-Shiuan Chou},
  year={2024},
  eprint={2404.00722},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

Thanks

Part of our work builds on the HAT framework, and we are grateful for its outstanding contribution.

Contact

If you have any questions, please email zuw408421476@gmail.com to reach the authors.

About

Accepted by the New Trends in Image Restoration and Enhancement (NTIRE) workshop, in conjunction with CVPR 2024.
