
DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior

Paper | Project Page


Xinqi Lin¹,*, Jingwen He²,*, Ziyan Chen², Zhaoyang Lyu², Ben Fei², Bo Dai², Wanli Ouyang², Yu Qiao², Chao Dong¹,²

¹Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
²Shanghai AI Laboratory

⭐If DiffBIR is helpful for you, please help star this repo. Thanks!🤗

📖Table Of Contents

  • Visual Results On Real-world Images
  • Installation
  • Pretrained Models
  • Quick Start
  • Inference
  • Train
  • Update
  • TODO
  • Citation
  • License
  • Acknowledgement
  • Contact

👀Visual Results On Real-world Images

General Image Restoration

[Visual comparison images]

Face Image Restoration

[Visual comparison images]

⚙️Installation

# create a conda environment with python >= 3.9
conda create -n diffbir python=3.9
conda activate diffbir
# pytorch >= 1.12.1 with CUDA >= 11.3 (required by xformers)
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
# xformers 0.0.16
conda install xformers==0.0.16 -c xformers
# other dependencies
chmod a+x install_env.sh && ./install_env.sh
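
To verify the environment, the minimal sanity check below (a sketch; it assumes the diffbir environment created above is active) prints the installed versions:

# Quick sanity check for the versions requested above.
import torch
import xformers

print(torch.__version__)          # expect 1.12.1
print(torch.version.cuda)         # expect 11.3
print(torch.cuda.is_available())  # expect True on a CUDA-capable machine
print(xformers.__version__)       # expect 0.0.16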

🧬Pretrained Models

Model Name              Description
general_swinir_v1.ckpt  Stage1 model (SwinIR) for general image restoration.
general_full_v1.ckpt    Full model for general image restoration. "Full" means it contains both the stage1 and stage2 model.
face_swinir_v1.ckpt     Stage1 model (SwinIR) for face restoration.
face_full_v1.ckpt       Full model for face restoration.
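
If you want to check a downloaded checkpoint before running anything, the snippet below is a rough way to inspect it (the path is a placeholder, and the exact key layout of each checkpoint is an assumption, not a documented format):

# Inspect a downloaded checkpoint (path is a placeholder).
import torch

ckpt = torch.load("general_full_v1.ckpt", map_location="cpu")
# Some checkpoints store tensors directly, others wrap them in a "state_dict" entry.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(len(state), "entries")
print(list(state.keys())[:5])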

🛫Quick Start

Download general_full_v1.ckpt and general_swinir_v1.ckpt, then run the following command to launch the Gradio demo.

python gradio_diffbir.py --ckpt [full_ckpt_path] --config configs/model/cldm.yaml --reload_swinir --swinir_ckpt [swinir_ckpt_path]

⚔️Inference

Full Pipeline (Remove Degradations & Refine Details)

General Image

Download general_full_v1.ckpt and general_swinir_v1.ckpt, then put your low-quality (lq) images in lq_dir. If you are wondering why the --reload_swinir option is needed, see the Degradation Details section below.

python inference.py --config configs/model/cldm.yaml --ckpt [full_ckpt_path] --reload_swinir --swinir_ckpt [swinir_ckpt_path] --steps 50 --input [lq_dir] --sr_scale 1 --image_size 512 --color_fix_type wavelet --resize_back --output [output_dir_path]

Face Image

Download face_full_v1.ckpt and put your low-quality (lq) images in lq_dir.

python inference.py --config configs/model/cldm.yaml --ckpt [full_ckpt_path] --steps 50 --input [lq_dir] --sr_scale 1 --image_size 512 --color_fix_type wavelet --resize_back --output [output_dir_path]

Only Stage1 Model (Remove Degradations)

Download general_swinir_v1.ckpt for general images or face_swinir_v1.ckpt for face images, and put your low-quality (lq) images in lq_dir:

python scripts/inference_stage1.py --config configs/model/swinir.yaml --ckpt [swinir_ckpt_path] --input [lq_dir] --sr_scale 1 --image_size 512 --output [output_dir_path]

Only Stage2 Model (Refine Details)

Since the proposed two-stage pipeline is very flexible, you can utilize other awesome models to remove degradations instead of SwinIR, and then leverage Stable Diffusion to refine the details.

# step1: Use other models to remove degradations and save results in [img_dir_path].

# step2: Refine details of step1 outputs.
python inference.py --config configs/model/cldm.yaml --ckpt [full_ckpt_path] --steps 50 --input [img_dir_path] --sr_scale 1 --image_size 512 --color_fix_type wavelet --resize_back --output [output_dir_path] --disable_preprocess_model

🌠Train

Degradation Details

For general image restoration, we first train both the stage1 and stage2 models under the CodeFormer degradation to enhance the generative capacity of the stage2 model. To improve degradation removal, we train another stage1 model under the Real-ESRGAN degradation and use it during inference; this is why the --reload_swinir option is passed at inference time.

For face image restoration, we adopt the degradation model used in DifFace for training and directly utilize the SwinIR model released by them as our stage1 model.
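
To make the idea of a synthetic degradation concrete, the sketch below applies a much-simplified blur → downsample → noise → JPEG chain to a high-quality image. It only illustrates the kind of corruption the stage1 model learns to remove; it is not the actual CodeFormer, Real-ESRGAN, or DifFace pipeline used for training.

# Simplified synthetic degradation: blur -> downsample -> noise -> JPEG.
# Illustrative only; the real training pipelines are more elaborate.
import io
import numpy as np
from PIL import Image, ImageFilter

def degrade(hq: Image.Image, scale: int = 4) -> Image.Image:
    hq = hq.convert("RGB")
    lq = hq.filter(ImageFilter.GaussianBlur(radius=2.0))        # blur
    lq = lq.resize((hq.width // scale, hq.height // scale))     # downsample
    arr = np.asarray(lq, dtype=np.float32)
    arr += np.random.normal(0.0, 8.0, arr.shape)                # Gaussian noise
    lq = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    buf = io.BytesIO()
    lq.save(buf, format="JPEG", quality=40)                     # JPEG artifacts
    buf.seek(0)
    return Image.open(buf).convert("RGB")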

Data Preparation

  1. Generate file list of training set and validation set.

    python scripts/make_file_list.py --img_folder [hq_dir_path] --val_size [validation_set_size] --save_folder [save_dir_path] --follow_links

    This script collects all image files in img_folder and automatically splits them into a training set and a validation set. You will get two file lists in save_folder; each line in a file list contains the absolute path of an image file (a minimal example of reading these lists is sketched at the end of this subsection):

    save_folder
    ├── train.list # training file list
    └── val.list   # validation file list
    
  2. Configure training set and validation set.

    For general image restoration, fill in the general training set and validation set configuration files with appropriate values.

    For face image restoration, fill in the face training set and validation set configuration files with appropriate values.
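
As a rough illustration of how these file lists can be consumed, the hypothetical dataset below reads one absolute image path per line; the real dataset classes are driven by the YAML configuration files mentioned above.

# Hypothetical reader for train.list / val.list (one absolute image path per line).
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class FileListDataset(Dataset):
    def __init__(self, list_path: str):
        self.paths = [line for line in Path(list_path).read_text().splitlines() if line.strip()]

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # Each entry is an absolute path written by scripts/make_file_list.py.
        return Image.open(self.paths[idx]).convert("RGB")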

Train Stage1 Model

  1. Configure training-related information.

    Fill in the training configuration file with appropriate values.

  2. Start training.

    python train.py --config [training_config_path]

    💡: The SwinIR checkpoint will be used when training the stage2 model.

Train Stage2 Model

  1. Download pretrained Stable Diffusion v2.1 to provide generative capabilities.

    wget https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.ckpt --no-check-certificate
  2. Create the initial model weights.

    python scripts/make_stage2_init_weight.py --cldm_config configs/model/cldm.yaml --sd_weight [sd_v2.1_ckpt_path] --swinir_weight [swinir_ckpt_path] --output [init_weight_output_path]

    You will see some output showing how the weights are initialized (a rough sketch of this merging step is given after this list).

  3. Configure training-related information.

    Fill in the training configuration file with appropriate values.

  4. Start training.

    python train.py --config [training_config_path]
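
Roughly speaking, step 2 above merges the Stable Diffusion v2.1 weights and the stage1 SwinIR weights into a single initialization checkpoint for the stage2 model. The sketch below conveys the general idea only: the actual logic lives in scripts/make_stage2_init_weight.py, and the "preprocess_model." key prefix is an assumption inferred from the --disable_preprocess_model inference flag, not a confirmed detail.

# Illustrative only: combine SD v2.1 and SwinIR weights into one init checkpoint.
import torch

sd = torch.load("v2-1_512-ema-pruned.ckpt", map_location="cpu")
sd = sd.get("state_dict", sd)
swinir = torch.load("general_swinir_v1.ckpt", map_location="cpu")
swinir = swinir.get("state_dict", swinir)

init = dict(sd)  # generative prior from Stable Diffusion v2.1
# Attach the stage1 weights under a prefix (the exact prefix is an assumption).
init.update({f"preprocess_model.{k}": v for k, v in swinir.items()})

torch.save({"state_dict": init}, "init_weight.ckpt")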

🆕Update

  • 2023.08.30: Repo is released.

🧗TODO

  • Release code and pretrained models 💻.
  • Update links to paper and project page 🔗.
  • Provide Colab demo and HuggingFace demo 📔.
  • Improve the performance 🦸.

Citation

Please cite us if our work is useful for your research.

@article{2023diffbir,
  author    = {Xinqi Lin and Jingwen He and Ziyan Chen and Zhaoyang Lyu and Ben Fei and Bo Dai and Wanli Ouyang and Yu Qiao and Chao Dong},
  title     = {DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior},
  journal   = {arXiv preprint},
  year      = {2023},
}

License

This project is released under the Apache 2.0 license.

Acknowledgement

This project is based on ControlNet and BasicSR. Thanks for their awesome work.

Contact

If you have any questions, please feel free to contact me at linxinqi@tju.edu.cn.
