
Multiscale Coarse-to-Fine Guided Screenshot Demoiréing

Duong Hai Nguyen¹, Se-Ho Lee², and Chul Lee¹
¹Department of Multimedia Engineering, Dongguk University, South Korea
²Department of Information and Engineering, Jeonbuk National University, South Korea
IEEE Signal Processing Letters 2023

Project Page · Paper




Installation

  1. Clone this repo:
```bash
git clone https://github.com/nhduong/guided_demoireing_net.git
```
  2. Install dependencies into a new Conda environment (see the example after this list):
```bash
conda create -n <environment-name> --file requirements.txt
```
  3. Download the datasets (an assumed directory layout is sketched after this list):

| Dataset  | Download Link |
|----------|---------------|
| LCDMoiré | CodaLab       |
| TIP2018  | Google Drive  |
| FHDMi    | Google Drive  |
| UHDM     | Google Drive  |
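
As a concrete example for step 2, assuming the environment is named `demoire` (any name works):

```bash
# Hypothetical environment name; substitute your own
conda create -n demoire --file requirements.txt
conda activate demoire
```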
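
The `--train_dir`/`--test_dir`/`--moire_dir`/`--clean_dir` flags in the commands below suggest a dataset layout like the following, illustrated for TIP2018. This is inferred from the flags, not verified against the dataset archives:

```text
path_to/TIP2018_original/
├── trainData/
│   ├── source/   # moiré inputs
│   └── target/   # clean ground truths
└── testData/
    ├── source/
    └── target/
```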

Testing

  1. Download pretrained models from Google Drive

  2. Execute the following commands, replacing "GPU_ID" with the index of the GPU to use (e.g., 0)

```bash
# for LCDMoiré
CUDA_VISIBLE_DEVICES="GPU_ID" accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --evaluate \
      --data_path "path_to/aim2019_demoireing_track1" \
      --data_name aim --train_dir "train" --test_dir "val" --moire_dir "moire" --clean_dir "clear" \
      --resume "path_to/aim/checkpoint.pth.tar"

# for TIP2018
CUDA_VISIBLE_DEVICES="GPU_ID" accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --evaluate \
      --data_path "path_to/TIP2018_original" \
      --data_name tip18 --train_dir "trainData" --test_dir "testData" --moire_dir "source" --clean_dir "target" \
      --resume "path_to/tip18/checkpoint.pth.tar"

# for FHDMi
CUDA_VISIBLE_DEVICES="GPU_ID" accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --evaluate \
      --data_path "path_to/FHDMi_complete" \
      --data_name fhdmi --train_dir "train" --test_dir "test" --moire_dir "source" --clean_dir "target" \
      --resume "path_to/fhdmi/checkpoint.pth.tar" --num_branches 4

# for UHDM
CUDA_VISIBLE_DEVICES="GPU_ID" accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --evaluate \
        --data_path "path_to/UHDM_DATA" \
        --data_name uhdm --train_dir "train" --test_dir "test" --moire_dir "" --clean_dir "" \
        --resume "path_to/uhdm/checkpoint.pth.tar" --num_branches 4
```
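
The commands above assume an Accelerate configuration file named `default_config.yaml` in the working directory. If the repository copy is missing or does not match your machine, one way to (re)generate it, assuming a standard Hugging Face Accelerate installation, is the interactive setup:

```bash
# One-time setup: answer the prompts (e.g., single machine, no distributed
# training, one GPU); Accelerate writes the answers to default_config.yaml
accelerate config --config_file default_config.yaml
```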

Training

  1. Run the following commands:
```bash
# for LCDMoiré
CUDA_VISIBLE_DEVICES="GPU_ID" nohup accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --dont_calc_mets_at_all --log2file \
    --data_path "path_to/aim2019_demoireing_track1" \
    --data_name aim --train_dir "train" --test_dir "val" --moire_dir "moire" --clean_dir "clear" \
    --batch_size 2 --T_0 50 --epochs 200 --init_weights &

# for TIP2018
CUDA_VISIBLE_DEVICES="GPU_ID" nohup accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --dont_calc_mets_at_all --log2file \
    --data_path "path_to/TIP2018_original" \
    --data_name tip18 --train_dir "trainData" --test_dir "testData" --moire_dir "source" --clean_dir "target" \
    --batch_size 2 --T_0 10 --epochs 80 --init_weights &

# for FHDMi
CUDA_VISIBLE_DEVICES="GPU_ID" nohup accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --dont_calc_mets_at_all --log2file \
    --data_path "path_to/FHDMi_complete" \
    --data_name fhdmi --train_dir "train" --test_dir "test" --moire_dir "source" --clean_dir "target" \
    --batch_size 2 --T_0 50 --epochs 200 --init_weights --num_branches 4 &

# for UHDM
CUDA_VISIBLE_DEVICES="GPU_ID" nohup accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --dont_calc_mets_at_all --log2file \
    --data_path "path_to/UHDM_DATA" \
    --data_name uhdm --train_dir "train" --test_dir "test" --moire_dir "" --clean_dir "" \
    --batch_size 2 --T_0 50 --epochs 200 --init_weights --num_branches 4 &
```
  2. Monitor the training progress via `tail -f outputs/<path_to_the_experiment>/<experiment_name>.log`
  3. Find the best checkpoint by evaluating those saved during the last training epochs:
```bash
# for LCDMoiré
for epoch in {190..199} ; do
  CUDA_VISIBLE_DEVICES="GPU_ID" accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --evaluate --log2file \
        --data_path "path_to/aim2019_demoireing_track1" \
        --data_name aim --train_dir "train" --test_dir "val" --moire_dir "moire" --clean_dir "clear" \
        --resume "path_to/0${epoch}_checkpoint.pth.tar"
done

# for TIP2018
for epoch in {70..79} ; do
  CUDA_VISIBLE_DEVICES="GPU_ID" accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --evaluate --log2file \
        --data_path "path_to/TIP2018_original" \
        --data_name tip18 --train_dir "trainData" --test_dir "testData" --moire_dir "source" --clean_dir "target" \
        --resume "path_to/0${epoch}_checkpoint.pth.tar"
done

# for FHDMi
for epoch in {190..199} ; do
  CUDA_VISIBLE_DEVICES="GPU_ID" accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --evaluate --log2file \
        --data_path "path_to/FHDMi_complete" \
        --data_name fhdmi --train_dir "train" --test_dir "test" --moire_dir "source" --clean_dir "target" \
        --resume "path_to/0${epoch}_checkpoint.pth.tar" --num_branches 4
done

# for UHDM
for epoch in {190..199} ; do
  CUDA_VISIBLE_DEVICES="GPU_ID" accelerate launch --config_file default_config.yaml main.py --affine --l1loss --adaloss --perloss --evaluate --log2file \
          --data_path "path_to/UHDM_DATA" \
          --data_name uhdm --train_dir "train" --test_dir "test" --moire_dir "" --clean_dir "" \
          --resume "path_to/0${epoch}_checkpoint.pth.tar" --num_branches 4
done
```
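
Since the code builds on the PyTorch ImageNet training example (see Acknowledgements), passing `--resume` without `--evaluate` presumably continues training from a saved checkpoint. The following is a sketch under that assumption, not a verified feature, illustrated for UHDM:

```bash
# Hypothetical: resume UHDM training from a saved checkpoint. Assumes --resume
# restores model/optimizer/epoch state when --evaluate is omitted, as in the
# PyTorch ImageNet example this code builds on.
CUDA_VISIBLE_DEVICES="GPU_ID" accelerate launch --config_file default_config.yaml main.py \
    --affine --l1loss --adaloss --perloss --dont_calc_mets_at_all --log2file \
    --data_path "path_to/UHDM_DATA" \
    --data_name uhdm --train_dir "train" --test_dir "test" --moire_dir "" --clean_dir "" \
    --batch_size 2 --T_0 50 --epochs 200 --num_branches 4 \
    --resume "path_to/uhdm/checkpoint.pth.tar"
```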

Citation

If you find this work useful for your research, please cite our paper:

```bibtex
@article{2023_nguyen_gad,
  author  = {Nguyen, Duong Hai and Lee, Se-Ho and Lee, Chul},
  title   = {Multiscale Coarse-to-Fine Guided Screenshot Demoir{\'e}ing},
  journal = {IEEE Signal Processing Letters},
  volume  = {30},
  pages   = {898--902},
  month   = jul,
  year    = {2023},
  doi     = {10.1109/LSP.2023.3296039}
}
```

The code is released under the MIT license. See LICENSE for additional details.

Acknowledgements

This code is built on ImageNet training in PyTorch and UHDM.
