
Documentation #1

Closed
ghost opened this issue Mar 15, 2020 · 7 comments

ghost commented Mar 15, 2020

@Elin24 Please add documentation on how to train and run inference.

Elin24 (Owner) commented Mar 20, 2020

In src there is a file, demo.sh, which shows how to train and test.

Elin24 (Owner) commented Mar 20, 2020

By the way, the core part of PSPL is how it processes the loss. So it is enough to understand the loss code referenced in the README while consulting the paper.
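
For readers new to self-paced learning, the sketch below illustrates the general idea of per-pixel loss weighting. It is an assumption about the scheme, not the repository's actual loss code; the names pixel_self_paced_l1, age, and alpha (a hypothetical stand-in for --splalpha) are invented for illustration.

# Illustrative sketch only -- not the repository's loss implementation.
import torch

def pixel_self_paced_l1(sr, hr, age, alpha=1.0):
    # sr, hr : (N, C, H, W) super-resolved and ground-truth batches
    # age    : scalar in [0, 1] that grows as training progresses
    # alpha  : weighting strength (hypothetical stand-in for --splalpha)
    err = torch.abs(sr - hr)  # per-pixel difficulty
    # Easy pixels (small error) get weight near 1; hard pixels are
    # down-weighted early in training and recovered as age approaches 1.
    weight = torch.exp(-alpha * (1.0 - age) * err.detach())
    return (weight * err).mean()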

drcdr commented Apr 7, 2020

There are several different EDSR training commands in demo.sh. Which one (or sequence) did you use to get the results in your paper? Here are the EDSR-based commands that are not test-only:

# ---------- EDSR baseline model (x2) + JPEG augmentation ----------
#python main.py --model EDSR --scale 4 --save edsr_x4 --reset --data_test Set5+Set14+B100+Urban100+DIV2K --n_GPUs 1 --epochs 300 --dir_data ../../datasetx4 --reset
#python main.py --model EDSR --scale 4 --save edsr_x4_spl --reset --data_test Set5+Set14+B100+Urban100+DIV2K --n_GPUs 1 --epochs 300 --dir_data ../../datasetx4 --reset

#python main.py --model EDSR --scale 2 --patch_size 96 --save edsr_baseline_x2 --reset --data_train DIV2K+DIV2K-Q75 --data_test DIV2K+DIV2K-Q75

# ---------- EDSR baseline model (x3) - from EDSR baseline model (x2) ----------
#python main.py --model EDSR --scale 3 --patch_size 144 --save edsr_baseline_x3 --reset --pre_train [pre-trained EDSR_baseline_x2 model dir]

# ---------- EDSR baseline model (x4) - from EDSR baseline model (x2) ----------
#python main.py --model EDSR --scale 4 --save edsr_baseline_x4 --reset --pre_train [pre-trained EDSR_baseline_x2 model dir]

# ---------- EDSR in the paper (x2) ----------
#python main.py --template EDSR_paper --scale 2 --save edsr_x2_spl_1011 --n_GPUs 1 --patch_size 96 --data_test Set5+Set14+B100+Urban100 --dir_data ../../datasetx2 --resume -1

# ---------- EDSR in the paper (x3) - from EDSR (x2) ---------- 
#python main.py --template EDSR_paper --scale 3 --save edsr_spl_x3_1012 --reset --n_GPUs 1 --patch_size 144 --data_test Set5+Set14+B100+Urban100 --dir_data ../../datasetx3 --pre_train /media/E/linwei/SISR/EDSR-PyTorch-SPL/experiment/edsr_x2_spl_1011/model/model_best.pt
#python main.py --model EDSR --scale 3 --save edsr_x3_spl_1012 --n_resblocks 32 --n_feats 256 --res_scale 0.1 --reset --pre_train [pre-trained EDSR model dir]

# ---------- EDSR in the paper (x4) - from EDSR (x2) ----------
#python main.py --model EDSR --scale 4 --save edsr_x4 --n_resblocks 32 --n_feats 256 --res_scale 0.1 --reset --pre_train [pre-trained EDSR_x2 model dir]
#python main.py --template EDSR_paper --scale 4 --save edsr_x4_spl_1013 --n_GPUs 1 --patch_size 192 --data_test Set5+Set14+B100+Urban100 --dir_data ../../datasetx4 --resume -1

Elin24 (Owner) commented Apr 9, 2020

For the ablation study, the command is:

python3 main.py --model EDSR --scale 2 --save edsr_basel_PSPL_x2 --reset --n_GPUs 1 --patch_size 96 --data_train DIV2K --data_test DIV2K --splalpha 1

For the EDSR with the best results, the commands are:

# scale = 2
python3 main.py --template EDSR_paper --scale 2 --save edsr_best_PSPL_x2 --reset --n_GPUs 1 --patch_size 96 --data_test Set5+Set14+B100+Urban100 --dir_data ../../datasetx2 --splalpha 1
# scale = 3
python3 main.py --template EDSR_paper --scale 3 --save edsr_best_PSPL_x3 --reset --n_GPUs 1 --patch_size 144 --data_test Set5+Set14+B100+Urban100 --dir_data ../../datasetx3 --splalpha 1
# scale = 4
python3 main.py --template EDSR_paper --scale 4 --save edsr_best_PSPL_x3 --reset --n_GPUs 1 --patch_size 192 --data_test Set5+Set14+B100+Urban100 --dir_data ../../datasetx4 --pre_train ../experiment/edsr_best_PSPL_x2/model/model_latest.pt --splalpha 1

drcdr commented Apr 9, 2020

Great, thank you @Elin24 for the clarification. Three questions:

  1. For the scale=4 command, I guess it's --save edsr_best_PSPL_x4?
  2. For the scale=3 command, do you / could you use edsr_best_PSPL_x2 as pretraining?
  3. I also asked this here; can you please clarify your image file setup and the datasetx2, datasetx3, and datasetx4 directories?

Elin24 (Owner) commented Apr 10, 2020

Q1: Yes, it should be --save edsr_best_PSPL_x4;

Q2: When I first ran the experiments, none of the settings used pre_train. However, scale x4 could not reach the results reported in the original EDSR paper; I then found that their paper uses pre-training, so I added pre_train there as well. Scale x3 reached its reported numbers easily, so I did not add pre_train for it.

Q3: The directory tree can be inferred from the dataloader. For datasetx2, it looks like this:

datasetx2
├── benchmark
│   ├── B100
│   │   ├── HR
│   │   │   ├── 101085.png
│   │   │   ├── 101087.png
│   │   │   └── ...
│   │   └── LR_bicubic
│   │       └── X2
│   │           ├── 101085x2.png
│   │           ├── 101087x2.png
│   │           └── ...
│   ├── Set14
│   │   ├── bin
│   │   ├── HR
│   │   │   ├── baboon.png
│   │   │   ├── barbara.png
│   │   │   └── ...
│   │   └── LR_bicubic
│   │       └── X2
│   │           ├── baboonx2.png
│   │           ├── barbarax2.png
│   │           └── ...
│   ├── Set5
│   │   ├── bin
│   │   ├── HR
│   │   │   ├── baby.png
│   │   │   └── ...
│   │   └── LR_bicubic
│   │       └── X2
│   │           ├── babyx2.png
│   │           └── ...
│   └── Urban100
│       ├── bin
│       ├── HR
│       │   ├── img_001.png
│       │   ├── img_002.png
│       │   └── ...
│       └── LR_bicubic
│           └── X2
│               ├── img_001x2.png
│               ├── img_002x2.png
│               └── ...
└── DIV2K
    └── ...

datasetx3 and datasetx4 have the same structure as above.
You need to organize benchmark yourself, since the datasets in it only contain HR images and the LR images are obtained through bicubic downsampling. The DIV2K part is what you download (if we use the same hyperlink).

The reason I do not combine them is that the width and height of the HR images in a combined folder would need to be divisible by 12 (the least common multiple of the scales 2, 3, and 4), which would crop some HR images too much (Set5 and Set14).
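
Since the benchmark LR images are produced by bicubic downsampling of the HR images, a minimal sketch of that step could look like the following. It uses Pillow and follows the layout and the "namex2.png" naming shown in the tree above; it is not the repository's actual preprocessing code.

import os
from PIL import Image

def make_lr_bicubic(dataset_dir, scale=2):
    # dataset_dir is e.g. "datasetx2/benchmark/Set5"
    hr_dir = os.path.join(dataset_dir, "HR")
    lr_dir = os.path.join(dataset_dir, "LR_bicubic", f"X{scale}")
    os.makedirs(lr_dir, exist_ok=True)
    for name in sorted(os.listdir(hr_dir)):
        if not name.endswith(".png"):
            continue
        hr = Image.open(os.path.join(hr_dir, name))
        # Crop so width and height are divisible by the scale factor.
        w, h = hr.size
        w, h = w - w % scale, h - h % scale
        hr = hr.crop((0, 0, w, h))
        lr = hr.resize((w // scale, h // scale), Image.BICUBIC)
        stem = os.path.splitext(name)[0]
        # Naming follows the tree above: baby.png -> babyx2.png
        lr.save(os.path.join(lr_dir, f"{stem}x{scale}.png"))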

Elin24 (Owner) commented Apr 10, 2020

For generating the directory tree, I share the code at makedata.py for the benchmark directory. As for DIV2K, you can run ln -s to create symbolic links and save storage.
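
The symlink idea in Python form, where os.symlink plays the role of ln -s (the paths are illustrative, not prescribed by the repository):

import os

# Link one shared DIV2K download into each per-scale dataset tree
# instead of keeping three copies.
shared = os.path.abspath("DIV2K")  # the single downloaded copy
for d in ("datasetx2", "datasetx3", "datasetx4"):
    target = os.path.join(d, "DIV2K")
    os.makedirs(d, exist_ok=True)
    if not os.path.exists(target):
        os.symlink(shared, target)  # equivalent of `ln -s`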

This issue was closed.