Merge branch 'develop'

sanghyun-son committed Aug 21, 2019
2 parents 565d6fe + bf5034d commit 6f5426b

Showing 4 changed files with 41 additions and 30 deletions.
35 changes: 21 additions & 14 deletions README.md
@@ -1,3 +1,7 @@
**About PyTorch 1.2.0**
* Now the master branch supports PyTorch 1.2.0 by default.
* Due to a serious version problem (especially with torch.utils.data.dataloader), MDSR functions are temporarily disabled. If you have to train/evaluate the MDSR model, please use the legacy branches.

# EDSR-PyTorch

**About PyTorch 1.1.0**
@@ -20,7 +24,7 @@ If you find our work useful in your research or publication, please cite our work
year = {2017}
}
```
We provide scripts for reproducing all the results from our paper. You can train your own model from scratch, or use pre-trained model to enlarge your images.
We provide scripts for reproducing all the results from our paper. You can train your model from scratch, or use a pre-trained model to enlarge your images.

**Differences from the Torch version**
* The code is much more compact. (Removed all unnecessary parts.)
@@ -46,8 +50,8 @@ git clone https://github.com/thstkdgus35/EDSR-PyTorch
cd EDSR-PyTorch
```

## Quick start (Demo)
You can test our super-resolution algorithm with your own images. Place your images in ``test`` folder. (like ``test/<your_image>``) We support **png** and **jpeg** files.
## Quickstart (Demo)
You can test our super-resolution algorithm with your images. Place your images in the ``test`` folder (like ``test/<your_image>``). We support **png** and **jpeg** files.

Run the script in the ``src`` folder. Before you run the demo, please uncomment the appropriate line in ```demo.sh``` for the model you want to execute.
```bash
sh demo.sh
```

@@ -123,17 +127,17 @@
* Basically, this function first splits a large image into small patches. Those patches are merged back after super-resolution. I checked this function with 12GB memory and a 4000 x 2000 input image at scale 4. (Therefore, the output will be 16000 x 8000.) A minimal sketch of the split-and-merge idea follows below.
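
Below is a minimal sketch of that split-and-merge idea, not the repository's exact implementation; `model`, `scale`, and `patch` are stand-ins for any image-to-image SR module, its upscaling factor, and the patch size. (A real chop typically also overlaps patches to avoid seams at patch borders, which this sketch omits.)

```python
import torch

def chop_forward(model, x, scale, patch=200):
    # x: (B, C, H, W) low-resolution tensor; the output is (B, C, H*s, W*s).
    b, c, h, w = x.size()
    out = x.new_zeros(b, c, h * scale, w * scale)
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            lr_patch = x[:, :, top:top + patch, left:left + patch]
            with torch.no_grad():  # inference only
                sr_patch = model(lr_patch)
            # Paste the super-resolved patch into the corresponding
            # location of the full-size output.
            out[:, :,
                top * scale:top * scale + sr_patch.size(2),
                left * scale:left * scale + sr_patch.size(3)] = sr_patch
    return out
```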

* Feb 21, 2018
* Fixed the problem when loading pre-trained multi-gpu model.
* Fixed the problem when loading pre-trained multi-GPU model.
* Added pre-trained scale 2 baseline model.
* This code now only saves the best-performing model by default. For MDSR, 'the best' can be ambiguous. Use --save_models argument to save all the intermediate models.
* This code now only saves the best-performing model by default. For MDSR, 'the best' can be ambiguous. Use --save_models argument to keep all the intermediate models.
* PyTorch 0.3.1 changed its implementation of the DataLoader. Therefore, I also changed my implementation of MSDataLoader. You can find it on the feature/dataloader branch.

* Feb 23, 2018
* Now PyTorch 0.3.1 is default. Use legacy/0.3.0 branch if you use the old version.
* Now PyTorch 0.3.1 is the default. Use the legacy/0.3.0 branch if you use the old version.
* With the new ``src/data/DIV2K.py`` code, one can easily create a new data class for super-resolution.
* New binary data pack. (Please remove the ``DIV2K_decoded`` folder from your dataset if you have one.)
* With ``--ext bin``, this code will automatically generates and saves the binary data pack that corresponds to previous ``DIV2K_decoded``. (This requires huge RAM (~45GB, Swap can be used.), so please be careful.)
* If you cannot make the binary pack, just use the default setting (``--ext img``).
* With ``--ext bin``, this code will automatically generate and save the binary data pack that corresponds to the previous ``DIV2K_decoded``. (This requires huge RAM (~45GB; swap can be used), so please be careful.) A sketch of the idea appears just after this entry.
* If you cannot make the binary pack, use the default setting (``--ext img``).
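
A rough sketch of what a binary pack buys you. The function name, the ``.npy`` format, and the directory layout here are illustrative assumptions, not the repository's actual pack format; the point is that the expensive png decode happens only once:

```python
import glob
import os

import imageio
import numpy as np

def make_binary_pack(png_dir, bin_dir):
    # Decode every png once and cache the raw array so that later
    # epochs reload it quickly instead of re-decoding the png.
    os.makedirs(bin_dir, exist_ok=True)
    for png in sorted(glob.glob(os.path.join(png_dir, '*.png'))):
        arr = imageio.imread(png)  # the slow decode happens only here
        name = os.path.splitext(os.path.basename(png))[0]
        np.save(os.path.join(bin_dir, name + '.npy'), arr)
```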

* Fixed a bug where the PSNR in the log and the PSNR calculated from the saved images did not match.
* Now saved images have better quality! (PSNR is ~0.1dB higher than the original code.)
@@ -146,23 +150,23 @@ sh demo.sh
* Mar 11, 2018
* Fixed some typos in the code and script.
* Now --ext img is the default setting. Although we recommend --ext bin for training, please use --ext img when you use --test_only.
* Skip_batch operation is implemented. Use --skip_threshold argument to skip the batch that you want to ignore. Although this function is not exactly same with that of Torch7 version, it will work as you expected.
* Skip_batch operation is implemented. Use the --skip_threshold argument to skip batches that you want to ignore. Although this function is not exactly the same as that of the Torch7 version, it will work as you expect. (A sketch of the idea follows below.)
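
A minimal sketch of the skip-batch idea, assuming `recent_losses` tracks the last few batch losses; the exact threshold semantics here are illustrative, not the repository's code:

```python
def should_skip(batch_loss, recent_losses, skip_threshold=8.0):
    # Skip a batch whose loss explodes relative to the running average,
    # so a single bad batch cannot derail training.
    if not recent_losses:
        return False
    average = sum(recent_losses) / len(recent_losses)
    return batch_loss > skip_threshold * average
```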

* Mar 20, 2018
* Use ``--ext sep_reset`` to pre-decode large png files. Those decoded files will be saved to the same directory with DIV2K png files. After the first run, you can use ``--ext sep`` to save time.
* Use ``--ext sep-reset`` to pre-decode large png files. Those decoded files will be saved in the same directory as the DIV2K png files. After the first run, you can use ``--ext sep`` to save time.
* Now supports various benchmark datasets. For example, try ``--data_test Set5`` to test your model on the Set5 images.
* Changed the behavior of skip_batch.

* Mar 29, 2018
* We now provide all models from our paper.
* We also provide ``MDSR_baseline_jpeg`` model that suppresses JPEG artifacts in original low-resolution image. Please use it if you have any trouble.
* We also provide ``MDSR_baseline_jpeg`` model that suppresses JPEG artifacts in the original low-resolution image. Please use it if you have any trouble.
* The ``MyImage`` dataset is changed to the ``Demo`` dataset. Also, it works more efficiently than before.
* Some code and scripts are rewritten.

* Apr 9, 2018
* VGG and adversarial losses are implemented based on [SRGAN](http://openaccess.thecvf.com/content_cvpr_2017/papers/Ledig_Photo-Realistic_Single_Image_CVPR_2017_paper.pdf). [WGAN](https://arxiv.org/abs/1701.07875) and [gradient penalty](https://arxiv.org/abs/1704.00028) are also implemented, but they are not tested yet. (A sketch of the VGG loss idea appears below.)
* Much of the code is refactored. If you find a bug, please report it.
* [D-DBPN](https://arxiv.org/abs/1803.02735) is implemented. Default setting is D-DBPN-L.
* [D-DBPN](https://arxiv.org/abs/1803.02735) is implemented. The default setting is D-DBPN-L.
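
A hedged sketch of a VGG (perceptual) loss in the SRGAN style; the layer cut (`relu5_4`) and the plain MSE comparison are assumptions, not necessarily the settings used here:

```python
import torch.nn as nn
import torchvision.models as models

class VGGLoss(nn.Module):
    # MSE between deep VGG-19 features of the SR output and the HR target.
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(pretrained=True).features[:36]  # up to relu5_4
        for p in vgg.parameters():
            p.requires_grad = False  # the feature extractor stays frozen
        self.vgg = vgg.eval()
        self.mse = nn.MSELoss()

    def forward(self, sr, hr):
        return self.mse(self.vgg(sr), self.vgg(hr))
```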

* Apr 26, 2018
* Compatible with PyTorch 0.4.0
@@ -171,9 +175,12 @@ sh demo.sh

* July 22, 2018
* Thanks for the recent commits that contain RDN and RCAN. Please see ``code/demo.sh`` to train/test those models.
* Now the dataloader is much stable than the previous version. Please erase ``DIV2K/bin`` folder that is created before this commit. Also, please avoid to use ``--ext bin`` argument. Our code will automatically pre-decode png images before training. If you do not have enough spaces(~10GB) in your disk, we recommend ``--ext img``(But SLOW!).
* Now the dataloader is much more stable than the previous version. Please erase the ``DIV2K/bin`` folder that was created before this commit. Also, please avoid using the ``--ext bin`` argument. Our code will automatically pre-decode png images before training. If you do not have enough space (~10GB) on your disk, we recommend ``--ext img`` (but SLOW!).

* Oct 18, 2018
* with ``--pre_train download``, pretrained models will be automatically downloaded from server.
* With ``--pre_train download``, pretrained models will be automatically downloaded from the server (see the sketch below).
* Supports video input/output (inference only). Try with ``--data_test video --dir_demo [video file directory]``.
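
A sketch of the download idea using PyTorch's built-in checkpoint cache; the function name and URL are placeholders, and the project's actual download mechanism may differ:

```python
import torch

def load_pretrained(model, url):
    # Fetch the state dict from a URL (cached locally by torch.hub)
    # and load it into an already-constructed model.
    state = torch.hub.load_state_dict_from_url(url, progress=True)
    model.load_state_dict(state)
    return model

# Hypothetical usage; not the project's real server URL:
# load_pretrained(edsr, 'https://example.com/models/edsr_baseline_x2.pt')
```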

* About PyTorch 1.0.0
* We support PyTorch 1.0.0. If you prefer the previous versions of PyTorch, use legacy branches.
* ``--ext bin`` is not supported. Also, please erase your bin files with ``--ext sep-reset``. Once you successfully build those bin files, you can remove ``-reset`` from the argument.
26 changes: 14 additions & 12 deletions src/data/__init__.py
@@ -1,5 +1,6 @@
from importlib import import_module
from dataloader import MSDataLoader
#from dataloader import MSDataLoader
from torch.utils.data import dataloader
from torch.utils.data import ConcatDataset

# This is a simple wrapper function for ConcatDataset
@@ -22,12 +23,12 @@ def __init__(self, args):
m = import_module('data.' + module_name.lower())
datasets.append(getattr(m, module_name)(args, name=d))

self.loader_train = MSDataLoader(
args,
self.loader_train = dataloader.DataLoader(
MyConcatDataset(datasets),
batch_size=args.batch_size,
shuffle=True,
pin_memory=not args.cpu
pin_memory=not args.cpu,
num_workers=args.n_threads,
)

self.loader_test = []
@@ -40,11 +41,12 @@ def __init__(self, args):
m = import_module('data.' + module_name.lower())
testset = getattr(m, module_name)(args, train=False, name=d)

self.loader_test.append(MSDataLoader(
args,
testset,
batch_size=1,
shuffle=False,
pin_memory=not args.cpu
))

self.loader_test.append(
dataloader.DataLoader(
testset,
batch_size=1,
shuffle=False,
pin_memory=not args.cpu,
num_workers=args.n_threads,
)
)
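
The substance of this file's change: the custom MSDataLoader is replaced by the stock ``torch.utils.data.DataLoader``, with the worker count taken from ``args.n_threads``. A self-contained sketch of the new pattern, using stand-in tensors rather than the repository's datasets:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-in LR/HR pairs; the real code wraps DIV2K-style datasets.
lr = torch.randn(8, 3, 48, 48)
hr = torch.randn(8, 3, 96, 96)
dataset = ConcatDataset([TensorDataset(lr, hr)])

# num_workers=0 keeps the sketch portable; the real code passes
# num_workers=args.n_threads.
loader = DataLoader(dataset, batch_size=4, shuffle=True,
                    pin_memory=True, num_workers=0)

for lr_batch, hr_batch in loader:
    print(lr_batch.shape, hr_batch.shape)
    break
```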
2 changes: 1 addition & 1 deletion src/demo.sh
@@ -1,5 +1,5 @@
# EDSR baseline model (x2) + JPEG augmentation
#python main.py --model EDSR --scale 2 --patch_size 96 --save edsr_baseline_x2 --reset
python main.py --model EDSR --scale 2 --patch_size 96 --save edsr_baseline_x2 --reset
#python main.py --model EDSR --scale 2 --patch_size 96 --save edsr_baseline_x2 --reset --data_train DIV2K+DIV2K-Q75 --data_test DIV2K+DIV2K-Q75

# EDSR baseline model (x3) - from EDSR baseline model (x2)
8 changes: 5 additions & 3 deletions src/trainer.py
@@ -37,13 +37,15 @@ def train(self):
self.model.train()

timer_data, timer_model = utility.timer(), utility.timer()
for batch, (lr, hr, _, idx_scale) in enumerate(self.loader_train):
# TEMP
self.loader_train.dataset.set_scale(0)
for batch, (lr, hr, _,) in enumerate(self.loader_train):
lr, hr = self.prepare(lr, hr)
timer_data.hold()
timer_model.tic()

self.optimizer.zero_grad()
sr = self.model(lr, idx_scale)
sr = self.model(lr, 0)
loss = self.loss(sr, hr)
loss.backward()
if self.args.gclip > 0:
@@ -84,7 +86,7 @@ def test(self):
for idx_data, d in enumerate(self.loader_test):
for idx_scale, scale in enumerate(self.scale):
d.dataset.set_scale(idx_scale)
for lr, hr, filename, _ in tqdm(d, ncols=80):
for lr, hr, filename in tqdm(d, ncols=80):
lr, hr = self.prepare(lr, hr)
sr = self.model(lr, idx_scale)
sr = utility.quantize(sr, self.args.rgb_range)
