Download: ⏬ Google Drive: Pretrained Models | Reproduced Experiments ⏬ 百度网盘 (Baidu Netdisk): Pretrained Models | Reproduced Experiments
We provide:
- Official models converted directly from the officially released models
- Models reproduced with BasicSR, together with pre-trained weights and example training logs

Put the downloaded models in the `experiments/pretrained_models` folder.
[Download official pre-trained models] (Google Drive, 百度网盘)
You can use the following script to download pre-trained models from Google Drive:

```bash
python scripts/download_pretrained_models.py ESRGAN
# method can be ESRGAN, EDVR, StyleGAN, EDSR, DUF, DFDNet, dlib
```
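Once a checkpoint is in `experiments/pretrained_models`, it can be loaded into the matching architecture. Below is a minimal sketch for the ESRGAN generator (RRDBNet from `basicsr.archs.rrdbnet_arch`); the file name is a placeholder, and BasicSR checkpoints typically keep the weights under a `params` or `params_ema` key:

```python
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet

# Hypothetical file name; use the checkpoint you actually downloaded.
model_path = 'experiments/pretrained_models/ESRGAN_model.pth'

# ESRGAN generator (RRDBNet); adjust the arguments to match the checkpoint.
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)

# Checkpoints may store the state dict directly or under 'params' / 'params_ema'.
ckpt = torch.load(model_path, map_location='cpu')
state_dict = ckpt.get('params_ema', ckpt.get('params', ckpt))
model.load_state_dict(state_dict, strict=True)
model.eval()
```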
[Download reproduced models and logs] (Google Drive, 百度网盘)
In addition, we upload the training process and curves to wandb.
During evaluation:
- We crop `scale` border pixels on each border
- Evaluation is performed on RGB channels
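A minimal sketch of this protocol using BasicSR's metric helpers; the image paths and the scale below are placeholders:

```python
import cv2
from basicsr.metrics import calculate_psnr, calculate_ssim

scale = 4  # upscaling factor of the model being evaluated

# Hypothetical paths to a restored image and its ground truth.
sr_img = cv2.imread('results/baboon_SRx4.png', cv2.IMREAD_COLOR)
gt_img = cv2.imread('datasets/Set14/GT/baboon.png', cv2.IMREAD_COLOR)

# Crop `scale` pixels on each border and evaluate on RGB (test_y_channel=False).
psnr = calculate_psnr(sr_img, gt_img, crop_border=scale, test_y_channel=False)
ssim = calculate_ssim(sr_img, gt_img, crop_border=scale, test_y_channel=False)
print(f'PSNR: {psnr:.4f} dB, SSIM: {ssim:.4f}')
```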
Exp Name | Set5 (PSNR/SSIM) | Set14 (PSNR/SSIM) | DIV2K100 (PSNR/SSIM) |
---|---|---|---|
EDSR_Mx2_f64b16_DIV2K_official-3ba7b086 | 35.7768 / 0.9442 | 31.4966 / 0.8939 | 34.6291 / 0.9373 |
EDSR_Mx3_f64b16_DIV2K_official-6908f88a | 32.3597 / 0.903 | 28.3932 / 0.8096 | 30.9438 / 0.8737 |
EDSR_Mx4_f64b16_DIV2K_official-0c287733 | 30.1821 / 0.8641 | 26.7528 / 0.7432 | 28.9679 / 0.8183 |
EDSR_Lx2_f256b32_DIV2K_official-be38e77d | 35.9979 / 0.9454 | 31.8583 / 0.8971 | 35.0495 / 0.9407 |
EDSR_Lx3_f256b32_DIV2K_official-3660f70d | 32.643 / 0.906 | 28.644 / 0.8152 | 31.28 / 0.8798 |
EDSR_Lx4_f256b32_DIV2K_official-76ee1c8f | 30.5499 / 0.8701 | 27.0011 / 0.7509 | 29.277 / 0.8266 |
Experiment name conventions are in Config.md.
Exp Name | Set5 (PSNR/SSIM) | Set14 (PSNR/SSIM) | DIV2K100 (PSNR/SSIM) |
---|---|---|---|
001_MSRResNet_x4_f64b16_DIV2K_1000k_B16G1_wandb | 30.2468 / 0.8651 | 26.7817 / 0.7451 | 28.9967 / 0.8195 |
002_MSRResNet_x2_f64b16_DIV2K_1000k_B16G1_001pretrain_wandb | 35.7483 / 0.9442 | 31.5403 / 0.8937 | 34.6699 / 0.9377 |
003_MSRResNet_x3_f64b16_DIV2K_1000k_B16G1_001pretrain_wandb | 32.4038 / 0.9032 | 28.4418 / 0.8106 | 30.9726 / 0.8743 |
004_MSRGAN_x4_f64b16_DIV2K_400k_B16G1_wandb | 28.0158 / 0.8087 | 24.7474 / 0.6623 | 26.6504 / 0.7462 |
201_EDSR_Mx2_f64b16_DIV2K_300k_B16G1_wandb | 35.7395 / 0.944 | 31.4348 / 0.8934 | 34.5798 / 0.937 |
202_EDSR_Mx3_f64b16_DIV2K_300k_B16G1_201pretrain_wandb | 32.315 / 0.9026 | 28.3866 / 0.8088 | 30.9095 / 0.8731 |
203_EDSR_Mx4_f64b16_DIV2K_300k_B16G1_201pretrain_wandb | 30.1726 / 0.8641 | 26.721 / 0.743 | 28.9506 / 0.818 |
204_EDSR_Lx2_f256b32_DIV2K_300k_B16G1_wandb | 35.9792 / 0.9453 | 31.7284 / 0.8959 | 34.9544 / 0.9399 |
205_EDSR_Lx3_f256b32_DIV2K_300k_B16G1_204pretrain_wandb | 32.6467 / 0.9057 | 28.6859 / 0.8152 | 31.2664 / 0.8793 |
206_EDSR_Lx4_f256b32_DIV2K_300k_B16G1_204pretrain_wandb | 30.4718 / 0.8695 | 26.9616 / 0.7502 | 29.2621 / 0.8265 |
In the evaluation, we include all the input frames and do not crop any border pixels unless otherwise stated.
We do not use the self-ensemble (flip testing) strategy or any other post-processing methods.
Name convention: `EDVR_(training dataset)_(track name)_(model complexity)`
- track name. There are four tracks in the NTIRE 2019 Challenges on Video Restoration and Enhancement:
  - SR: super-resolution with a fixed downsampling kernel (the MATLAB bicubic downsampling kernel is frequently used). Most previous video SR methods focus on this setting.
  - SRblur: the inputs are additionally degraded with motion blur.
  - deblur: standard deblurring (motion blur).
  - deblurcomp: motion blur + video compression artifacts.
- model complexity (see the sketch after this list)
  - L (Large): # of channels = 128, # of back residual blocks = 40. This setting is used in our competition submission.
  - M (Moderate): # of channels = 64, # of back residual blocks = 10.
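For reference, this is roughly how the two complexities map onto the EDVR architecture. This is only a sketch assuming the `EDVR` constructor in `basicsr.archs.edvr_arch`; all other arguments are left at their defaults and may need adjusting per track (e.g. the deblur tracks use the pre-deblur module and high-resolution inputs):

```python
from basicsr.archs.edvr_arch import EDVR

# M (Moderate): 64 channels, 10 back (reconstruction) residual blocks.
edvr_m = EDVR(num_feat=64, num_reconstruct_block=10)

# L (Large): 128 channels, 40 back (reconstruction) residual blocks,
# the setting used in the competition submission.
edvr_l = EDVR(num_feat=128, num_reconstruct_block=40)
```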
Model name | [Test Set] PSNR/SSIM |
---|---|
EDVR_Vimeo90K_SR_L | [Vid4] (Y¹) 27.35/0.8264 [↓Results] (RGB) 25.83/0.8077 |
EDVR_REDS_SR_M | [REDS] (RGB) 30.53/0.8699 [↓Results] |
EDVR_REDS_SR_L | [REDS] (RGB) 31.09/0.8800 [↓Results] |
EDVR_REDS_SRblur_L | [REDS] (RGB) 28.88/0.8361 [↓Results] |
EDVR_REDS_deblur_L | [REDS] (RGB) 34.80/0.9487 [↓Results] |
EDVR_REDS_deblurcomp_L | [REDS] (RGB) 30.24/0.8567 [↓Results] |
¹ Y or RGB denotes evaluation on Y (luminance) or RGB channels.
Model name | [Test Set] PSNR/SSIM |
---|---|
EDVR_REDS_SR_Stage2 | [REDS] (RGB) / [↓Results] |
EDVR_REDS_SRblur_Stage2 | [REDS] (RGB) / [↓Results] |
EDVR_REDS_deblur_Stage2 | [REDS] (RGB) / [↓Results] |
EDVR_REDS_deblurcomp_Stage2 | [REDS] (RGB) / [↓Results] |
The DUF models are converted from the officially released models.
Model name | [Test Set] PSNR/SSIM¹ | Official Results² |
---|---|---|
DUF_x4_52L_official³ | [Vid4] (Y⁴) 27.33/0.8319 [↓Results] (RGB) 25.80/0.8138 | (Y) 27.33/0.8318 [↓Results] (RGB) 25.79/0.8136 |
DUF_x4_28L_official | [Vid4] | |
DUF_x4_16L_official | [Vid4] | |
DUF_x3_16L_official | [Vid4] | |
DUF_x2_16L_official | [Vid4] | |
¹ We crop eight pixels near the image boundary for DUF due to its severe boundary effects.
² The official results are obtained by running the official codes and models.
³ Different from the official codes, where zero padding is used for border frames, we use the new_info strategy (see the sketch at the end of this section).
⁴ Y or RGB denotes evaluation on Y (luminance) or RGB channels.
The TOF (TOFlow) models are converted from the officially released models.
Model name | [Test Set] PSNR/SSIM | Official Results¹ |
---|---|---|
TOF_official² | [Vid4] (Y³) 25.86/0.7626 [↓Results] (RGB) 24.38/0.7403 | (Y) 25.89/0.7651 [↓Results] (RGB) 24.41/0.7428 |
¹ The official results are obtained by running the official codes and models. Note that TOFlow does not provide a strategy for recovering border frames, so we simply use a replicate strategy for border frames.
² The converted model gives slightly different results due to implementation differences. We use the new_info strategy for border frames (illustrated below).
³ Y or RGB denotes evaluation on Y (luminance) or RGB channels.
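The replicate and new_info strategies mentioned in the notes above differ only in which frame indices are fed to the network at clip borders. The following is an illustrative sketch (our own function, not the exact library code): replicate repeats the first/last frame, while new_info substitutes extra frames from the other side of the window so that border frames still see unseen content.

```python
def generate_frame_indices(center_idx, num_total, num_frames, padding='new_info'):
    """Indices for a temporal window of `num_frames` centred on `center_idx`
    in a clip with `num_total` frames (illustrative re-implementation)."""
    max_idx = num_total - 1
    half = num_frames // 2
    indices = []
    for i in range(center_idx - half, center_idx + half + 1):
        if i < 0:  # window runs past the start of the clip
            idx = 0 if padding == 'replicate' else (center_idx + half) + (-i)
        elif i > max_idx:  # window runs past the end of the clip
            idx = max_idx if padding == 'replicate' else (center_idx - half) - (i - max_idx)
        else:
            idx = i
        indices.append(idx)
    return indices


# First frame of a 100-frame clip, 7-frame window:
print(generate_frame_indices(0, 100, 7, padding='replicate'))  # [0, 0, 0, 0, 1, 2, 3]
print(generate_frame_indices(0, 100, 7, padding='new_info'))   # [6, 5, 4, 0, 1, 2, 3]
```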