
EDSR in TensorFlow

TensorFlow implementation of Enhanced Deep Residual Networks for Single Image Super-Resolution[1].

The models were trained on the Div2K dataset (Train Data, HR images).

Google Summer of Code with OpenCV

This repository was created during the 2019 Google Summer of Code program for OpenCV. The trained models (.pb files) can be used directly for inference in OpenCV via the 'dnn_superres' module. See the OpenCV documentation for how to do this.

Requirements
  • tensorflow
  • numpy
  • cv2

Model
This is the EDSR model, which uses a separate trained model for each scale factor. The architecture is shown below. See the 'mdsr' branch for the MDSR model, which handles multiple scales in a single network.

[EDSR architecture diagram]
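The basic building unit of that architecture, as described in the paper, is a residual block with no batch normalization and a scaled skip connection. A minimal Keras sketch (the filter count and residual-scaling factor here are illustrative defaults, not necessarily this repo's settings):

```python
import tensorflow as tf

def res_block(x, filters=64, scaling=0.1):
    """One EDSR residual block: conv -> ReLU -> conv, scaled, added to the input.

    Unlike SRResNet-style blocks, EDSR removes batch normalization entirely;
    residual scaling (0.1 in the paper's largest model) stabilizes training.
    """
    t = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    t = tf.keras.layers.Conv2D(filters, 3, padding="same")(t)
    t = tf.keras.layers.Lambda(lambda v: v * scaling)(t)  # residual scaling
    return tf.keras.layers.Add()([x, t])                  # skip connection
```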

Training
Download the Div2K dataset. If you want to use another dataset, you will have to calculate that dataset's per-channel mean and set the new mean in ''. Code for calculating the mean can be found in
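The mean computation can be sketched as follows; `per_channel_mean` is a hypothetical helper operating on already-loaded image arrays (a real script would read them from disk first):

```python
import numpy as np

def per_channel_mean(images):
    """Per-channel mean over a collection of HxWx3 images (sizes may differ).

    Accumulates a running sum and pixel count so arbitrarily many images
    can be processed without stacking them into one array.
    """
    total = np.zeros(3, dtype=np.float64)
    pixels = 0
    for img in images:
        total += img.reshape(-1, 3).astype(np.float64).sum(axis=0)
        pixels += img.shape[0] * img.shape[1]
    return total / pixels
```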

Running
  • Train from scratch: python --train --fromscratch --scale <scale> --traindir /path-to-train-images/

  • Resume/load a previous model: python --train --scale <scale> --traindir /path-to-train-images/

Test (compares EDSR against bicubic interpolation using the PSNR metric): python --test --scale <scale> --image /path-to-image/
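The PSNR metric used in that comparison can be sketched as:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized images, in dB.

    Higher is better; identical images give infinity. max_val is the
    maximum possible pixel value (255 for 8-bit images).
    """
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```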

Upscale (with EDSR): python --upscale --scale <scale> --image /path-to-image/

Export to .pb: python --export --scale <scale>

Extra arguments (number of resblocks, filters, batch size, learning rate, etc.): python --help

Example
(1) Original picture
(2) Input image
(3) Bicubic scaled (3x) image
(4) EDSR scaled (3x) image

Notes
The .pb files in this repository are quantized. This is done purely to shrink the file sizes from ~150MB to ~40MB, because GitHub does not allow uploads above 100MB. The performance loss due to quantization is minimal. (To quantize during export, use --quant <1, 2 or 3>; 2 is recommended.)

References
[1] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee, "Enhanced Deep Residual Networks for Single Image Super-Resolution," 2nd NTIRE (New Trends in Image Restoration and Enhancement) workshop and challenge on image super-resolution, in conjunction with CVPR 2017.
