A deep learning approach to improve the resolution of images

DeepDepixelate 🕵️‍♀️

We’ve all seen that moment in a crime thriller where the hero asks the tech guy to zoom and enhance an image: number plates become readable, pixelated faces become clear, and whatever evidence is needed to solve the case is found. And we’ve all scoffed, laughed and muttered something under our breath about how lost information can’t be restored. Not anymore. Well, sort of. It turns out the information is only partly lost. Just as we humans infer the detail of a blurry image from what we know about the world, we can apply the same logic to images to recover ‘photorealistic’ details lost to resolution effects. This is the essence of Super Resolution: unlocking information at the sub-pixel scale through a learned understanding of how low-resolution images map to high-resolution ones.


To achieve this we implement *Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network* (SRGAN).

Generative adversarial networks (GANs) provide a powerful framework for generating plausible-looking natural images with high perceptual quality. The GAN procedure encourages the reconstructions to move towards regions of the search space with high probability of containing photo-realistic images and thus closer to the natural image manifold.

SRGAN is a GAN-based network in which the generator (G) learns to generate SR images from LR images that are as close as possible to the HR originals, while the discriminator (D) learns to distinguish generated SR images from real images. The generator takes advantage of ResNet-style residual blocks and sub-pixel convolution for upsampling. Its loss combines a content (perceptual) loss with an adversarial loss.
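The sub-pixel convolution used for upsampling can be sketched as a pure array rearrangement. The NumPy function below is a hypothetical stand-in for illustration (the repo's actual code is not shown here): a convolution first produces `C·r²` channels, which are then folded into an `r×`-larger spatial grid.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    This is the sub-pixel convolution (pixel shuffle) step: channels
    produced by the preceding convolution are folded into spatial
    resolution instead of being upsampled by interpolation.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Example: a 4-channel 2x2 feature map becomes a 1-channel 4x4 image for r=2.
x = np.arange(16, dtype=float).reshape(4, 2, 2)
y = pixel_shuffle(x, 2)
```

This mirrors the behavior of PyTorch's `nn.PixelShuffle`: output pixel `(h*r + i, w*r + j)` of channel `c` comes from input channel `c*r*r + i*r + j` at position `(h, w)`.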


It is mostly composed of convolution layers, batch normalization, and parameterized ReLU (PReLU) activations. The generator also implements skip connections similar to ResNet. The notation “k3n64s1” denotes a convolution layer with 3x3 kernel filters, 64 output channels, and stride 1. SRGAN Architecture
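The PReLU activation differs from plain ReLU in that the slope for negative inputs is learned rather than fixed at zero. A minimal NumPy sketch (in SRGAN `alpha` would be a learned per-channel parameter; here it is a fixed scalar purely for illustration):

```python
import numpy as np

def prelu(x, alpha=0.25):
    # Identity for positive inputs; scaled by the (normally learned)
    # slope alpha for negative inputs.
    return np.where(x > 0, x, alpha * x)

out = prelu(np.array([-2.0, 0.0, 3.0]))
# -> [-0.5, 0.0, 3.0]
```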

Loss Function

The loss function for the generator is composed of the content loss (reconstruction loss) and the adversarial loss. SRGAN Perceptual Loss Function
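As a rough sketch of how the two terms combine: in the paper the content loss is an MSE over VGG feature maps and the adversarial term is weighted by 10⁻³. The arrays and discriminator output below are hypothetical stand-ins, not the repo's actual training code.

```python
import numpy as np

def perceptual_loss(sr_features, hr_features, d_sr, adv_weight=1e-3):
    """Perceptual loss = content loss + weighted adversarial loss.

    sr_features / hr_features: feature maps of the SR and HR images
    (VGG activations in the paper; plain arrays here).
    d_sr: discriminator's probability that the SR image is real.
    """
    # Content (reconstruction) loss: MSE between feature maps.
    content = np.mean((sr_features - hr_features) ** 2)
    # Adversarial loss: -log D(G(LR)), pushing G to fool D.
    adversarial = -np.log(d_sr + 1e-12)
    return content + adv_weight * adversarial
```

When the SR features match the HR features and the discriminator is fully fooled (`d_sr` near 1), both terms vanish; a confident discriminator (`d_sr` near 0) drives the adversarial term up, which is what steers the generator toward the natural image manifold described above.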


To train:

To test:

1. On your own trained checkpoint model

  • Run the script: `python --image_path [PATH TO YOUR IMAGE] --checkpoint_model [PATH TO YOUR CHECKPOINT MODEL]`

2. On a pretrained checkpoint model

  • Run the script: `python --image_path [PATH TO YOUR IMAGE] --checkpoint_model pretrained_models/RRDB_ESRGAN_x4.pth`