Implementation of the FSRCNN proposed by Dong et al. in the article referenced below. Each network is named after the dataset it was trained on.
This network was trained on the MNIST dataset, so its super-resolution works only on handwritten digits.
This network uses a general-purpose dataset with only 200 training images and 100 test images, so the results could be better.
This network uses the same dataset as before, but with data augmentation.
This network uses a dataset of 31k images. Its input shape is (150, 150, 3) and its output shape is (300, 300, 3). This network gives the best results in the whole project.
This network uses the same dataset as the previous one, but with increased network complexity, which improves the results for x4 upscaling.
The images shown at the beginning were produced with this network.
The implementation follows the structure described in the article by Chao Dong et al., with some modifications:
- Because we train on colour images, the input needs 3 channels (except for the MNIST network).
- We use the Adam optimizer instead of SGD (Stochastic Gradient Descent).
- For Flickr x4 we increase the network complexity by turning m=4 into m=7.
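
The structure above, with the listed modifications, can be sketched in Keras. This is a minimal sketch, not the project's exact code: the function name `build_fsrcnn` and the default hyperparameters d=56 and s=12 (the values from the FSRCNN article) are assumptions; `m` is the number of mapping layers changed to 7 for Flickr x4.

```python
# Hypothetical FSRCNN sketch (Dong et al.) with the modifications above:
# 3-channel colour input, Adam optimizer, configurable m mapping layers.
from tensorflow.keras import layers, models, optimizers

def build_fsrcnn(scale=2, channels=3, d=56, s=12, m=4):
    """Feature extraction -> shrinking -> m mapping layers
    -> expanding -> deconvolution upscaling by `scale`."""
    inp = layers.Input(shape=(None, None, channels))
    x = layers.Conv2D(d, 5, padding="same")(inp)      # feature extraction
    x = layers.PReLU(shared_axes=[1, 2])(x)
    x = layers.Conv2D(s, 1, padding="same")(x)        # shrinking
    x = layers.PReLU(shared_axes=[1, 2])(x)
    for _ in range(m):                                # non-linear mapping
        x = layers.Conv2D(s, 3, padding="same")(x)
        x = layers.PReLU(shared_axes=[1, 2])(x)
    x = layers.Conv2D(d, 1, padding="same")(x)        # expanding
    x = layers.PReLU(shared_axes=[1, 2])(x)
    out = layers.Conv2DTranspose(channels, 9, strides=scale,
                                 padding="same")(x)   # deconvolution
    model = models.Model(inp, out)
    model.compile(optimizer=optimizers.Adam(), loss="mse")
    return model

# Flickr x2 style network: (150, 150, 3) input -> (300, 300, 3) output.
model = build_fsrcnn(scale=2, channels=3, m=4)
```

For the Flickr x4 variant one would call `build_fsrcnn(scale=4, m=7)` instead.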
CHOLLET FRANÇOIS, Deep Learning with Python, 2018.
CHAO DONG, CHEN CHANGE LOY, XIAOOU TANG, Accelerating the Super-Resolution Convolutional Neural Network, 2016. https://arxiv.org/pdf/1608.00367.pdf
KINGMA DIEDERIK P., BA JIMMY LEI, Adam: A Method for Stochastic Optimization, 2014. https://arxiv.org/pdf/1412.6980.pdf
CHAO DONG, CHEN CHANGE LOY, KAIMING HE, XIAOOU TANG, Image Super-Resolution Using Deep Convolutional Networks, 2015. https://arxiv.org/pdf/1501.00092.pdf
GOODFELLOW IAN, BENGIO YOSHUA, COURVILLE AARON, Deep Learning, 2016.
NIELSEN MICHAEL, Neural Networks and Deep Learning, 2019.