DeepSUM: Deep neural network for Super-resolution of Unregistered Multitemporal images

DeepSUM is a novel Multi-Image Super-Resolution (MISR) deep neural network that exploits both spatial and temporal correlations to recover a single high-resolution image from multiple unregistered low-resolution images.

This repository contains a Python/TensorFlow implementation of DeepSUM, trained and tested on the PROBA-V dataset provided by ESA's Advanced Concepts Team in the context of the Kelvin competition.

DeepSUM is the winner of the PROBA-V SR challenge.

BibTex reference:

@article{2019arXiv190706490B,
       author = {{Bordone Molini}, Andrea and {Valsesia}, Diego and {Fracastoro}, Giulia and {Magli}, Enrico},
        title = "{DeepSUM: Deep neural network for Super-resolution of Unregistered Multitemporal images}",
      journal = {arXiv e-prints},
     keywords = {Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Machine Learning},
         year = "2019",
        month = "Jul",
          eid = {arXiv:1907.06490},
        pages = {arXiv:1907.06490},
archivePrefix = {arXiv},
       eprint = {1907.06490},
 primaryClass = {eess.IV}
}

Setup to get started

Make sure you have Python3 and all the required python packages installed:

pip install -r requirements.txt

Load data from the Kelvin Competition and create the training and validation sets

  • Download the PROBA-V dataset from the Kelvin Competition and save it under ./dataset_creation/probav_data
  • Load the dataset from the directories and save it to pickles by running the Save_dataset_pickles.ipynb notebook
  • Run the Create_dataset.ipynb notebook to create the training and validation datasets for both the NIR and RED bands
  • To save RAM, we advise extracting the best 9 images of each imageset based on the quality masks: run the Save_best9_from_dataset.ipynb notebook after Create_dataset.ipynb. Depending on which dataset you want to use (full or best 9), set the 'full' parameter in the config file accordingly.
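The best-9 selection described above can be sketched as follows. This is a minimal illustration, not the notebook's actual code: it assumes each scene comes with per-image quality masks (True = clear pixel) and simply keeps the images with the highest fraction of clear pixels.

```python
import numpy as np

def select_best9(lr_images, masks, k=9):
    """Keep the k LR images whose masks mark the most clear pixels.

    lr_images: list of HxW arrays; masks: list of HxW boolean arrays
    where True marks a clear (usable) pixel.
    """
    clearance = [m.mean() for m in masks]      # fraction of clear pixels per image
    order = np.argsort(clearance)[::-1][:k]    # indices of the k clearest images
    return [lr_images[i] for i in order], [masks[i] for i in order]
```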


In config_files/ you can place your configuration before training the model:

"lr" : learning rate
"batch_size" batch size
"skip_step": validation frequency,
"dataset_path": directory with training set and validation set created by means of Create_dataset.ipynb,
"n_chunks": number of pickles in which the training set is divided,
"channels": number of channels of input images,
"T_in": number of images per scene,
"R": upscale factor,
"full": use the full dataset with all images or the best 9 for each imageset,
"patch_size_HR": size of input images,
"border": border size to take into account shifts in the loss and psnr computation,
"spectral_band": NIR or RED,
"RegNet_pretrain_dir": directory with RegNet pretraining checkpoint,
"SISRNet_pretrain_dir": directory with SISRNet pretraining checkpoint,

Run DeepSUM_train.ipynb to train a MISR model on the training dataset just generated. If a tensorboard_dir directory is found in checkpoints/, training resumes from the latest checkpoint; otherwise the RegNet and SISRNet weights are initialized from the checkpoints in the pretraining_checkpoints/ directory. These weights come from the pretraining procedure described in the DeepSUM paper.
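The resume-or-pretrain decision described above boils down to plain path logic; a minimal sketch (the function and argument names here are hypothetical, not taken from the notebook):

```python
import os

def resolve_init(checkpoint_root, run_name, pretrain_dirs):
    """Decide where training should start from.

    Returns ("resume", run_dir) if a previous run exists under
    checkpoint_root, otherwise ("pretrain", pretrain_dirs) so the
    RegNet/SISRNet weights can be loaded from their pretraining checkpoints.
    """
    run_dir = os.path.join(checkpoint_root, run_name)
    if os.path.isdir(run_dir):
        return "resume", run_dir
    return "pretrain", pretrain_dirs
```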

Challenge checkpoints

DeepSUM has been trained for both the NIR and RED bands. The checkpoints/ directory contains the final weights used to produce the super-resolved test images for the final ESA challenge submission.




During training, only the best 9 images of each imageset contribute to the score. After training completes, you can run a final evaluation on the validation set that also exploits the remaining images in each imageset: run Sliding_window_evaluation.ipynb.
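The "border" config parameter relates to this scoring: the challenge metric compares the cropped super-resolved image against the HR target under small translations. A simplified sketch of such a shift-tolerant PSNR, assuming images normalized to [0, 1] and omitting the quality masks and brightness-bias correction used by the official PROBA-V score:

```python
import numpy as np

def shifted_psnr(sr, hr, border=3):
    """Best PSNR of the border-cropped SR image over all +/- border shifts.

    Simplified: ignores quality masks and the brightness-bias term of the
    official PROBA-V cPSNR. Assumes pixel values in [0, 1].
    """
    crop = sr[border:-border, border:-border]
    h, w = crop.shape
    best = -np.inf
    for dy in range(2 * border + 1):
        for dx in range(2 * border + 1):
            ref = hr[dy:dy + h, dx:dx + w]
            mse = np.mean((crop - ref) ** 2)
            psnr = 10 * np.log10(1.0 / mse) if mse > 0 else np.inf
            best = max(best, psnr)
    return best
```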


  • Run the Create_testset.ipynb notebook under dataset_creation/ to create the dataset with the test LR images
  • To test the trained model on new LR images and get the corresponding super-resolved images, run DeepSUM_superresolve_testdata.ipynb.
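If you then need to export the network's floating-point output as 16-bit data (the PROBA-V imagery is 16-bit), a small quantization sketch, assuming outputs normalized to [0, 1]:

```python
import numpy as np

def to_uint16(sr, max_val=65535):
    """Clip a [0, 1] float super-resolved image and quantize to 16 bits."""
    # Round first, then clip, so out-of-range floats cannot overflow uint16.
    return np.clip(np.round(sr * max_val), 0, max_val).astype(np.uint16)
```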

Authors & Contacts

DeepSUM is based on work by team SuperPip from the Image Processing and Learning group of Politecnico di Torino: Andrea Bordone Molini (andrea.bordone AT, Diego Valsesia (diego.valsesia AT, Giulia Fracastoro (giulia.fracastoro AT, Enrico Magli (enrico.magli AT
