This is the official repository for the paper Perceptual Deep Depth Super-Resolution. It contains the trained MSG-V models for x4 and x8 super-resolution and an IPython notebook with a usage example. It also contains an implementation of the MSEv loss function, together with a usage example.
To run the code you will need Python 3.7 and the packages from `environment.yml`. All of them can be installed via conda with

```
conda env create -f environment.yml
```
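Once the environment is created, a typical next step is to activate it and start Jupyter to open the example notebook. This is a sketch that assumes the environment defined in `environment.yml` is named `perceptual-depth-sr`; substitute the actual name from that file.

```
# Activate the conda environment created from environment.yml
# (the name below is an assumption; check environment.yml for the actual one).
conda activate perceptual-depth-sr

# Launch Jupyter and open the provided example notebook.
jupyter notebook
```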
Alternatively, you can build an Nvidia-Docker image with all required dependencies using the provided `Dockerfile`:
```
git clone https://github.com/voyleg/perceptual-depth-sr
cd perceptual-depth-sr
docker build -t perceptual-depth-sr .
```

and run Jupyter in the container:

```
nvidia-docker run --rm -it -p 8888:8888 --mount type=bind,source=$(pwd),target=/code perceptual-depth-sr bash -c 'cd /code && jupyter notebook --ip="*" --no-browser --allow-root'
```
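On Docker 19.03 or newer with the NVIDIA Container Toolkit installed, the legacy `nvidia-docker` wrapper can usually be replaced by the `--gpus` flag; the following is an equivalent sketch of the command above, not part of the original instructions.

```
# Same container run, using plain docker with GPU access via --gpus
# (requires Docker 19.03+ and the NVIDIA Container Toolkit).
docker run --gpus all --rm -it -p 8888:8888 \
    --mount type=bind,source=$(pwd),target=/code \
    perceptual-depth-sr \
    bash -c 'cd /code && jupyter notebook --ip="*" --no-browser --allow-root'
```

In either case the notebook server is published on port 8888, so it should be reachable at http://localhost:8888 on the host, using the access token printed in the container log.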
If you find this code useful, please cite the paper:

```
@inproceedings{voynov2019perceptual,
  title={Perceptual deep depth super-resolution},
  author={Voynov, Oleg and Artemov, Alexey and Egiazarian, Vage and Notchenko, Alexander and Bobrovskikh, Gleb and Burnaev, Evgeny and Zorin, Denis},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={5653--5663},
  year={2019}
}
```