Code for robust monocular depth estimation described in "Lasinger et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer, arXiv:1907.01341"


Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer

This repository contains code to compute depth from a single image. It accompanies our paper:

Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer
Katrin Lasinger, René Ranftl, Konrad Schindler, Vladlen Koltun

The pre-trained model corresponds to RW+MD+MV with MGDA enabled and movies sampled at 4 frames per second.

Setup

  1. Download the model weights model.pt and place the file in the root folder.

  2. Set up dependencies:

    conda install pytorch torchvision opencv

    The code was tested with Python 3.7, PyTorch 1.0.1, and OpenCV 3.4.2.
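
    To reproduce the tested configuration exactly, one option is an isolated environment with pinned versions. This is a sketch, not part of the repository: the environment name and the exact pin syntax below are illustrative, assuming a conda installation.

    ```shell
    # Illustrative: pin the versions the code was tested with.
    conda create -n depth python=3.7 -y
    conda activate depth
    conda install pytorch=1.0.1 torchvision opencv=3.4.2 -y
    ```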

Usage

  1. Place one or more input images in the folder input.

  2. Run the model:

    python run.py

  3. The resulting depth maps are written to the output folder.
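
The steps above reduce to pre-processing, a forward pass, and resizing the prediction back to the input resolution. The sketch below illustrates that flow only; the actual network class, working resolution, and normalization live in monodepth_net.py and run.py, so the function name and the 384x384 input size here are assumptions, not the repository's API.

```python
import numpy as np
import torch
import torch.nn.functional as F

def estimate_depth(model, image, net_size=384):
    """Sketch of the inference flow: HxWx3 uint8 RGB in, HxW float map out.

    `model` is any callable mapping a 1x3xSxS tensor to a 1x1xSxS
    prediction; the real entry point and preprocessing are in run.py.
    """
    # Convert to a float tensor in [0, 1] with NCHW layout.
    x = torch.from_numpy(image).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    # Resize to the network's working resolution (384 is an assumption here).
    x = F.interpolate(x, size=(net_size, net_size),
                      mode="bilinear", align_corners=False)
    with torch.no_grad():
        pred = model(x)
    # Resize the prediction back to the original image resolution.
    pred = F.interpolate(pred, size=image.shape[:2],
                         mode="bilinear", align_corners=False)
    return pred.squeeze(0).squeeze(0).cpu().numpy()
```

With the pretrained weights loaded, the returned map can then be normalized and written to the output folder as in step 3.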

Citation

Please cite our paper if you use this code in your research:

@article{Lasinger2019,
	author    = {Katrin Lasinger and Ren\'{e} Ranftl and Konrad Schindler and Vladlen Koltun},
	title     = {Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer},
	journal   = {arXiv:1907.01341},
	year      = {2019},
}

License

MIT License
