High Quality Monocular Depth Estimation via Transfer Learning (arXiv 2018)

Ibraheem Alhashim and Peter Wonka

Official Keras (TensorFlow) implementation. If you have any questions or need more help with the code, feel free to contact the first author.

[Update] Added a Colab notebook to try the method on the fly.

[Update] Experimental TensorFlow 2.0 implementation added.

[Update] Experimental PyTorch code added.

Results

  • KITTI (qualitative results figure)

  • NYU Depth V2 (qualitative results figure and comparison table)

Requirements

  • This code is tested with Keras 2.2.4, TensorFlow 1.13, and CUDA 9.0 on a machine with an NVIDIA Titan V and 16GB+ RAM, running Windows 10 or Ubuntu 16.
  • Other required packages: keras, pillow, matplotlib, scikit-learn, scikit-image, opencv-python, pydot, and GraphViz (for model graph visualization), plus PyGLM, PySide2, and pyopengl for the GUI demo.
  • Minimum hardware tested for inference: NVIDIA GeForce 940MX (laptop) / NVIDIA GeForce GTX 950 (desktop).
  • Training takes about 24 hours on a single NVIDIA TITAN RTX with batch size 8.

Pre-trained Models

Demos

  • After downloading the pre-trained model (nyu.h5), run python test.py. You should see a montage of images with their estimated depth maps. (A minimal scripting sketch follows the demo preview below.)
  • [Update] A Qt demo showing 3D point clouds from the webcam or an image. Simply run python demo.py. It requires the packages PyGLM, PySide2, and pyopengl.

RGBD point cloud demo (animated preview)
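If you want to call the model from your own script rather than through test.py, the following is a minimal inference sketch modeled on test.py. It assumes the repo's layers.py and utils.py are importable from the working directory and that the helper names (load_images, predict, display_images) match the current code; treat it as illustrative rather than the canonical entry point.

```python
# Minimal inference sketch (mirrors test.py; helper names may differ across versions).
import glob
from keras.models import load_model
from layers import BilinearUpSampling2D          # repo's custom upsampling layer
from utils import predict, load_images, display_images
from matplotlib import pyplot as plt

# The custom layer must be registered so Keras can deserialize nyu.h5.
custom_objects = {'BilinearUpSampling2D': BilinearUpSampling2D, 'depth_loss_function': None}
model = load_model('nyu.h5', custom_objects=custom_objects, compile=False)

# Load the example RGB images and predict their depth maps.
inputs = load_images(glob.glob('examples/*.png'))
outputs = predict(model, inputs)

# Show a montage of inputs next to their estimated depths.
viz = display_images(outputs.copy(), inputs.copy())
plt.figure(figsize=(10, 5))
plt.imshow(viz)
plt.show()
```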

Data

  • NYU Depth V2 (50K) (4.1 GB): You don't need to extract the dataset, since the code loads the entire zip file into memory during training (see the loading sketch after this list).
  • KITTI: copy the raw data to a folder with the path '../kitti'. Our method expects dense input depth maps; therefore, you need to run a depth inpainting method on the LiDAR data. For our experiments, we used our Python re-implementation of the MATLAB code provided with the NYU Depth V2 toolbox. Inpainting the entire 80K images took 2 hours on an 80-node cluster. For our training, we used the subset defined here.
  • Unreal-1k: coming soon.
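For reference, below is a minimal sketch of how a dataset zip can be read entirely into memory with Python's standard zipfile module. This is not the repo's actual loader (data.py also decodes the RGB/depth pairs and builds the training generators), and the file name nyu_data.zip is an assumption.

```python
# Sketch of reading a dataset zip entirely into memory, as the training code does.
from io import BytesIO
from zipfile import ZipFile

def load_zip_to_memory(zip_path):
    """Return a dict mapping archive entry names to their raw bytes."""
    with open(zip_path, 'rb') as f:
        archive = ZipFile(BytesIO(f.read()))
    return {name: archive.read(name) for name in archive.namelist()}

data = load_zip_to_memory('nyu_data.zip')   # file name is an assumption
print(len(data), 'entries loaded')          # RGB/depth pairs are decoded later, during training
```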

Training

  • Run python train.py --data nyu --gpus 4 --bs 8.
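As a rough illustration of what the --gpus flag corresponds to under Keras 2.2.4, the sketch below wraps a model with keras.utils.multi_gpu_model so each batch is split across the replicas. The create_model import is an assumption about the repo's model.py, and the compile call uses a placeholder loss; the actual training script defines its own depth loss and data pipeline, and this sketch needs 4 GPUs to run.

```python
# Sketch of multi-GPU data parallelism under Keras 2.2.4 (not the repo's full training loop).
from keras.utils import multi_gpu_model
from model import create_model            # assumed helper from the repo; name may differ

model = create_model()
parallel_model = multi_gpu_model(model, gpus=4)      # replicate on 4 GPUs, split each batch
parallel_model.compile(optimizer='adam', loss='mae')  # placeholder loss; the repo uses its own depth loss
# parallel_model.fit(..., batch_size=8)   # the effective batch of 8 is split across the GPUs
```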

Evaluation

  • Download, but don't extract, the ground truth test data from here (1.4 GB). Then simply run python evaluate.py.
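For context, evaluate.py reports the standard monocular depth metrics from the paper (δ accuracy thresholds, absolute relative error, RMSE, log10). The snippet below is a self-contained NumPy reference for those metrics, not the repo's evaluation code; shapes and valid-depth handling are simplified assumptions.

```python
# Reference implementation of the standard monocular-depth error metrics.
import numpy as np

def compute_errors(gt, pred):
    """gt, pred: same-shape arrays of ground-truth and predicted depths in meters."""
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = (thresh < 1.25).mean()
    d2 = (thresh < 1.25 ** 2).mean()
    d3 = (thresh < 1.25 ** 3).mean()
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    log10 = np.mean(np.abs(np.log10(gt) - np.log10(pred)))
    return d1, d2, d3, abs_rel, rmse, log10

# Toy usage: compare a noisy prediction against synthetic ground truth.
gt = np.random.uniform(0.5, 10.0, size=(480, 640))
pred = np.clip(gt + np.random.normal(0, 0.1, size=gt.shape), 1e-3, None)
print(compute_errors(gt, pred))
```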

Reference

Corresponding paper to cite:

@article{Alhashim2018,
  author    = {Ibraheem Alhashim and Peter Wonka},
  title     = {High Quality Monocular Depth Estimation via Transfer Learning},
  journal   = {arXiv e-prints},
  volume    = {abs/1812.11941},
  year      = {2018},
  url       = {https://arxiv.org/abs/1812.11941},
  eid       = {arXiv:1812.11941},
  eprint    = {1812.11941}
}