SfMLearner

This codebase (in progress) implements the system described in the paper:

Unsupervised Learning of Depth and Ego-Motion from Video

Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe

In CVPR 2017 (Oral).

See the project webpage for more details. Please contact Tinghui Zhou (tinghuiz@berkeley.edu) if you have any questions.

Prerequisites

This codebase was developed and tested with TensorFlow 1.0, CUDA 8.0, and Ubuntu 16.04.

Running the single-view depth demo

We provide demo code for running our single-view depth prediction model. First, download the pre-trained model by running the following command:

bash ./models/download_model.sh

Then you can use the provided Jupyter notebook demo.ipynb to run the demo.
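
For reference, the demo follows roughly the pattern sketched below: build the depth-inference graph, restore the downloaded checkpoint, and run a single image through the network. This is a minimal sketch assuming the SfMLearner class and its setup_inference/inference methods from SfMLearner.py; the image path, checkpoint name, and input resolution are illustrative and may differ from the actual notebook.

# Minimal single-view depth inference sketch (TensorFlow 1.x).
# Checkpoint path, sample image, and input size are illustrative.
import numpy as np
import PIL.Image as pil
import tensorflow as tf

from SfMLearner import SfMLearner

img_height, img_width = 128, 416       # input resolution of the pre-trained model (illustrative)
ckpt_file = 'models/model-190532'      # checkpoint fetched by download_model.sh (illustrative)

# Build the depth-inference graph.
sfm = SfMLearner()
sfm.setup_inference(img_height, img_width, mode='depth')

# Load a test image and resize it to the network's input resolution.
I = pil.open('misc/sample.png')
I = np.array(I.resize((img_width, img_height), pil.ANTIALIAS))

saver = tf.train.Saver([var for var in tf.model_variables()])
with tf.Session() as sess:
    saver.restore(sess, ckpt_file)
    pred = sfm.inference(I[None, :, :, :], sess, mode='depth')

# pred['depth'] holds the predicted depth map for the input image.
print(pred['depth'].squeeze().shape)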

TODO List (after NIPS deadline)

  • Full training code for Cityscapes and KITTI.
  • Evaluation code for the KITTI experiments.

Disclaimer

This is the authors' implementation of the system described in the paper and not an official Google product.
