Anytime Stereo Image Depth Estimation on Mobile Devices

This repository contains the code (in PyTorch) for AnyNet, introduced in the following paper:

Anytime Stereo Image Depth Estimation on Mobile Devices

by Yan Wang∗, Zihang Lai∗, Gao Huang, Brian Wang, Laurens van der Maaten, Mark Campbell and Kilian Q. Weinberger


Citation

@article{wang2018anytime,
  title={Anytime Stereo Image Depth Estimation on Mobile Devices},
  author={Wang, Yan and Lai, Zihang and Huang, Gao and Wang, Brian H. and Van Der Maaten, Laurens and Campbell, Mark and Weinberger, Kilian Q},
  journal={arXiv preprint arXiv:1810.11408},
  year={2018}
}

Contents

  1. Introduction
  2. Usage
  3. Results
  4. Contacts

Introduction

Many real-world applications of stereo depth estimation in robotics require the generation of disparity maps in real time on low power devices. Depth estimation should be accurate, e.g. for mapping the environment, and real-time, e.g. for obstacle avoidance. Current state-of-the-art algorithms can either generate accurate but slow, or fast but high-error mappings, and typically have far too many parameters for low-power/memory devices. Motivated by this shortcoming we propose a novel approach for disparity prediction in the anytime setting. In contrast to prior work, our end-to-end learned approach can trade off computation and accuracy at inference time. The depth estimation is performed in stages, during which the model can be queried at any time to output its current best estimate. In the first stage it processes a scaled down version of the input images to obtain an initial low resolution sketch of the disparity map. This sketch is then successively refined with higher resolution details until a full resolution, high quality disparity map emerges. Here, we leverage the fact that disparity refinements can be performed extremely fast as the residual error is bounded by only a few pixels. Our final model can process 1242×375 resolution images within a range of 10-35 FPS on an NVIDIA Jetson TX2 module with only marginal increases in error – using two orders of magnitude fewer parameters than the most competitive baseline.
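
To make the staged, anytime behavior concrete, below is a minimal PyTorch-style sketch of the inference loop. It is illustrative only: the stage modules and the call signature stage(left, right, disparity) are assumptions made for this sketch, not the actual API implemented in models/.

    import torch.nn.functional as F

    def anytime_disparity(stages, left, right):
        # Yield progressively refined disparity maps, coarse to fine.
        # stages[0] predicts a coarse map from downsampled inputs; each
        # later stage predicts a small residual correction on top of an
        # upsampled copy of the previous estimate.
        disparity = None
        for stage in stages:
            if disparity is None:
                disparity = stage(left, right, None)
            else:
                # Doubling the spatial resolution also doubles disparity
                # values, hence the factor of 2. The residual each stage
                # corrects is bounded by a few pixels, keeping it cheap.
                disparity = 2 * F.interpolate(disparity, scale_factor=2,
                                              mode='bilinear',
                                              align_corners=False)
                disparity = disparity + stage(left, right, disparity)
            yield disparity  # best estimate so far

A caller with a time budget simply iterates until the deadline and keeps the last disparity map produced.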

Usage

  1. Install dependencies (see the list below).
  2. Generate the soft links for the Scene Flow dataset. You need to set scenflow_data_path in create_dataset.sh to the actual Scene Flow path before running it; a quick sanity check of the resulting links follows this list.
     sh ./create_dataset.sh
    
  3. Compile SPNet if SPN refinement is needed (change the NVCC path in make.sh if necessary).
     cd models/spn
     sh make.sh
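
After running create_dataset.sh, a quick way to confirm that the links point at real data is to check for the usual Scene Flow subdirectories. The root path and directory names below are assumptions; adjust them to whatever the script actually produces on your machine.

    import os

    # Assumed location of the soft links created by create_dataset.sh;
    # the subdirectory names follow the common Scene Flow layout.
    root = 'dataset'
    for sub in ('frames_finalpass', 'disparity'):
        path = os.path.join(root, sub)
        print(path, '->', 'OK' if os.path.isdir(path) else 'MISSING')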
    

Dependencies

  - Python with PyTorch (the code is written in PyTorch)
  - CUDA and NVCC, if you compile SPNet for the optional SPN refinement
  - The Scene Flow dataset (for pretraining) and the KITTI 2015 dataset (for fine-tuning)

Train

First, pretrain AnyNet on the Scene Flow dataset:

python main.py --maxdisp 192 --with_spn

Then, fine-tune AnyNet on KITTI 2015:

python finetune.py --maxdisp 192 --with_spn --datapath path-to-kitti2015/training/
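
Fine-tuning is intended to start from the Scene Flow checkpoint produced by the pretraining step. How that checkpoint is passed in depends on finetune.py's argument parser; the --pretrained flag and checkpoint path below are hypothetical, shown only to illustrate the workflow:

    # --pretrained is a hypothetical flag; check finetune.py for the real name
    python finetune.py --maxdisp 192 --with_spn --datapath path-to-kitti2015/training/ --pretrained path-to-sceneflow-checkpoint.tar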

Results

Figure: KITTI 2012 results.