Guiding Monocular Depth Estimation Using Depth Attention-Volume


This repository contains the PyTorch implementation of our ECCV 2020 paper:

Guiding Monocular Depth Estimation Using Depth Attention-Volume
Lam Huynh, Phong Nguyen-Ha, Jiří Matas, Esa Rahtu, Janne Heikkilä
University of Oulu, Tampere University, Czech Technical University in Prague

| Project page | arXiv | Demo video |

Abstract

Recovering the scene depth from a single image is an ill-posed problem that requires additional priors, often referred to as monocular depth cues, to disambiguate different 3D interpretations. In recent works, those priors have been learned from large datasets in an end-to-end manner using deep neural networks. In this paper, we propose guiding depth estimation to favor planar structures, which are especially ubiquitous in indoor environments. This is achieved by incorporating a non-local coplanarity constraint into the network with a novel attention mechanism called depth-attention volume (DAV). Experiments on two popular indoor datasets, NYU-Depth-v2 and ScanNet, show that our method achieves state-of-the-art depth estimation results while using only a fraction of the parameters required by competing methods.
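To make the coplanarity prior concrete: two pixels are coplanar if their back-projected 3D points satisfy the same plane equation n·p + d = 0. The sketch below illustrates this standard geometry only; the intrinsics and plane values are made-up assumptions for the example, and this is not the paper's exact DAV construction.

```python
# Illustrative sketch (not the paper's formulation): back-project pixels to 3D
# with pinhole intrinsics and measure how well points fit a given plane.
# All values (fx, fy, cx, cy, plane) are assumptions for this demo.
import torch

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with depth z to a 3D point in camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return torch.stack([x, y, depth], dim=-1)

def coplanarity_residual(points, plane):
    """Signed distance of 3D points to a plane (n, d) with unit normal n.

    Points sharing a plane have residuals near zero; this pairwise relation is
    the kind of non-local cue the depth-attention volume is meant to capture.
    """
    n, d = plane[:3], plane[3]
    return points @ n + d

# Toy usage: check two back-projected points against a hypothetical plane.
fx = fy = 500.0
cx = cy = 320.0
p1 = backproject(torch.tensor(100.0), torch.tensor(400.0), torch.tensor(2.0), fx, fy, cx, cy)
p2 = backproject(torch.tensor(500.0), torch.tensor(380.0), torch.tensor(3.0), fx, fy, cx, cy)
plane = torch.tensor([0.0, 1.0, 0.0, -1.5])  # n . p + d = 0, i.e. y = 1.5
print(coplanarity_residual(torch.stack([p1, p2]), plane))
```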

Network architecture

The pipeline of our proposed network: an image is passed through the encoder, then the non-local depth-attention module, and finally the decoder to produce the estimated depth map. The model is trained using the L_attention and L_depth losses, which are described in the paper.

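As a rough sketch of how such a pipeline is typically wired in PyTorch (the Encoder, DAVModule, and Decoder classes and the weighting factor `lam` here are our assumptions, not the repository's actual API):

```python
# Minimal training-step sketch, assuming hypothetical Encoder, DAVModule and
# Decoder classes; the L1 stand-ins and the weight `lam` are also assumptions.
# See the paper for the exact definitions of L_attention and L_depth.
import torch
import torch.nn.functional as F

def training_step(encoder, dav_module, decoder, image, gt_depth, gt_attention, lam=1.0):
    feats = encoder(image)                        # image -> feature map
    attended, pred_attention = dav_module(feats)  # non-local depth-attention module
    pred_depth = decoder(attended)                # features -> estimated depth map

    l_depth = F.l1_loss(pred_depth, gt_depth)              # stand-in for L_depth
    l_attention = F.l1_loss(pred_attention, gt_attention)  # stand-in for L_attention
    return l_depth + lam * l_attention
```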

Detailed structure of the depth-attention module.

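The depth-attention module can be read as a non-local attention block that relates every pixel to every other pixel. Below is a generic sketch in that spirit; the layer sizes, the softmax normalization, and the residual connection are assumptions, not the repository's implementation.

```python
# Generic non-local attention block in the spirit of the depth-attention
# module; all architectural details here are assumptions for illustration.
import torch
import torch.nn as nn

class NonLocalDepthAttention(nn.Module):
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or max(channels // 2, 1)
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c')
        k = self.key(x).flatten(2)                    # (b, c', hw)
        # Pairwise attention over all spatial positions: one (hw x hw)
        # "volume" per image, relating every pixel to every other pixel.
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)  # (b, hw, hw)
        v = self.value(x).flatten(2).transpose(1, 2)  # (b, hw, c)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out, attn  # residual features + attention volume
```

Returning the attention volume alongside the enhanced features is what would allow it to be supervised directly by an attention loss, as in the training-step sketch above.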

Results

Evaluation on the NYU-Depth-v2 test set.


Comparison of parameter count versus model performance.


Qualitative results on NYU-Depth-v2.


Cross-dataset evaluation on SUN-RGBD.


Results video on unseen real-world data:

Unseen data
