jhornauer/GrUMoDepth


Gradient-based Uncertainty for Monocular Depth Estimation

This repository contains the official implementation of our ECCV 2022 paper.

Overview

Requirements

We provide the environment.yml file with the required packages. The file can be used to create an Anaconda environment, e.g. with conda env create -f environment.yml.

Prepare monodepth2:

  1. Download the monodepth2 repository: Monodepth2
  2. Replace from kitti_utils import generate_depth_map with from ..kitti_utils import generate_depth_map in the file monodepth2/kitti_dataset.py in line 14.
  3. Replace from layers import * with from ..layers import * in the file monodepth2/networks/depth_decoder.py in line 14.
  4. Replace class MonodepthOptions: with class MonodepthOptions(object): in the file monodepth2/options.py in line 15.
  5. Add import sys and sys.path.append("monodepth2") to the file monodepth2/trainer.py before from utils import *.

Datasets

We conduct our evaluations on the datasets NYU Depth V2 and KITTI. NYU Depth V2 is downloaded as provided by FastDepth into the folder nyu_data. KITTI is downloaded according to the instructions from mono-uncertainty into the folder kitti_data.

Pre-trained Models

We conduct experiments on already trained depth estimation models. The pre-trained models are trained on KITTI with monocular and stereo supervision in a self-supervised manner, and on NYU Depth V2 in a supervised manner. In the case of KITTI, we rely on the already trained models from mono-uncertainty. Please follow their instructions to download the respective model weights. Our models trained on NYU Depth V2 can be downloaded from the following link: NYU Models. The models can be trained with the following command:

python3 train_supervised.py --data_path nyu_data --width 288 --height 224 --max_depth 10 --dataset nyu  

To train the log-likelihood maximization model use the additional option --uncert. To train the MC Dropout model use the additional option --dropout.

Run Code

We conduct our experiments on KITTI (self-supervised) and NYU Depth V2 (supervised) data.

As explained in our paper, we apply our training-free uncertainty estimation method to already trained models. Therefore, we have different base models and compare the uncertainty estimation approaches on the base models.
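The core idea can be sketched as follows. This is a toy illustration only, not the repository's code: the actual method backpropagates an auxiliary loss between the predicted depth and a reference depth obtained from an augmented input, and reads the uncertainty from gradient magnitudes of a decoder layer; here a dummy network and gradients with respect to the input stand in for both.

```python
import torch
import torch.nn as nn

# Dummy depth network standing in for the monodepth2 encoder/decoder.
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))

def gradient_uncertainty(net, image):
    # Reference depth: predict on the horizontally flipped image, flip back.
    with torch.no_grad():
        ref = torch.flip(net(torch.flip(image, dims=[3])), dims=[3])
    image = image.clone().requires_grad_(True)
    depth = net(image)
    # Auxiliary loss between prediction and reference; no labels needed.
    loss = (depth - ref).abs().mean()
    grad, = torch.autograd.grad(loss, image)
    # Per-pixel gradient magnitude serves as the uncertainty map.
    return grad.abs().sum(dim=1, keepdim=True)

uncert = gradient_uncertainty(net, torch.rand(1, 3, 32, 32))
```

Because the loss needs no ground truth, this works on any already trained model without retraining.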

Evaluation Self-Supervised

In the self-supervised case, we have the base models MC Dropout (Drop), Bootstrapped Ensembles (Boot), Post-Processing (Post), Log-likelihood Maximization (Log) and Self-Teaching (Self). We compare the post hoc uncertainty estimation approaches on Post, Log and Self. As post hoc uncertainty estimation approaches we consider the variance over different test-time augmentations (Var), inference-only dropout (In-Drop) and our approach.
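The Var baseline can be illustrated roughly as follows (a sketch with a dummy model and only two augmentations; the repository's --var_aug implementation may use a different augmentation set):

```python
import torch
import torch.nn as nn

# Dummy depth network for illustration.
net = nn.Sequential(nn.Conv2d(3, 4, 3, padding=1), nn.Conv2d(4, 1, 3, padding=1))

def var_uncertainty(net, image):
    # Predictions under test-time augmentations: identity and horizontal
    # flip (flipped back into the original frame before comparing).
    with torch.no_grad():
        preds = torch.stack([
            net(image),
            torch.flip(net(torch.flip(image, dims=[3])), dims=[3]),
        ])
    # Per-pixel variance across augmentations is the uncertainty map.
    return preds.var(dim=0, unbiased=False)

u = var_uncertainty(net, torch.rand(1, 3, 16, 16))
```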

For the evaluation of the Drop and Boot models, as well as the Log and Self base models, please refer to mono-uncertainty.

Evaluation of our gradient-based uncertainty estimation method:

For the evaluation of our method on the Post base model run:

python3 generate_maps.py --data_path kitti_data --load_weights_folder weights/S/Monodepth2-Post/models/weights_19/ --eval_split eigen_benchmark --eval_stereo --output_dir experiments/S/post_model/Grad --grad
python3 evaluate.py --ext_disp_to_eval experiments/S/post_model/Grad/raw/ --eval_stereo --max_depth 80 --eval_split eigen_benchmark --eval_uncert --output_dir experiments/S/post_model/Grad --grad

For the evaluation of our method on the Log base model run:

python3 generate_maps.py --data_path kitti_data --load_weights_folder weights/S/Monodepth2-Log/models/weights_19/ --eval_split eigen_benchmark --eval_stereo --output_dir experiments/S/log_model/Grad --uncert --grad --w 2.0 
python3 evaluate.py --ext_disp_to_eval experiments/S/log_model/Grad/raw/ --eval_stereo --max_depth 80 --eval_split eigen_benchmark --eval_uncert --output_dir experiments/S/log_model/Grad --grad --uncert --w 2.0

For the evaluation of our method on the Self base model run:

python3 generate_maps.py --data_path kitti_data --load_weights_folder weights/S/Monodepth2-Self/models/weights_19/ --eval_split eigen_benchmark --eval_stereo --output_dir experiments/S/self_model/Grad --uncert --grad --w 2.0 
python3 evaluate.py --ext_disp_to_eval experiments/S/self_model/Grad/raw/ --eval_stereo --max_depth 80 --eval_split eigen_benchmark --eval_uncert --output_dir experiments/S/self_model/Grad --grad --uncert --w 2.0

To change the decoder layer for the gradient extraction use the argument --ext_layer with values between 0 and 10.

To change the augmentation for the generation of the reference depth use the argument --gref with one of the values flip, gray, noise or rot.
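The four reference augmentations could look roughly like this (illustrative only; the repository's exact transforms and parameters may differ):

```python
import torch

def augment(image, mode):
    # image: (B, C, H, W) tensor with values in [0, 1]
    if mode == "flip":   # horizontal flip
        return torch.flip(image, dims=[3])
    if mode == "gray":   # channel-mean grayscale, broadcast back to 3 channels
        return image.mean(dim=1, keepdim=True).expand_as(image)
    if mode == "noise":  # additive Gaussian noise
        return (image + 0.01 * torch.randn_like(image)).clamp(0, 1)
    if mode == "rot":    # 90-degree rotation
        return torch.rot90(image, k=1, dims=[2, 3])
    raise ValueError(f"unknown mode: {mode}")
```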

Evaluation of In-Drop method:

For the evaluation of the In-Drop method on the Post base model run:

python3 generate_maps.py --data_path kitti_data --load_weights_folder weights/S/Monodepth2-Post/models/weights_19/ --eval_split eigen_benchmark --eval_stereo --output_dir experiments/S/post_model/In-Drop --infer_dropout
python3 evaluate.py --ext_disp_to_eval experiments/S/post_model/In-Drop/raw/ --eval_stereo --max_depth 80 --eval_split eigen_benchmark --eval_uncert --output_dir experiments/S/post_model/In-Drop --infer_dropout

To change the dropout probability use the argument --infer_p with values between 0.0 and 1.0. Default is 0.2.
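Inference-only dropout can be sketched like this (an assumption about the mechanism; the enable_inference_dropout helper is illustrative, and the repository's --infer_dropout option may wire dropout in differently):

```python
import torch.nn as nn

def enable_inference_dropout(model, p=0.2):
    # Keep the model in eval mode, but switch only the dropout layers back
    # to train mode so they keep sampling stochastically at test time.
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.p = p
            m.train()
    return model

net = enable_inference_dropout(
    nn.Sequential(nn.Linear(4, 4), nn.Dropout(0.5), nn.Linear(4, 1)))
```

Running several stochastic forward passes then gives a per-pixel variance that can be used as the uncertainty.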

Evaluation of the Var method:

For the evaluation of the Var method on the Post base model run:

python3 generate_maps.py --data_path kitti_data --load_weights_folder weights/S/Monodepth2-Post/models/weights_19/ --eval_split eigen_benchmark --eval_stereo --output_dir experiments/S/post_model/Var --var_aug
python3 evaluate.py --ext_disp_to_eval experiments/S/post_model/Var/raw/ --eval_stereo --max_depth 80 --eval_split eigen_benchmark --eval_uncert --output_dir experiments/S/post_model/Var --var_aug

For the evaluation of the models trained with monocular supervision replace the folder S with M and the argument --eval_stereo with --eval_mono.

Evaluation Supervised

In the supervised case, we have the base models MC Dropout (Drop), Post-Processing (Post) and Log-likelihood Maximization (Log). We compare the post hoc uncertainty estimation approaches on Post and Log. As post hoc uncertainty estimation approaches we consider the variance over different test-time augmentations (Var), inference-only dropout (In-Drop) and our approach.

Evaluation of our gradient-based uncertainty estimation method:

For the evaluation of our method on the Post base model run:

python3 evaluate_supervised.py --max_depth 10 --load_weights_folder weights/NYU/Monodepth2/weights/ --data_path nyu_data --eval_uncert --output_dir experiments/NYU/post_model/Grad/ --grad 

For the evaluation of our method on the Log base model run:

python3 evaluate_supervised.py --max_depth 10 --load_weights_folder weights/NYU/Monodepth2-Log/weights/ --data_path nyu_data --eval_uncert --output_dir experiments/NYU/log_model/Grad/ --grad --uncert --w 2.0

Evaluation of the Drop model:

For the evaluation of the Drop model run:

python3 evaluate_supervised.py --max_depth 10 --load_weights_folder weights/NYU/Monodepth2-Drop/weights/ --data_path nyu_data --eval_uncert --output_dir experiments/NYU/drop_model/Drop/ --dropout

Evaluation of In-Drop method:

For the evaluation of the In-Drop method on the Post base model run:

python3 evaluate_supervised.py --max_depth 10 --load_weights_folder weights/NYU/Monodepth2/weights/ --data_path nyu_data --eval_uncert --output_dir experiments/NYU/post_model/In-Drop/ --infer_dropout

To change the dropout probability use the argument --infer_p with values between 0.0 and 1.0. Default is 0.2.

Evaluation of the Var method:

For the evaluation of the Var method on the Post base model run:

python3 evaluate_supervised.py --max_depth 10 --load_weights_folder weights/NYU/Monodepth2/weights/ --data_path nyu_data --eval_uncert --output_dir experiments/NYU/post_model/Var/ --var_aug

Reference

Please use the following citation when referencing our work:

Gradient-based Uncertainty for Monocular Depth Estimation
Julia Hornauer and Vasileios Belagiannis [paper]

@inproceedings{Hornauer2022GradientbasedUF,
  title={Gradient-based Uncertainty for Monocular Depth Estimation},
  author={Julia Hornauer and Vasileios Belagiannis},
  booktitle={European Conference on Computer Vision},
  year={2022}
}

Acknowledgement

We used and modified code parts from the open source projects monodepth2 and mono-uncertainty. We would like to thank the authors for making their code publicly available.
