This repo contains the code and data for Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios by Tobias Gruber, Mario Bijelic, Felix Heide, Werner Ritter and Klaus Dietmayer.

NEWS: Code and data are available now!


This work presents an evaluation benchmark for depth estimation and completion using high-resolution depth measurements with an angular resolution of up to 25" (arcseconds), akin to a 50-megapixel camera with per-pixel depth available. Existing datasets, such as the KITTI benchmark, provide only sparse reference measurements with an order of magnitude lower angular resolution, yet these sparse measurements are treated as ground truth by existing depth estimation methods. We propose an evaluation in four characteristic automotive scenarios recorded in varying weather conditions (day, night, fog, rain). As a result, our benchmark allows evaluating the robustness of depth sensing methods under adverse weather and different driving conditions. Using the proposed evaluation data, we show that current stereo approaches provide significantly more stable depth estimates than monocular methods and lidar completion in adverse weather.

Dataset overview

Some results

Getting started

Clone the benchmark code.

git clone
cd PixelAccurateDepthBenchmark

Running the evaluation and visualization code requires Python with the following packages:

  • numpy
  • opencv-python (imported as cv2)
  • matplotlib
  • scipy

We provide a conda environment to run our code.

conda env create -f environment.yaml

Activate conda environment.

conda activate PixelAccurateDepthBenchmark

Download the benchmark data from the DENSE dataset webpage.

Check that all files have been downloaded completely, then unzip them.

bash scripts/ <your_download_folder>

Reproduce the results from our paper by running:

python src/ --approach lidar_hdl64_rgb_left
python src/ --approach sgm
python src/ --approach monodepth
python src/ --approach sparse2dense

All visualizations, including reference RGB, lidar, and high-resolution ground truth, can be generated by running:

python src/ --approach lidar_hdl64_rgb_left
python src/ --approach sgm
python src/ --approach monodepth
python src/ --approach sparse2dense

Apply your algorithm to the benchmark dataset and save each predicted depth map as an npz file in a folder <your_approach>.

import numpy as np
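Expanding on the import above, a minimal sketch of saving one result; the folder layout, file name, and array key `arr_0` are assumptions here, so match the naming of the benchmark's own npz files on disk:

```python
import os
import numpy as np

# Save one predicted depth map (in meters) as a compressed npz file.
# Folder name, file name, and array key are placeholders.
os.makedirs('your_approach', exist_ok=True)
depth = np.zeros((1024, 1920), dtype=np.float32)  # your predicted depth map
np.savez_compressed(os.path.join('your_approach', 'frame_000000.npz'), arr_0=depth)
```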

Run the quantitative evaluation.

python src/ --approach <your_approach>
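As an illustration of what such an evaluation computes, here is a sketch of KITTI-style depth metrics on valid ground-truth pixels; the exact metric set in the benchmark's evaluation code may differ:

```python
import numpy as np

def depth_metrics(pred, gt):
    # Evaluate only pixels that carry a ground-truth measurement.
    valid = gt > 0
    p, g = pred[valid], gt[valid]
    rmse = float(np.sqrt(np.mean((p - g) ** 2)))       # root mean squared error
    mae = float(np.mean(np.abs(p - g)))                # mean absolute error
    ratio = np.maximum(p / g, g / p)
    delta1 = float(np.mean(ratio < 1.25))              # fraction within 25%
    return {'rmse': rmse, 'mae': mae, 'delta1': delta1}

# Tiny self-check: a prediction that is 1 m off everywhere.
gt = np.full((4, 4), 10.0)
m = depth_metrics(gt + 1.0, gt)
print(m)
```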

Get qualitative visualizations.

python src/ --approach <your_approach>

If any special loading of the data is required, adapt the load_depth() function accordingly.

Sensor setup

Load RGB image

The RGB image is already debayered, rectified, and converted to 8-bit, and can be loaded with:

import cv2

Load lidar

The lidar point clouds are already projected into the RGB camera frame and can be loaded with:

import numpy as np
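Building on the import above, a sketch of loading a projected lidar depth map; the file name and array key are assumptions, and a dummy file is written first so the snippet runs stand-alone. Pixels without a lidar return are zero:

```python
import numpy as np

# Write a dummy projected lidar depth map with a single fake return at 12.5 m.
demo = np.zeros((256, 512), dtype=np.float32)
demo[100, 200] = 12.5
np.savez('lidar_frame_000000.npz', arr_0=demo)

lidar = np.load('lidar_frame_000000.npz')['arr_0']
valid = lidar > 0  # mask of pixels that carry a lidar measurement
print(lidar.shape, int(valid.sum()))
```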

Load high-resolution ground truth

The high-resolution ground truth is provided for each scenario and can be loaded with:

import numpy as np
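Building on the import above, a sketch of loading the high-resolution ground truth; the file name and array key are assumptions, and a dummy file is written first so the snippet runs stand-alone:

```python
import numpy as np

# Write a dummy ground-truth depth map (10 m everywhere) in place of a
# downloaded benchmark file.
np.savez('gt_frame_000000.npz', arr_0=np.full((256, 512), 10.0, dtype=np.float32))

gt = np.load('gt_frame_000000.npz')['arr_0']
print(gt.shape, float(gt.min()), float(gt.max()))  # shape and depth range in meters
```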

Model finetuning

See Gated2Depth for finetuning your models to our sensor setup.


If you find our work on benchmarking depth algorithms useful in your research, please consider citing our paper:

@inproceedings{gruber2019pixel,
  title     = {Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios},
  author    = {Gruber, Tobias and Bijelic, Mario and Heide, Felix and Ritter, Werner and Dietmayer, Klaus},
  booktitle = {International Conference on 3D Vision (3DV)},
  year      = {2019}
}


This work has received funding from the European Union under the H2020 ECSEL Programme as part of the DENSE project, contract number 692449.

