DBNet: A Large-Scale Dataset for Driving Behavior Learning

DBNet is a large-scale driving behavior dataset that provides high-quality point clouds scanned by Velodyne lasers, high-resolution videos recorded by dashboard cameras, and standard drivers' behaviors (vehicle speed, steering angle) collected by real-time sensors.

Extensive experiments demonstrate that the extra depth information indeed helps networks determine driving policies. We hope DBNet will become a useful resource for the autonomous driving research community.

Created by Yiping Chen*, Jingkang Wang*, Jonathan Li, Cewu Lu, Zhipeng Luo, HanXue and Cheng Wang. (*equal contribution)

The resources of our work are available: [paper], [code], [video], [website], [challenge]

News!

DBNet Autonomous Driving Data (prepared & raw) are released here!

We are going to organize DBNet challenges for CVPR/ICCV/ECCV workshops. The instructions for the DBNet-2018 challenge will be released soon. Stay tuned!

Contents

  1. Introduction
  2. Requirements
  3. Quick Start
  4. Baseline
  5. Contributors
  6. Citation
  7. License

Introduction

This work is based on our research paper, which appears in CVPR 2018. We propose a large-scale dataset for driving behavior learning, namely DBNet. You can also check our dataset webpage for a more detailed introduction.

In this repository, we release demo code and partially prepared data for training with images only, as well as for leveraging feature maps or point clouds. The prepared data are accessible here. (More demo models and scripts will be released soon!)
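
For readers new to the LiDAR side of the data, the snippet below is a minimal sketch of loading one point cloud with the laspy library (the 1.x API, which was current when this repository was released); the path data/sample.las is only a placeholder, and the actual layout of the prepared data may differ.

import numpy as np
import laspy  # laspy 1.x API

# Placeholder path; substitute an actual .las file from the prepared data.
las_file = laspy.file.File("data/sample.las", mode="r")
points = np.vstack((las_file.x, las_file.y, las_file.z)).T  # (N, 3) xyz coordinates
print("loaded %d points" % points.shape[0])
las_file.close()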

Requirements

  • Tensorflow 1.2.0
  • Python 2.7
  • CUDA 8.0+ (For GPU)
  • Python Libraries: numpy, scipy and laspy

The code has been tested with Python 2.7, Tensorflow 1.2.0, CUDA 8.0 and cuDNN 5.1 on Ubuntu 14.04, but it may work on other machines (directly or with minor modifications). Pull requests and test reports are welcome.
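
If any of the Python libraries are missing, they can usually be installed with pip (a sketch; the exact versions to pin depend on your Python/Tensorflow setup):

pip install numpy scipy laspy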

Quick Start

Training

To train a model to predict vehicle speeds and steering angles:

python train.py --model nvidia_pn --batch_size 16 --max_epoch 125 --gpu 0

The names of the models are consistent with our paper. Log files and network parameters will be saved to the logs folder by default.

To see HELP for the training script:

python train.py -h

We can use TensorBoard to view the network architecture and monitor the training progress.

tensorboard --logdir logs

Evaluation

After training, you can evaluate the performance of models using evaluate.py. To plot the figures or calculate AUC, you may need the matplotlib library installed.

python evaluate.py --model_path logs/nvidia_pn/model.ckpt

Prediction

To get the predictions of test data:

python predict.py

The results are saved in results/results (per segment) and results/behavior_pred.txt (merged) by default. To change the storage location:

python predict.py --result_dir specified_dir

The result directory will be created automatically if it doesn't exist.
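
As a quick sanity check, the merged predictions can be loaded with numpy. This is only a sketch: it assumes results/behavior_pred.txt is a plain whitespace-separated text file with one row per frame; the exact column layout is defined by predict.py.

import numpy as np

# Assumption: whitespace-separated rows of predicted behaviors (e.g. speed, angle).
pred = np.loadtxt("results/behavior_pred.txt")
print("prediction array shape:", pred.shape)
print("first rows:")
print(pred[:5])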

Baseline

| Method    | Setting               |       | Accuracy    | AUC    | ME    | AE   | AME   |
|-----------|-----------------------|-------|-------------|--------|-------|------|-------|
| nvidia-pn | Videos + Laser Points | angle | 70.65% (<5) | 0.7799 | 29.46 | 4.23 | 20.88 |
|           |                       | speed | 82.21% (<3) | 0.8701 | 18.56 | 1.80 | 9.68  |

This baseline is run on the dbnet-2018 challenge data and only nvidia_pn is tested. To measure different architectures comprehensively, several metrics are used, including accuracy under different thresholds, area under curve (AUC), max error (ME), mean error (AE) and mean of max errors (AME).

The implementations of these metrics could be found in evaluate.py.
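
For reference, the error-based metrics can be sketched in a few lines of numpy. This is only an illustration of the definitions above, not the official code (see evaluate.py); in particular, the AUC here is assumed to be the area under the accuracy-vs-threshold curve.

import numpy as np

def accuracy(pred, gt, threshold):
    # Fraction of predictions whose absolute error is below the threshold.
    return np.mean(np.abs(pred - gt) < threshold)

def max_error(pred, gt):
    return np.max(np.abs(pred - gt))

def mean_error(pred, gt):
    return np.mean(np.abs(pred - gt))

def auc(pred, gt, thresholds):
    # Assumed definition: area under the accuracy-vs-threshold curve,
    # normalized by the threshold range.
    accs = [accuracy(pred, gt, t) for t in thresholds]
    return np.trapz(accs, thresholds) / (thresholds[-1] - thresholds[0])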

Contributors

DBNet was developed by MVIG, Shanghai Jiao Tong University* and SCSC Lab, Xiamen University* (alphabetical order).

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{DBNet2018,
  author = {Yiping Chen and Jingkang Wang and Jonathan Li and Cewu Lu and Zhipeng Luo and HanXue and Cheng Wang},
  title = {LiDAR-Video Driving Dataset: Learning Driving Policies Effectively},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2018}
}

License

Our code is released under the Apache 2.0 License. The copyright of DBNet can be checked here.