Prophesee's Automotive Dataset Toolbox

This repository contains a set of Python scripts to evaluate the Automotive Datasets provided by Prophesee.

Requirements

The scripts can be launched with Python 2.x or Python 3.x:

  • io requires NumPy
  • visualize also requires OpenCV with Python bindings.

You can install all the dependencies using pip:

pip install numpy
pip install opencv-python

Get the data

Go to the dataset presentation page and download the dataset (200 GB compressed, 750 GB uncompressed!).

The dataset is split into 10 archive files that can be used independently (two each for the testing and validation sets, and six for the training set). Each archive contains up to 500 files and their annotations.

Unzip using 7zip.

If you use the dataset, please cite the article "A Large Scale Event-based Detection Dataset for Automotive" by P. de Tournemire, D. Nitti, E. Perot, D. Migliore and A. Sironi.

Visualization

To view a few files and their annotations, run:

python3 dataset_visualization.py file_1_td.dat file_2_td.dat ... file_n_td.dat

This will display the event streams as a video grid. You can use it with any number of files, but a large number of them will make the display slow!

Reading files in Python

There is a convenience class to read files that works both for the event .dat files and their annotations. A small tutorial can be found here.
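For illustration, here is a minimal sketch of reading one recording and its annotations. It assumes the reader class lives at src.io.psee_loader.PSEELoader and exposes a load_delta_t method, as in this repository's io module; see the tutorial for the exact API.

import numpy as np
from src.io.psee_loader import PSEELoader

# Open an event recording; printing the loader shows its metadata.
video = PSEELoader("file_1_td.dat")
print(video)

# Read the next 50 ms of events (timestamps are in microseconds) as a
# structured NumPy array; fields typically include 't', 'x', 'y', 'p'.
events = video.load_delta_t(50000)

# Annotations are plain NumPy files readable with np.load.
boxes = np.load("file_1_bbox.npy")
print(boxes.dtype.names)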

Running a baseline

Now you can start by running a baseline, either by looking into the latest results in the event-based literature or by leveraging the e2vid project of the University of Zurich's Robotics and Perception Group to run a frame-based detection algorithm!

Evaluation using the COCO API

If you install the COCO API, you can use the helper function provided in metrics to get mean average precision metrics. Here is a usage example, assuming you saved your detection results in the same format as the ground truth:

import numpy as np
from src.metrics.coco_eval import evaluate_detection

RESULT_FILE_PATHS = ["file1_results_bbox.npy", "file2_results_bbox.npy"]
GT_FILE_PATHS = ["file1_bbox.npy", "file2_bbox.npy"]

# Load detections and ground truth as lists of structured arrays,
# one array per recording, in matching order.
result_boxes_list = [np.load(p) for p in RESULT_FILE_PATHS]
gt_boxes_list = [np.load(p) for p in GT_FILE_PATHS]

# Prints COCO mean average precision metrics.
evaluate_detection(gt_boxes_list, result_boxes_list)
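
To produce result files in that format, you can build structured arrays yourself. The sketch below is hypothetical: the field names and dtypes are assumptions for illustration, so inspect the dtype of an actual ground-truth file (np.load("file1_bbox.npy").dtype) for the authoritative layout before saving your own results.

import numpy as np

# Assumed box layout; verify against the ground-truth files' dtype.
BBOX_DTYPE = np.dtype([("t", "<u8"), ("x", "<f4"), ("y", "<f4"),
                       ("w", "<f4"), ("h", "<f4"),
                       ("class_id", "<u4"), ("confidence", "<f4")])

# Two example detections: (timestamp, x, y, w, h, class_id, confidence).
detections = np.zeros(2, dtype=BBOX_DTYPE)
detections[0] = (10000, 120.0, 50.0, 40.0, 30.0, 0, 0.90)
detections[1] = (10000, 300.0, 80.0, 25.0, 60.0, 1, 0.75)
np.save("file1_results_bbox.npy", detections)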

Contacts

The code is open to contributions, so do not hesitate to ask questions, propose pull requests, or create bug reports. For any other information or inquiries, contact us here.
