Automatic Large-Scale Data Acquisition via Crowdsourcing for Crosswalk Classification: A Deep Learning Approach


Rodrigo F. Berriel, Franco Schmidt Rossi, Alberto F. de Souza, and Thiago Oliveira-Santos

Computers & Graphics, vol. 68, pp. 32–42, 2017. DOI: 10.1016/j.cag.2017.08.004

Overview

Correctly identifying crosswalks is an essential task for driving and mobility autonomy. Many crosswalk classification, detection, and localization systems have been proposed in the literature over the years. These systems tackle the crosswalk classification problem from different perspectives: satellite imagery, cockpit view (from the top of a car or behind the windshield), and pedestrian perspective. Most works in the literature are designed and evaluated using small, local datasets, i.e., datasets with low diversity. Scaling to large datasets makes the annotation procedure challenging. Moreover, there is still a need for cross-database experiments, because it is usually hard to collect data in the same place and under the same conditions as the final application. In this paper, we present a crosswalk classification system based on deep learning. Crowdsourcing platforms, such as OpenStreetMap and Google Street View, are exploited to enable automatic training via automatic acquisition and annotation of a large-scale database. Additionally, this work compares models trained on the fully automatically acquired and annotated data against models trained on partially manually annotated data. Cross-database experiments were also included to show that the proposed methods are suitable for real-world applications. Our results show that the model trained on the fully automatic database achieved high overall accuracy (94.12%), and that a statistically significant improvement (to 96.30%) can be achieved by manually annotating a specific part of the database. Finally, the cross-database experiments show that both models are robust to many variations of images and scenarios, presenting consistent behavior.
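The automatic acquisition step described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: it assumes the public Overpass API for querying OpenStreetMap crosswalk nodes and the Google Street View Static API for image download (which requires an API key); the bounding box, camera pitch, and `YOUR_API_KEY` are placeholders.

```python
import urllib.parse

def overpass_crossing_query(south, west, north, east):
    """Overpass QL query for all nodes tagged highway=crossing inside
    a bounding box (south, west, north, east)."""
    return (
        "[out:json];"
        f'node["highway"="crossing"]({south},{west},{north},{east});'
        "out;"
    )

def street_view_url(lat, lon, key="YOUR_API_KEY", size="640x640", pitch=-35):
    """Build a Google Street View Static API request URL for one
    crosswalk location. The negative pitch tilts the camera toward the
    road surface; the exact value here is an illustrative assumption."""
    params = {
        "size": size,
        "location": f"{lat},{lon}",
        "pitch": pitch,
        "key": key,
    }
    return ("https://maps.googleapis.com/maps/api/streetview?"
            + urllib.parse.urlencode(params))

# Placeholder bounding box -- replace with your region of interest.
query = overpass_crossing_query(-20.32, -40.35, -20.25, -40.27)
url = street_view_url(-20.30, -40.30)
```

Sending `query` to an Overpass endpoint (e.g. `https://overpass-api.de/api/interpreter`) returns crosswalk coordinates; each coordinate then yields one Street View image that can be labeled positive automatically, since OpenStreetMap asserts a crossing exists there.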


Test with your own data

Pre-trained models are available here.

This Python notebook may help you with the inference process.
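If you prefer plain Python over the notebook, the inference flow is roughly: load an image, resize it to the network's input resolution, normalize, and call the model. The sketch below covers only the preprocessing step using NumPy; the 224×224 input size and the scaling to [0, 1] are assumptions — check the notebook for the exact values the released models expect.

```python
import numpy as np

def preprocess(image, input_hw=(224, 224)):
    """Prepare one RGB image (H, W, 3, uint8) for the classifier.

    Nearest-neighbor resize via index sampling, scale to [0, 1], and
    add a batch dimension. The 224x224 input size is an assumption --
    check the inference notebook for the real value.
    """
    h, w = image.shape[:2]
    rows = np.arange(input_hw[0]) * h // input_hw[0]
    cols = np.arange(input_hw[1]) * w // input_hw[1]
    resized = image[rows[:, None], cols]          # nearest-neighbor resize
    scaled = resized.astype(np.float32) / 255.0   # normalize to [0, 1]
    return scaled[None, ...]                      # add batch dim: (1, H, W, 3)

# Example with a synthetic frame; with a real model you would then call
# something like model.predict(batch) (framework-dependent).
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
batch = preprocess(frame)
print(batch.shape)  # (1, 224, 224, 3)
```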

Datasets

We are releasing three datasets: IARA Daylight, IARA Night, and GOPRO. The IARA datasets were recorded using the infrastructure of the IARA autonomous car (which we are developing in our lab), and the GOPRO dataset was recorded using a GoPro camera attached to the windshield (near the rear-view mirror). Altogether, these three datasets have more than 35,000 annotated images. More details can be found in the paper. Below, you can see some samples of the datasets and how to download each of them.
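Once downloaded, a dataset can be iterated with a few lines of Python. The folder layout and the `crosswalk` / `no_crosswalk` class names below are hypothetical, used only for illustration — check each dataset page for the actual archive structure.

```python
from pathlib import Path

def list_labeled_images(root):
    """Yield (path, label) pairs assuming a root/<class>/*.png layout.

    The 'crosswalk' / 'no_crosswalk' folder names are hypothetical --
    adapt them to the structure of the downloaded archive.
    """
    labels = {"crosswalk": 1, "no_crosswalk": 0}
    for class_dir, label in labels.items():
        for path in sorted((Path(root) / class_dir).glob("*.png")):
            yield path, label
```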

IARA dataset

Daylight

Details and download here.

[Samples: IARA Daylight]

Results on the IARA Daylight dataset: video.

[Results: IARA Daylight]

Night

Details and download here.

[Samples: IARA Night]

Results on the IARA Night dataset: video.

[Results: IARA Night]

GOPRO dataset

Details and download here.

[Samples: GOPRO]

Results on the GOPRO dataset: video.

[Results: GOPRO]

BibTeX

If you find any of the datasets and/or our pre-trained models useful for your research, please cite the paper below.

@article{berriel2017cag,
    Author  = {Rodrigo F. Berriel and Franco Schmidt Rossi and Alberto F. de Souza and Thiago Oliveira-Santos},
    Title   = {{Automatic Large-Scale Data Acquisition via Crowdsourcing for Crosswalk Classification: A Deep Learning Approach}},
    Journal = {Computers \& Graphics},
    Year    = {2017},
    Volume  = {68},
    Pages   = {32--42},
    DOI     = {10.1016/J.CAG.2017.08.004},
    ISSN    = {0097-8493},
}

If any link is broken, feel free to send me an email or open an issue.
