End-to-End Traffic Sign Damage Assessment

By Kristian Radoš, Jack Downes, Duc-Son Pham, and Aneesh Krishna

Model training information and a frozen copy of the codebase from a paper submitted for publication at the International Conference on Digital Image Computing: Techniques and Applications (DICTA) 2022, Sydney.

Abstract

Traffic sign damage monitoring is a practical issue facing large operations all over the world. Despite the scale of traffic sign damage and its consequent impact on public safety, damage audits are performed manually. By automating components of damage assessment we can greatly improve the effectiveness and efficiency of the process and in doing so alleviate its negative impact on traffic safety. In this paper, traffic sign damage assessment is explored as a computer vision problem approached with deep learning. We specifically focus on occlusion-type damages that hinder sign legibility. This paper makes several contributions. Firstly, it provides a comprehensive survey of related work on this problem. Secondly, it provides an extension to the generation of synthetic images for such a study. Most importantly, it proposes an extension of the EfficientDet object detection framework to address the challenge. It is shown that synthetic images can be successfully used to train an object detector variant to assess the level of damage, as measured between 0.0 and 1.0, in traffic signs. The extended framework achieves a damage assessment root mean squared error (RMSE) of 0.087 on a synthetic test set while maintaining a mean average precision (mAP) of 86.3% on the typical sign detection task.
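As a point of reference for the numbers above, the damage assessment error is a root mean squared error over per-sign damage values in [0.0, 1.0]. Below is a minimal sketch of that metric only, not the paper's evaluation code; the exact aggregation used in the paper may differ.

import numpy as np

def damage_rmse(predicted, ground_truth):
    # Both inputs hold one damage value per sign, each in [0.0, 1.0].
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean((predicted - ground_truth) ** 2)))

# Toy example: three signs with predicted vs. annotated damage levels.
print(damage_rmse([0.10, 0.55, 0.00], [0.15, 0.50, 0.00]))  # ~0.041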

BibTeX

@inproceedings{rados2022end,
  title={End-to-End Traffic Sign Damage Assessment},
  author={Radoš, Kristian and Downes, Jack and Pham, Duc-Son and Krishna, Aneesh},
  booktitle={2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA)},
  year={2022},
  doi={10.1109/DICTA56598.2022.10034587}
}

EfficientDet Experiments

The code for the EfficientDet models is divided into two directories: EfficientDet_No_TSDA, used for the standard sign detection experiments, and EfficientDet_TSDA, the model modified to perform Traffic Sign Damage Assessment (TSDA).

See the README in each model directory for experiment training configurations and evaluation results.

All experiments were trained and evaluated on the Pawsey Supercomputer using Singularity containers for reproducibility. The contents of the .def file are below and the fully built container is available here.

Bootstrap: docker
From: nvcr.io/nvidia/tensorflow:21.05-tf2-py3
Stage: build
%post
    pip install 'lxml>=4.6.1'
    pip install 'pandas'
    pip install 'absl-py>=0.10.0'
    pip install 'matplotlib>=3.0.3'
    pip install 'numpy>=1.19.4'
    pip install 'Pillow>=6.0.0'
    pip install 'PyYAML>=5.1'
    pip install 'six>=1.15.0'
    pip install 'tensorflow==2.4.0'
    pip install 'tensorflow-addons>=0.12'
    pip install 'tensorflow-hub>=0.11'
    pip install 'neural-structured-learning>=1.3.1'
    pip install 'tensorflow-model-optimization>=0.5'
    pip install 'Cython>=0.29.13'
    pip install 'pycocotools==2.0.3'
    pip install 'opencv-python'
    pip install 'scikit-image'
    pip install 'imutils'
    pip install 'plotly'
    pip install 'tqdm'
    pip install 'wandb'
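As a quick sanity check of the pinned environment, a snippet like the following can be run inside the container (this is not part of the repository; it assumes the container is launched with Singularity's --nv flag so the GPUs are visible):

# check_env.py -- verify the TensorFlow version and GPU visibility inside the container.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)          # expected: 2.4.0, as pinned above
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

For example, with a built image named tsda.sif (a placeholder name), this could be invoked as singularity exec --nv tsda.sif python check_env.py.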

Synthetic Dataset Generation

[download] | The complete collection of datasets used for the above EfficientDet experiments.

The test set directories are equivalent to the original GTSDB test set. The synthetic images used for the _extended datasets were all drawn from the same pool of synthetic images. The 12000_synth_test images were generated using the same set of templates and backgrounds as the _extended datasets, but the two share no images, i.e. they are independent.
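The synthetic images themselves are produced by compositing sign templates, with part of the sign face occluded, onto background scenes; the occluded fraction of the face becomes the damage label. The generation code in this repository is more involved, so the following is only a minimal sketch of the idea, with hypothetical file names:

import numpy as np
from PIL import Image

# Hypothetical inputs: an RGBA sign template and a background scene.
template = np.array(Image.open("templates/speed_limit_60.png").convert("RGBA"))
background = Image.open("backgrounds/street_scene_0001.jpg").convert("RGBA")

# Occlude part of the sign face by zeroing a block of the alpha channel.
visible_before = (template[..., 3] > 0).sum()
h, w = template.shape[:2]
template[: h // 3, : w // 2, 3] = 0                  # knock out one corner of the sign
visible_after = (template[..., 3] > 0).sum()

# Damage label: fraction of the sign face that is no longer visible.
damage = 1.0 - visible_after / visible_before        # value in [0.0, 1.0]

# Paste the occluded sign onto the background and save the synthetic frame.
occluded = Image.fromarray(template)
background.paste(occluded, (400, 250), mask=occluded)
background.convert("RGB").save("synthetic/example.jpg")
print(f"damage label: {damage:.2f}")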

Traffic Sign Templates

[download] | GTSDB classes matched using German Wikipedia and Wikimedia Commons images. Covers 43/43 classes.

Backgrounds

[download] | 1191 images taken from various sources with no visible traffic sign faces.

A set of 1,191 traffic scene backgrounds gathered from four sources was used to generate the synthetic dataset. All were filtered so as to contain no unlabelled real traffic signs. The different sources are described under the headings below.

Google Street View

925 images from Germany and its surrounding countries were pulled from the Google Street View API using Hugo van Kemenade's random-street-view tool. The breakdown by country is as follows:

Code   Country          Images
AUT    Austria             100
BEL    Belgium             100
CHE    Switzerland         100
CZE    Czechia             100
DEU    Germany              25
DNK    Denmark             100
FRA    France              100
GBR    United Kingdom       50
LUX    Luxembourg           50
NLD    Netherlands         100
POL    Poland              100
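random-street-view samples random coordinates inside each country and downloads the corresponding imagery through Google's Street View Static API. The coordinate sampling is handled by the tool itself; the sketch below only illustrates the underlying API call for a single, hand-picked coordinate, with a placeholder API key:

import requests

API_KEY = "YOUR_STREET_VIEW_API_KEY"                 # placeholder -- supply your own key
params = {
    "size": "640x640",                               # requested image resolution
    "location": "48.2082,16.3738",                   # example coordinate (Vienna)
    "fov": 90,                                       # horizontal field of view
    "heading": 0,                                    # compass direction of the camera
    "key": API_KEY,
}
response = requests.get("https://maps.googleapis.com/maps/api/streetview", params=params)
response.raise_for_status()
with open("backgrounds/streetview_example.jpg", "wb") as f:
    f.write(response.content)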

Cityscapes

191 images from Germany were taken from the Cityscapes dataset. They were chosen by automatically filtering out all images containing traffic signs using the ground truth labels provided with the dataset. The code used to do so can be found in cityscapes_backgrounds.py.
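cityscapes_backgrounds.py is the authoritative version of this filtering; the snippet below is only a rough sketch of the same idea, assuming the standard gtFine polygon annotations (files ending in _gtFine_polygons.json) that ship with Cityscapes:

import json
from pathlib import Path

GT_DIR = Path("cityscapes/gtFine/train")             # assumed Cityscapes layout
IMG_DIR = Path("cityscapes/leftImg8bit/train")

def has_traffic_sign(polygon_json: Path) -> bool:
    # True if any annotated object in the frame is labelled as a traffic sign.
    objects = json.loads(polygon_json.read_text())["objects"]
    return any(obj["label"] == "traffic sign" for obj in objects)

# Keep only frames with no traffic sign annotation as candidate backgrounds.
backgrounds = []
for ann in GT_DIR.rglob("*_gtFine_polygons.json"):
    if not has_traffic_sign(ann):
        image_name = ann.name.replace("_gtFine_polygons.json", "_leftImg8bit.png")
        backgrounds.append(IMG_DIR / ann.parent.name / image_name)

print(f"{len(backgrounds)} candidate background frames")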

Geograph

50 images from the UK were manually picked out from www.geograph.org.uk. The webpage for each image can be found by searching on the website using the ID in its filename. 48 images were photographed by David Howard and 2 were photographed by Peter Wood. All credit goes to them.

© Copyright David Howard and licensed for reuse under creativecommons.org/licenses/by-sa/2.0

© Copyright Peter Wood and licensed for reuse under creativecommons.org/licenses/by-sa/2.0

Google Images

25 images, primarily from the UK and Germany, were found using Google Images. Google reverse image search can be used to find the original sources.

To-Do List

Excluding what was discussed in the Future Work section of the paper, the following tasks are still outstanding. Commits to this repository will be made once they are addressed.

  • Experiment with an additional damage layer in the EfficientDet class network instead of a separate damage network.

  • Fix the TSDA model with num_damage_sectors=1 ($m=1$); it is currently only partially implemented and still has errors.

  • Complete the MMDetection and standalone Keras implementations of a YOLOv3 TSDA model; both are currently only partially implemented.
