sovit-123/Traffic-Light-Detection-Using-YOLOv3

Real Time Traffic Light Detection using Deep Learning (YOLOv3)

About

This project aims to detect traffic lights in real time using deep learning, as part of autonomous driving technology.

Prediction Video

Progress and TODO

  • Implementation for all the traffic light types is done, but the final model is still being trained regularly to improve it. Check the Download Trained Weights section to get the weight files and try the model on your system.
  • Detecting red (circular) stop sign.
  • Detecting green (circular) go sign.
  • Training for night-time detection => working but not perfect; better updates to come soon.
  • Detecting warningLeft sign.
  • Detecting goLeft sign.
  • Detecting stopLeft sign.
  • Detecting warning sign.
  • Carla support => This one is a bit tricky.

Download Trained Weights

Download the trained weights from here.

  • best_model_12.pt: Trained for 67 epochs on all the traffic signs. Current mAP is 0.919

Get the Dataset

This project uses the LISA Traffic Light Dataset. Download the dataset from Kaggle here.
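The LISA annotations come as CSV files listing one bounding box per row. A minimal parsing sketch, assuming a semicolon separator and the column names shown below (verify both against the actual frameAnnotationsBOX.csv files after downloading):

```python
import csv
import io

# Sample rows in an assumed LISA-style layout: semicolon-separated,
# with a class tag and pixel-space box corners per annotation.
sample_csv = (
    "Filename;Annotation tag;Upper left corner X;Upper left corner Y;"
    "Lower right corner X;Lower right corner Y\n"
    "dayClip1/frame000.jpg;go;512;220;530;260\n"
    "dayClip1/frame001.jpg;stop;600;200;618;244\n"
)

# Parse each row into a dict keyed by the header names.
rows = list(csv.DictReader(io.StringIO(sample_csv), delimiter=";"))
tags = [r["Annotation tag"] for r in rows]
print(tags)  # → ['go', 'stop']
```

The class tags (go, stop, warningLeft, ...) are what the preparation scripts map to the numeric class indices used by the cfg files.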

Steps to Train

  • The current train/test split is 90/10 and the input image size is 608x608, so training may take a long time on a modest GPU. I trained the model on Google Colab with a Tesla T4/P100 GPU; one epoch with all the classes took around 1 hour on a Tesla T4. Also, check the cfg folder and files before training: you have to use the cfg file corresponding to the number of classes you are training on, so if you change the number of classes, you have to change the cfg file too. The current model has been trained on all 6 classes, so the cfg file is yolov3-spp-6cls.cfg.

  • Prepare the data. Take a look at the paths inside the prepare_labels.py file and change them to match your setup.

    • python prepare_labels.py
  • Create the train and validation text files (Current train/validation split = 90/10).

    • python prepare_train_val.py
  • To train on your own system (the current model has been trained for 30 epochs):

    • To train from scratch: python train.py --data <your_data_folder>/traffic_light.data --batch 2 --cfg cfg/yolov3-spp-6cls.cfg --epochs 55 --weights "" --name from_scratch
    • Using COCO pretrained weights: python train.py --data <your_data_folder>/traffic_light.data --batch 4 --cfg cfg/yolov3-spp-6cls.cfg --epochs 55 --multi-scale --img-size 608 608 --weights weights/yolov3-spp-ultralytics.pt --name coco_pretrained
    • To resume training: python train.py --data <your_data_folder>/traffic_light.data --batch 2 --cfg cfg/yolov3-spp-6cls.cfg --epochs <num_epochs_must_be_greater_than_previous_training> --multi-scale --img-size 608 608 --resume --weights weights/<your_weight_file>.pt --name <name_to_be_saved_with>
  • Note: the image size set in the cfg file is not used; only the --img-size argument parsed by the Python scripts takes effect.
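YOLOv3 label files store one box per line as class x_center y_center width height, all normalized to [0, 1], so the preparation step has to convert the dataset's pixel-space corner coordinates. A minimal sketch of that conversion (the function name is illustrative; the real prepare_labels.py may differ in detail):

```python
def to_yolo(x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box (x1, y1, x2, y2) into the normalized
    (x_center, y_center, width, height) form YOLOv3 label files expect.

    Hypothetical helper for illustration only.
    """
    xc = (x1 + x2) / 2.0 / img_w   # box center x, as a fraction of image width
    yc = (y1 + y2) / 2.0 / img_h   # box center y, as a fraction of image height
    w = (x2 - x1) / img_w          # box width, normalized
    h = (y2 - y1) / img_h          # box height, normalized
    return xc, yc, w, h

# A 100x50 box centered at (640, 360) in a 1280x720 frame maps to the
# exact center of the normalized coordinate system:
box = to_yolo(590, 335, 690, 385, 1280, 720)
print(box)
```

Each converted box is then written to a .txt file alongside its image, prefixed by the numeric class index.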

To Detect Using the Trained Model

  • Download the weights here first, and place them in the weights folder.
    • python detect.py --source <path_to_your_test_video_file> --view-img --weights weights/<your_weight_file_name>.pt --img-size 608
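Because --img-size is 608 while most dashcam footage is 1280x720, each frame is first scaled to fit a 608x608 square with padding (the usual YOLO letterbox step). A small sketch of the geometry involved, assuming standard letterboxing (the helper name is illustrative; detect.py handles this internally):

```python
def letterbox_geometry(w, h, size=608):
    """Compute the scale and padding needed to fit a w x h frame into a
    size x size square while preserving aspect ratio.

    Hypothetical helper for illustration only.
    """
    scale = min(size / w, size / h)          # shrink so the longer side fits
    new_w, new_h = round(w * scale), round(h * scale)
    pad_x = (size - new_w) / 2               # gray-border padding, left/right
    pad_y = (size - new_h) / 2               # gray-border padding, top/bottom
    return scale, new_w, new_h, pad_x, pad_y

# A 1280x720 frame scales by 0.475 to 608x342, padded 133 px top and bottom:
geom = letterbox_geometry(1280, 720)
print(geom)  # → (0.475, 608, 342, 0.0, 133.0)
```

The same scale and padding are undone on the model's outputs to map detections back to the original frame.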
