
Vehicle Detection and Tracking using DeepSort Algorithm and YOLOv4

This repository contains code for training and testing a pipeline for vehicle detection, tracking, and counting built on the DeepSort algorithm and the YOLOv4 object detector, with both networks trained, tuned, and evaluated on the DETRAC dataset. The repository provides step-by-step instructions for preparing the data, training the models, and testing them.

Modules Diagram

[Figure: pipeline modules for detection and tracking]

Getting Started

Please make sure to clone this GitHub repository directly into your home directory:

git clone https://github.com/Younes43/Vehicule_Detection_Tracking.git

The dataset used in this project is the DETRAC dataset: https://detrac-db.rit.albany.edu/download.

All the data is already downloaded and stored in the ~/data directory.
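Based on the paths used in the commands below, ~/data is expected to contain at least the following (a sketch inferred from this README, not an exhaustive listing):

Insight-MVT_Annotation_Train/        DETRAC training images
DETRAC-Train-Annotations-XML-v3/     DETRAC XML annotations
yolov4.conv.137                      pretrained darknet convolutional weights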

Data Preparation

Please refer to the DETRAC Tools repository for detailed documentation on data preparation for the DETRAC dataset, then follow the commands below to prepare the dataset.

DeepSort Data Preparation

  1. Activate the virtual environment for YOLOv4 GPU.
conda activate yolov4-gpu
  2. Navigate to the detrac_tools directory.
cd ~/Vehicule_Detection_Tracking/Multi-Camera-Live-Object-Tracking/detrac_tools
  3. Run the crop_dataset.py script to prepare the DeepSort dataset.
python crop_dataset.py --DETRAC_images ../../../data/Insight-MVT_Annotation_Train/ --DETRAC_annots ../../../data/DETRAC-Train-Annotations-XML-v3/ --output_train ./Detrac_deepsort/bounding_box_train/ --occlusion_threshold=0.6 --truncation_threshold=0.6 --occurrences=50
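Once the script finishes, a quick sanity check that crops were produced (the output path comes from the command above):

ls Detrac_deepsort/bounding_box_train/ | head
find Detrac_deepsort/bounding_box_train/ -type f | wc -l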

YOLOv4 Preparation

  1. Navigate to the detrac_tools directory.
cd ~/Vehicule_Detection_Tracking/Multi-Camera-Live-Object-Tracking/detrac_tools
  2. Run the detrac_to_yolo.py script to prepare the YOLOv4 dataset.
python detrac_to_yolo.py --DETRAC_images ../../../data/Insight-MVT_Annotation_Train/ --DETRAC_annots ../../../data/DETRAC-Train-Annotations-XML-v3/ --output_train DETRAC_YOLO_training/ --occlusion_threshold=0.6 --truncation_threshold=0.6
  3. Copy the produced files into the darknet directory.
cp train.txt valid.txt detrac_classes.names DETRAC.data ~/Vehicule_Detection_Tracking/darknet/data/
cp yolov4-obj.cfg ~/data/yolov4.conv.137 ~/Vehicule_Detection_Tracking/darknet/cfg/
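For reference, a darknet .data file is a small key-value config; the DETRAC.data generated above should look roughly like this (a sketch only: the class count assumes DETRAC's four vehicle categories, and the exact values are written by detrac_to_yolo.py):

classes = 4
train = data/train.txt
valid = data/valid.txt
names = data/detrac_classes.names
backup = backup/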

DeepSort Training

Please refer to the DeepSort: Cosine Metric Learning repository for detailed documentation on DeepSort training, then follow the commands below to train the DeepSort model.

  1. Activate the tf15-gpu virtual environment.
conda activate tf15-gpu
  2. Navigate to the cosine_metric_learning directory.
cd ~/Vehicule_Detection_Tracking/cosine_metric_learning/
  3. Run the train_market1501.py script to train the DeepSort model.
python train_market1501.py  --dataset_dir=../Multi-Camera-Live-Object-Tracking/detrac_tools/Detrac_deepsort/  --loss_mode=cosine-softmax  --log_dir=./output/Detrac/  --run_id=cosine-softmax
  4. Open a new terminal and start TensorBoard to visualize the training progress.
tensorboard --logdir ./output/Detrac/cosine-softmax/ --host=0.0.0.0 --port 6006
  5. Open a web browser and go to http://localhost:6006/ to view TensorBoard (see the note below if you are training on a remote machine).
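Note: the --host=0.0.0.0 flag suggests TensorBoard may be running on a remote machine. In that case, forward the TensorBoard port to your local machine first; user@server below is a placeholder for your own host:

ssh -L 6006:localhost:6006 user@server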

DeepSort Model Evaluation

  1. Open a new terminal and run the evaluation script for the DeepSort model (setting CUDA_VISIBLE_DEVICES="" runs the evaluation on the CPU).
CUDA_VISIBLE_DEVICES="" python train_market1501.py  --mode=eval  --dataset_dir=../Multi-Camera-Live-Object-Tracking/detrac_tools/Detrac_deepsort/  --loss_mode=cosine-softmax  --log_dir=./output/Detrac/  --run_id=cosine-softmax  --eval_log_dir=./eval_output/Detrac
  2. Open a new terminal and start TensorBoard to visualize the evaluation results.
tensorboard --logdir ./eval_output/Detrac/cosine-softmax/ --host=0.0.0.0 --port 6007
  3. Open a web browser and go to http://localhost:6007/ to view TensorBoard.

DeepSort Model Export

To export your trained model for use with the deep_sort tracker, run the following command, replacing model.ckpt-47054 with your latest checkpoint from output/Detrac/cosine-softmax/:

python train_market1501.py --mode=freeze --restore_path=output/Detrac/cosine-softmax/model.ckpt-47054

This will create a detrac-deepsort.pb file which can be supplied to Deep SORT.
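If you prefer not to look up the checkpoint number by hand, a small shell snippet can select the newest one automatically (a sketch, assuming standard TensorFlow checkpoint files in the output directory):

# pick the checkpoint with the highest step number, then freeze it
LATEST=$(ls output/Detrac/cosine-softmax/model.ckpt-*.index | sort -V | tail -n 1)
python train_market1501.py --mode=freeze --restore_path="${LATEST%.index}"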

YOLOv4 Training

Please refer to the YOLOv4 Darknet repository for more detailed documentation on YOLOv4 Darknet training, then follow the commands below to train the YOLOv4 model.

  1. Navigate to the darknet directory.
cd ~/Vehicule_Detection_Tracking/darknet
  2. Compile darknet by running the following commands.
make clean
rm -rf build_release
mkdir build_release
cd build_release
cmake ..
cmake --build . --target install --parallel 8
  3. Launch the training of YOLOv4 using the following command.
./darknet detector train data/DETRAC.data cfg/yolov4-obj.cfg cfg/yolov4.conv.137 -dont_show -mjpeg_port 8090 -map
  4. In a browser, open http://localhost:8090/ or http://127.0.0.1:8090/ to visualize the loss curve.
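Darknet periodically snapshots weights into the backup/ directory (the backup/yolov4-obj_best.weights file used in the testing section below comes from there). If training is interrupted, you can resume from the most recent snapshot; the _last.weights name assumes darknet's default snapshot naming:

./darknet detector train data/DETRAC.data cfg/yolov4-obj.cfg backup/yolov4-obj_last.weights -dont_show -mjpeg_port 8090 -map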

Testing YOLOv4 + DeepSort

  1. Activate the virtual environment for YOLOv4 GPU.
conda activate yolov4-gpu
  2. Convert the darknet weights to a TensorFlow model.
python save_model.py --model yolov4 --weights ../darknet/backup/yolov4-obj_best.weights --output ./checkpoints/yolov4
  3. Run the YOLOv4 + DeepSort object tracker on a video.
python object_tracker.py --video ./data/video/cars.mp4 --output ./outputs/cars_output.avi --yolo_weights ./checkpoints/yolov4 --deep_sort_weights ../cosine_metric_learning/detrac-deepsort.pb  --dont_show
  4. You will find the output video in ./outputs/.

Evaluation on the DETRAC Test Set

The evaluation/ directory contains scripts for preparing and running the evaluation.

  1. Activate the yolov4-gpu virtual environment if it is not already active.
conda activate yolov4-gpu
  2. Run the YOLOv4 + DeepSort object tracker on a video from the test set. (This example evaluates the video MVI_40712.mp4; change 'MVI_40712' to the name of the video you want to evaluate the model on.)
python object_tracker.py --video ../evaluation/MVI_40712.mp4 --output ./outputs/MVI_40712_output.avi --output_file MVI_40712_output.csv --yolo_weights ./checkpoints/yolov4 --deep_sort_weights ../cosine_metric_learning/detrac-deepsort.pb  --dont_show 

You can visually check the resulting video in ./outputs/MVI_40712_output.avi.

  3. Prepare the files needed for evaluation.
python prepare_evaluation.py --gt_xml MVI_40712.xml --gt_csv MVI_40712.csv --model_out ../yolov4-deepsort/MVI_40712_output.csv --model_out_filtered MVI_40712_output_filtered.csv

MVI_40712.xml is the DETRAC annotation file for the video MVI_40712.mp4.

  4. Run the evaluation.
python run_evaluation.py --gt MVI_40712.csv --pred MVI_40712_output_filtered.csv 
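If you have prepared ground truth and filtered output files for several test videos, a simple shell loop runs the evaluation for each (MVI_40713 is an illustrative placeholder):

for v in MVI_40712 MVI_40713; do
    python run_evaluation.py --gt ${v}.csv --pred ${v}_output_filtered.csv
done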

Each run should print a results table containing the following metrics:

num_frames : Total number of frames in the video
Rcll : Recall
Prcn : Precision
GT : Total number of ground-truth objects
FP : Number of false positives
FN : Number of false negatives
IDsw : Number of identity switches
MOTA : Multiple Object Tracking Accuracy
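For context, MOTA combines the error counts above into a single score over the whole video:

MOTA = 1 - (FN + FP + IDsw) / GT

where GT is the total number of ground-truth objects summed over all frames; a perfect tracker scores 1.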

References and Credits

The following GitHub repositories were used in the development of this project:

DETRAC Tools (Multi-Camera-Live-Object-Tracking/detrac_tools)
DeepSort: Cosine Metric Learning (cosine_metric_learning)
YOLOv4 Darknet (darknet)

We would like to thank the authors of these repositories for their contributions to the field of computer vision and object tracking, which greatly aided the development of our project.

Please refer to the respective repositories for more information and details on how to use their code.
