The frames captured from the user-provided video are fed into the YOLOv3 or YOLOv4 algorithm, which performs vehicle detection. From the YOLO detections, our algorithm computes the center of mass of each bounding box and then uses the Euclidean distance to find the closest car. The detected class, index, and distance to the closest detected vehicle are shown above each bounding box.
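The per-box computation described above can be sketched as follows. This is an illustrative sketch, not the repository's actual code; it assumes boxes in `(x, y, w, h)` format with `(x, y)` as the top-left corner (YOLO wrappers may instead return the box center, in which case the first helper is unnecessary):

```python
import math

def box_center(box):
    """Center ("center of mass") of an axis-aligned box (x, y, w, h),
    assuming (x, y) is the top-left corner."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def closest_distance(target, others):
    """Euclidean distance from the target box center to the nearest
    center among the other detected boxes."""
    cx, cy = box_center(target)
    return min(math.hypot(cx - ox, cy - oy)
               for ox, oy in (box_center(b) for b in others))
```

Note that this distance is in pixels; converting it to a real-world distance requires knowing the scale of the scene.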
This project was developed and tested on Ubuntu 16.04, 18.04, 20.04 and Arch Linux.
Knowledge of the darknet framework and the YOLO object detector is required; further reading can be found here.
The darknet framework present here is a reimplementation based on the work of pjreddie and AlexeyAB.
The darknet Python wrapper provided here is based on madhawav's work.
- GCC/G++: Install Guide
- OpenCV: Install Guide
- Nvidia Drivers (only for GPU acceleration): Install Guide
- CUDA (only for GPU acceleration): Install Guide
- CuDNN (only for GPU acceleration): Install Guide
This section is a guide to installing a Python environment with the requirements of this repository.
First install Anaconda or Miniconda; both give similar results, but the latter requires less disk space.
Now create a Python virtual environment and install the required packages with the commands below. Substitute <environment_name> with a name for your environment:
user@computer:~$ conda create -n <environment_name> anaconda python=3
user@computer:~$ conda activate <environment_name>
(<environment_name>) user@computer:~$ conda install -c loopbio -c conda-forge -c pkgw-forge ffmpeg gtk2 opencv numpy scipy matplotlib cython progress pip
(<environment_name>) user@computer:~$ pip install yolo34py-gpu
First clone the repository and compile the source code. This can be accomplished by:
user@computer:~ $ git clone https://gitlab.com/helton.maia/proj-cnn-vant
user@computer:~ $ cd proj-cnn-vant/darknet
user@computer:~/proj-cnn-vant/darknet $ make
To use the provided scripts, make sure to activate your Python environment, which can be accomplished by:
user@computer:~$ conda activate <environment_name>
This script tracks vehicles and monitors the safe distance between them. Usage:
(<environment_name>) user@computer:~/proj-cnn-vant $ python safeDistanceTool.py cfg data weight video output [-h] [--debug] [--save-video]
- Nvidia Drivers (only for GPU acceleration): Install Guide
- CUDA (only for GPU acceleration): Install Guide
- CuDNN (only for GPU acceleration): Install Guide
This section is a guide to installing a Python environment with the requirements of this repository.
First install Anaconda or Miniconda; both give similar results, but the latter requires less disk space.
Now create a Python virtual environment and install the required packages with the commands below. Substitute <environment_name> with a name for your environment:
user@computer:~$ conda create -n <environment_name> anaconda python=3
user@computer:~$ conda activate <environment_name>
(<environment_name>) user@computer:~$ conda install -c loopbio -c conda-forge -c pkgw-forge ffmpeg gtk2 opencv numpy scipy matplotlib progress pip
(<environment_name>) user@computer:~$ pip install tensorflow yolov4
To use the provided scripts, make sure to activate your Python environment, which can be accomplished by:
user@computer:~$ conda activate <environment_name>
This script tracks vehicles and monitors the safe distance between them. Usage:
(<environment_name>) user@computer:~ $ git clone https://github.com/vitoryeso/ECT-proj-cnn-vant.git
(<environment_name>) user@computer:~ $ cd ECT-proj-cnn-vant/safe_distance
(<environment_name>) user@computer:~/ECT-proj-cnn-vant/safe_distance $ python safeDistanceToolYolov4.py weight video output [-h] [--debug] [--save-video] [--iou-thresh IOU_THRESH] [--score-thresh SCORE_THRESH]
Example:
(<environment_name>) user@computer:~/ECT-proj-cnn-vant/safe_distance $ python safeDistanceToolYolov4.py ../yolov4-tiny-608_10000.weights ../demo.mp4 demo_output --save-video
This video can be used to test the Safe Distance Tool.
The latest weights used in the work can be downloaded here:
Required arguments:
- cfg: Path to the YOLO configuration file (YOLOv3 only).
- data: Path to the network data file (YOLOv3 only).
- weight: Path to the weights file.
- video: Path to the source video.
- output: Name of the log file to be produced.
Optional arguments:
- -h, --help: Show this help message and exit.
- --debug: Shows a window with debugging information during video classification.
- --save-video: Create a video file with the analysis result.
- --iou-thresh: Set the IOU threshold for a detection to be considered (YOLOv4 only). Default is 0.4.
- --score-thresh: Set the probability score threshold for a detection to be considered (YOLOv4 only). Default is 0.6.
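As background for the `--iou-thresh` option: IOU (intersection over union) measures how much two bounding boxes overlap, and is commonly used to merge or suppress duplicate detections. The sketch below shows the standard computation, assuming boxes given as `(x1, y1, x2, y2)` corners; the tool's internal box representation may differ:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2) corners."""
    # Corners of the intersection rectangle (empty if boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An IOU of 1.0 means identical boxes and 0.0 means no overlap, so raising `--iou-thresh` makes duplicate suppression stricter.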
Available commands:
- space - Pause the video stream
- q, esc - Finish the execution