jwdinius/yolov4-mask-detector

Real-Time Mask Detection with YOLOv4 and OpenCV

[demo gif]

About

Demonstration of how to build a real-time, multi-threaded mask detector using OpenCV's Deep Neural Network (DNN) module for inference with YOLOv4. Darknet is used for training the network. The sections below outline how to reproduce the demonstration shown in the gif above.

The hardware setup used was:

Dependencies (just use Docker)

The following NVIDIA dependencies were used:

  • NVIDIA driver version == 450.80.02
  • CUDA toolkit version == 10.2
  • cuDNN version == 8.0.4.30 (you will need an NVIDIA developer account to download this)

If you have Docker installed with the nvidia-runtime configured properly, you can use the Dockerfile provided with this repo to set up the development environment. I highly recommend this approach.

If you choose not to go the Docker route, you can follow the install steps from the Dockerfile to configure your environment.
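With Docker, the workflow might look like the sketch below. The image tag `mask-detector:dev` and the `/workspace` mount point are illustrative names, not ones the repo defines; adjust to taste.

```shell
# Build the development image from the repo's Dockerfile.
# "mask-detector:dev" is an example tag, not one the repo defines.
docker build -t mask-detector:dev .

# Start a container with GPU access (requires the NVIDIA container runtime),
# forward the host X socket so OpenCV's GUI windows can display, and mount
# the repository into the container at /workspace.
docker run --rm -it \
  --gpus all \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v "$(pwd)":/workspace \
  mask-detector:dev
```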

Training

I provide links to the final trained weights for this demo here; if you want to go straight to inference, skip this section and start from there.

Download the dataset

Dataset. This dataset has been prepared with the prerequisite YOLO format already satisfied.
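For reference, "YOLO format" means each image is paired with a `.txt` label file containing one line per object: a class index (mapping into `class.names`) followed by the box center x, center y, width, and height, all normalized to [0, 1] by the image dimensions. A hypothetical two-object label file (values are made up for illustration):

```
0 0.512 0.430 0.210 0.365
1 0.274 0.551 0.180 0.290
```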

Prepare the dataset

Python scripts are provided to do the final setup of the training and validation sets.
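Darknet expects plain text files listing one image path per line for the training and validation sets. As a rough sketch of what such a setup step does (the `data/obj` layout and the 90/10 split ratio here are assumptions, not the repo's actual scripts):

```shell
# Sketch: shuffle the image list, then write Darknet-style train/valid
# list files. IMG_DIR and the 90/10 split are illustrative assumptions.
IMG_DIR="${IMG_DIR:-data/obj}"
ls "$IMG_DIR"/*.jpg 2>/dev/null | shuf > all.txt
total=$(wc -l < all.txt)
train_count=$(( total * 9 / 10 ))
head -n "$train_count" all.txt > train.txt
tail -n +"$(( train_count + 1 ))" all.txt > valid.txt
echo "train: $train_count, valid: $(( total - train_count ))"
```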

Train the network

If you wish to train your network end-to-end, you can follow the instructions from AlexeyAB's Darknet fork.
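Following that fork's documented conventions, the training invocation typically looks like the line below; the `.data`/`.cfg` names and the `yolov4.conv.137` pretrained convolutional weights are placeholders for your own files.

```shell
# Typical AlexeyAB-Darknet training invocation (file names are placeholders).
# The -map flag periodically evaluates mAP on the validation set and adds it
# to the loss chart that Darknet saves as chart.png.
./darknet detector train data/obj.data cfg/yolov4-custom.cfg yolov4.conv.137 -map
```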

At the end of training, you should see an image like the following:

[training loss/mAP chart]

Download pre-trained task-specific weights

I've already trained the network on the dataset linked above. If you just want to use these weights and skip training, here are the links:

Build

After you've cloned this repository, you can build the code with:

mkdir build && cd build && cmake .. && make -j2

Running the app

View available command-line options

# from build directory
./maskDetector -h

will show all available options, with descriptions. There are three input options available:

  • Running on sample image - you can pass a relative path to an image file with extension png or jpg
  • Running on sample video - you can pass a relative path to a video file with extension mp4
  • Running on webcam - you can pass a device number (e.g. 0) that resolves to your webcam's video device (e.g. /dev/video0)

Running on sample image

Download sample image

# from build directory
# - use "-o" option if you wish to create a video showing the inferred bounding boxes
./maskDetector -i={rel-path-to-image-file} --config=../data/yolo/{desired-cfg-file} --weights={rel-path-to-weights-file} --classes=../data/yolo/class.names {-o={rel-path-to-output-video-file}}

Running on sample video

Download sample video.

# from build directory
# - use "-o" option if you wish to create a video showing the inferred bounding boxes
./maskDetector -i={rel-path-to-video-file} --config=../data/yolo/{desired-cfg-file} --weights={rel-path-to-weights-file} --classes=../data/yolo/class.names {-o={rel-path-to-output-video-file}}

Running on webcam

# from build directory
# - use "-o" option if you wish to create a video showing the inferred bounding boxes
./maskDetector --config=../data/yolo/{desired-cfg-file} --weights={rel-path-to-weights-file} --classes=../data/yolo/class.names {-o={rel-path-to-output-video-file}}

Expected output GUI

[output GUI]

You can adjust the confidence threshold (the minimum score for a detection to be kept) and the non-maximum suppression threshold (the overlap above which boxes are treated as duplicates and merged) while the app is running. Experiment away!

