
Real-time video streaming mask detection based on Python. Designed to defeat COVID-19.



WearMask: Real-time In-browser Face Mask Detection



Please use Python 3.8 with all dependencies from requirements.txt installed, including torch>=1.6. Do not use Python 3.9.

$ pip install -r requirements.txt


The data is saved in ./modeling/data/. If you add any extra images and annotations, please re-run the code in 10-preparation-process.ipynb to regenerate the training and test sets.
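A rough sketch of the kind of shuffle-and-split step such a preparation notebook performs (the function name, the 80/20 ratio, and the .jpg-only filter here are illustrative assumptions, not taken from the notebook):

```python
import random
from pathlib import Path

def split_dataset(image_dir, train_ratio=0.8, seed=42):
    """Shuffle annotated images and split them into train/test lists."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(images)
    cut = int(len(images) * train_ratio)
    return images[:cut], images[cut:]
```

Re-running the split after adding images keeps the ratio but reassigns files, which is why the notebook must be re-run whenever the data changes.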

The following steps work on Google Colab.

1. Training

Run this command to train the model, starting from the pretrained weights yolo-fastest.weights (trained on COCO).

$ python3 --cfg yolo-fastest.cfg --data data/ --weights weights/yolo-fastest.weights --epochs 120

Training takes several hours. When it finishes, you can run from utils import utils; utils.plot_results() to generate the training graphs.

After training, you get the model weights along with the network structure yolo-fastest.cfg. You can also use the following command to convert the weights to best.weights in Darknet format.

$ python3 -c "from models import *; convert('cfg/yolo-fastest.cfg', 'weights/')"

2. Inference

With the trained model, inference can be performed directly in this format: python3 --source ... For instance, to use your webcam, run python3 --source 0.

Here are some example cases:


Hint: If you want to convert the model to the ONNX format (optional), please check 20-PyTorch2ONNX.ipynb


The deployment part is based on NCNN and WASM.

1. PyTorch to NCNN

First, you need to compile the NCNN library. For details, see Tutorial for compiling NCNN library.

Once NCNN has been compiled, you can use the tools in the ncnn/build/tools folder to convert the model.

For example, copy the yolo-fastest.cfg and best.weights files of the Darknet model to ncnn/build/tools/darknet, then use this command to convert them to the NCNN format.

./darknet2ncnn yolo-fastest.cfg best.weights yolo-fastest.param yolo-fastest.bin 1

To compact the model, move yolo-fastest.param and yolo-fastest.bin to ncnn/build/tools, then run the ncnnoptimize program.

ncnnoptimize yolo-fastest.param yolo-fastest.bin yolo-fastest-opt.param yolo-fastest-opt.bin 65536 


Now you have yolo-fastest-opt.param and yolo-fastest-opt.bin as the final model.

2. NCNN to WASM

To make the model work in WASM format, you need to re-compile the NCNN library for WASM; see Tutorial for compiling NCNN with WASM for the tutorial.

Then you need to write a C++ program that calls the NCNN model, taking image data as input and returning the model output. The C++ code I used has been uploaded to the facemask-detection repository.

Compile the C++ code with emcmake cmake and emmake make to get yolo.js, yolo.wasm, and yolo.worker.js. These files are the model in WASM format.

3. Build webpage

After establishing the webpage, you can test it locally with the following steps in the facemask-detection repository:

  1. Start an HTTP server: python3 -m http.server 8888
  2. Launch the Google Chrome browser, open chrome://flags, and enable all experimental WebAssembly features.
  3. Re-launch Chrome, open the page, and test it on one frame.
  4. Re-launch Chrome, open the page, and test it by webcam.
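Browsers expect .wasm files to be served with the application/wasm MIME type for WebAssembly streaming compilation. Recent Python versions map this extension already, but if your setup does not, a small stand-in server (hypothetical, not part of the repository) can set it explicitly:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class WasmHandler(SimpleHTTPRequestHandler):
    # Extend the default MIME map so .wasm files are served as
    # application/wasm, which streaming compilation requires.
    extensions_map = dict(SimpleHTTPRequestHandler.extensions_map,
                          **{".wasm": "application/wasm"})

if __name__ == "__main__":
    # Serve the current directory on port 8888,
    # like `python3 -m http.server 8888`.
    HTTPServer(("", 8888), WasmHandler).serve_forever()
```

Run it from the directory containing the webpage and the yolo.* files, then open http://localhost:8888 as in the steps above.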

To publish the webpage, you can use GitHub Pages as a free server. For more details, see the GitHub Pages documentation.


The modeling part is modified from the code by Ultralytics. The model used is modified from the Yolo-Fastest model shared by dog-qiuqiu. Thanks to nihui, the author of NCNN, for her help with the NCNN and WASM approach.


@article{wang2021wearmask,
    title={WearMask: Fast in-browser face mask detection with serverless edge computing for covid-19},
    journal={Electronic Imaging},
    author={Wang, Zekun and Wang, Pengwei and Louis, Peter C. and Wheless, Lee E. and Huo, Yuankai}
}