
WIDER-Face-Detection using MTCNN

This is a TensorFlow implementation of MTCNN, covering both training and testing on the WIDER FACE detection benchmark.

Requirements

  1. Ubuntu 14.04 or 16.04, or macOS 10.*
  2. TensorFlow 1.3 with Python 3.6: https://github.com/tensorflow/tensorflow
  3. OpenCV 3.0 for Python 3.6: pip install opencv-python
  4. NumPy 1.13: pip install numpy
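
A quick way to verify the environment (a minimal check, not part of this repository) is to print the installed versions:

    import tensorflow as tf
    import cv2
    import numpy as np

    # Print the installed versions; they should roughly match the list above.
    print("tensorflow:", tf.__version__)
    print("opencv:    ", cv2.__version__)
    print("numpy:     ", np.__version__)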

Prepare Data

Notice: you should be in ROOT_DIR/prepare_data/ when running the following commands.

  • Step 1. Download only the WIDER FACE training set from the official website and unzip it to replace WIDER_train.

  • Step 2. Run python gen_shuffle_data.py 12 to generate 12-net (PNet) training data. In addition, python gen_tfdata_12net.py provides an example of building a TFRecords file (see the sketch after this list); remember to change and add the necessary parameters.

  • Step 3. Run python tf_gen_12net_hard_example.py to generate hard samples. Run python gen_shuffle_data.py 24 to generate randomly cropped training data. Then run python gen_tfdata_24net.py to combine these outputs and generate a TFRecords file.

  • Step 4. Similar to the previous step. Run python gen_24net_hard_example.py to generate hard samples. Run python gen_shuffle_data.py 48 to generate randomly cropped training data. Then run python gen_tfdata_48net.py to combine these outputs and generate a TFRecords file.
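
For orientation, this is roughly what the gen_tfdata_*.py scripts do when they build a TFRecords file. The sketch below uses the TensorFlow 1.x API; the feature names, label convention, and file paths are illustrative assumptions and may not match the ones used by this repository:

    import tensorflow as tf

    def write_example(writer, image_bytes, label, bbox):
        # image_bytes: an encoded JPEG crop; label: e.g. 1 = face, 0 = non-face;
        # bbox: four bounding-box regression offsets (assumed convention).
        example = tf.train.Example(features=tf.train.Features(feature={
            "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
            "image/label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
            "image/bbox": tf.train.Feature(float_list=tf.train.FloatList(value=list(bbox))),
        }))
        writer.write(example.SerializeToString())

    # Write one positive sample into a 12-net TFRecords file (paths are placeholders).
    with tf.python_io.TFRecordWriter("pnet_train.tfrecords") as writer:
        with open("positive_crop.jpg", "rb") as f:
            write_example(writer, f.read(), 1, (0.0, 0.0, 0.0, 0.0))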

Training Example

Notice: you should be in ROOT_DIR/ when running the following commands.

If you have finished Step 2 above, you can run python src/mtcnn_pnet_test.py to train PNet. Similarly, after Step 3 or Step 4, you can run python src/mtcnn_rnet_test.py or python src/mtcnn_onet_test.py to train RNet and ONet respectively.
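
For context, PNet (the 12-net trained above) is a small fully convolutional network from the MTCNN paper. The sketch below shows its rough structure with TensorFlow 1.x layers; the actual definitions in src/ may differ (for example, the paper uses PReLU activations, replaced here with ReLU for brevity):

    import tensorflow as tf

    def pnet(images):
        # images: 12x12x3 crops at training time; arbitrary size at test time,
        # since the network is fully convolutional.
        net = tf.layers.conv2d(images, 10, 3, activation=tf.nn.relu)      # 12x12 -> 10x10
        net = tf.layers.max_pooling2d(net, 2, 2, padding="same")          # 10x10 -> 5x5
        net = tf.layers.conv2d(net, 16, 3, activation=tf.nn.relu)         # 5x5  -> 3x3
        net = tf.layers.conv2d(net, 32, 3, activation=tf.nn.relu)         # 3x3  -> 1x1
        cls_prob = tf.layers.conv2d(net, 2, 1, activation=tf.nn.softmax)  # face / non-face
        bbox_pred = tf.layers.conv2d(net, 4, 1)                           # bounding-box regression
        landmark_pred = tf.layers.conv2d(net, 10, 1)                      # 5 facial landmarks
        return cls_prob, bbox_pred, landmark_pred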

Testing Example

Notice: you should be in ROOT_DIR/ when running the following command.

You can run python test_img.py YOUR_IMAGE_PATH --model_dir ./save_model/all_in_one/ --save_image True --save_name images/sample_test.jpg --save_file sample.txt to test MTCNN with the provided model. You can also point --model_dir at your own training output, or use the new_saver model, which was trained for fewer epochs. If the directory contains multiple models, the program automatically chooses the one with the largest iteration count.
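
The automatic model selection mentioned above amounts to picking the newest checkpoint in the directory; a minimal sketch of how that can be inspected with TensorFlow 1.x (test_img.py itself may organize this differently):

    import tensorflow as tf

    model_dir = "./save_model/all_in_one/"

    # The 'checkpoint' file inside model_dir records every saved model; the entry
    # with the largest iteration count is the one restored for testing.
    state = tf.train.get_checkpoint_state(model_dir)
    if state is not None:
        print("available checkpoints:", list(state.all_model_checkpoint_paths))
        print("checkpoint to restore:", tf.train.latest_checkpoint(model_dir))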

Results

Detection result on a sample image: sample.jpg

References

[1] MTCNN paper link: Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks

[2] MTCNN official code: MTCNN with Caffe
