The first real-world attack on the MTCNN face detection system to date
Real-world attack on MTCNN face detection system

By Edgar Kaziakhmedov, Klim Kireev, Grigorii Melnikov, Mikhail Pautov and Aleksandr Petiushko

This is the code for the research article. The video is available here.

Abstract

Recent studies have shown that deep learning approaches achieve remarkable results on the face detection task. On the other hand, these advances gave rise to a new problem: the security of deep convolutional neural network models, unveiling potential risks of DCNN-based applications. Even minor input changes in the digital domain can result in the network being fooled. It has since been shown that some deep learning-based face detectors are prone to adversarial attacks not only in the digital domain but also in the real world. In the paper, we investigate the security of the well-known cascade CNN face detection system, MTCNN, and introduce an easily reproducible and robust way to attack it. We propose different face attributes printed on an ordinary black-and-white printer and attached either to a medical face mask or to the face directly. Our approach is capable of breaking the MTCNN detector in a real-world scenario.

The repo

The repository is organized as follows:

  • input_img stores all images to be used for training; they should be colored with patch markers. Each row in the grid must be uniformly colored, and the color difference between neighbouring marker rows must not be greater than 1;
  • mtcnn provides the public FaceNet implementation of MTCNN;
  • utils contains the multi-patch manager;
  • weights stores the weights for the MTCNN sub-networks, taken from the public FaceNet implementation;
  • output_img stores all generated patches (you can also try converting them to B/W before printing).
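The marker-row constraints above (uniform color per row, at most a difference of 1 between neighbouring rows) can be validated with a small helper. This is an illustrative sketch, not part of the repo; the function name `check_marker_rows` is hypothetical:

```python
import numpy as np

def check_marker_rows(rows):
    """Validate patch-marker rows.

    rows: array-like of shape (n_rows, row_width, 3) with integer colors.
    Returns True iff every row is uniformly colored and each pair of
    neighbouring rows differs by at most 1 per channel.
    """
    rows = np.asarray(rows, dtype=np.int64)
    for i, row in enumerate(rows):
        # every pixel in a marker row must share one color
        if not (row == row[0]).all():
            return False
        # neighbouring marker rows must not differ by more than 1
        if i > 0 and np.abs(row[0] - rows[i - 1][0]).max() > 1:
            return False
    return True
```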

The attack is implemented in the adversarial_gen.py source file. In order to train the patches, follow this guideline:

  1. Set the input images (at least 5-6);
  2. Specify the patch parameters;
  3. Specify the losses.
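The three steps above might look like the following configuration sketch. All names here (`PatchConfig`, `ATTACK_IMAGES`, `LOSS_WEIGHTS`) are illustrative assumptions, not the actual API of adversarial_gen.py:

```python
from dataclasses import dataclass

@dataclass
class PatchConfig:
    # illustrative patch parameters: size in pixels and placement
    width: int
    height: int
    location: str  # e.g. "mask" or "cheek"

# 1. Set the input images (at least 5-6 photos with patch markers)
ATTACK_IMAGES = [f"input_img/face_{i}.png" for i in range(6)]

# 2. Specify the patch parameters
PATCHES = [PatchConfig(width=120, height=80, location="mask")]

# 3. Specify the losses: e.g. a weighted sum of the detector's score
#    and a total-variation term that keeps the patch printable
LOSS_WEIGHTS = {"detection_score": 1.0, "total_variation": 1e-3}
```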

The rest of the code is well-documented.

NOTE: paste your own TensorFlow implementation of the resize_area_batch function (the INTER_AREA resize algorithm).
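For reference, INTER_AREA downscaling by an integer factor is equivalent to averaging non-overlapping blocks of pixels. Below is a minimal NumPy sketch of that special case (integer factors only); the repo expects a TensorFlow implementation, so this is just a starting point, not a drop-in:

```python
import numpy as np

def resize_area_batch(images, factor):
    """Downscale a batch of images (N, H, W, C) by an integer factor
    using area averaging, which matches INTER_AREA in this case."""
    n, h, w, c = images.shape
    assert h % factor == 0 and w % factor == 0, "integer factor only"
    # split each spatial axis into (blocks, factor) and average each block
    blocks = images.reshape(n, h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(2, 4))
```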

Citation

@article{kaziakhmedov2019real,
  title={Real-world attack on MTCNN face detection system},
  author={Kaziakhmedov, Edgar and Kireev, Klim and Melnikov, Grigorii and Pautov, Mikhail and Petiushko, Aleksandr},
  journal={arXiv preprint arXiv:1910.06261},
  year={2019}
}

License

This project is licensed under the MIT License - see the LICENSE file for details.
