Deep face segmentation in extremely hard conditions

[Figure: COFW sample images segmented using our method.]

Yuval Nirkin, Iacopo Masi, Anh Tuan Tran, Tal Hassner, and Gerard Medioni.

News (10/07/18)

  • New FCN model released for lower-resolution images (300×300), trained without augmentations. Useful if you have limited GPU memory.
  • A better performing and more efficient U-Net model will be released soon, including training and inference scripts using PyTorch.

Overview

This project provides an interface for face segmentation using Caffe with a fully convolutional neural network. The network was trained on the IARPA Janus CS2 dataset (excluding subjects that are also in LFW) using a novel process for collecting ground-truth face segmentations, involving our tool for semi-supervised face video segmentation. Additional synthetic images were generated by augmenting the data with hands from the EgoHands dataset and with 3D models of glasses and microphones.

If you find this code useful, please make sure to cite our paper in your work:

Yuval Nirkin, Iacopo Masi, Anh Tuan Tran, Tal Hassner, Gerard Medioni, "On Face Segmentation, Face Swapping, and Face Perception", IEEE Conference on Automatic Face and Gesture Recognition (FG), Xi'an, China, May 2018

Please see the project page for more details, additional resources, and updates on this project.

Dependencies

| Library | Minimum Version | Notes                                |
|---------|-----------------|--------------------------------------|
| Boost   | 1.47            | Optional, for the command line tools |
| OpenCV  | 3.0             |                                      |
| Caffe   | 1.0             | ☕️                                   |

Installation

  • Use CMake and your favorite compiler to build and install the library.
  • Download face_seg_fcn8s.zip or face_seg_fcn8s_300_no_aug.zip and extract it into the "data" directory under the installation directory.
  • Add the "bin" directory under the installation directory to your PATH.

Usage

  • To use the library's C++ interface, please take a look at the Doxygen-generated documentation.
  • For Python, go to "interfaces/python" in the installation directory and run:
python face_seg.py
  • For running the segmentation on a single image:
cd path/to/face_segmentation/bin
face_seg_image ../data/images/Alison_Lohman_0001.jpg -o . -m ../data/face_seg_fcn8s.caffemodel -d ../data/face_seg_fcn8s_deploy.prototxt
  • For running the segmentation on all the images in a directory:
cd path/to/face_segmentation/bin
face_seg_batch ../data/images -o . -m ../data/face_seg_fcn8s.caffemodel -d ../data/face_seg_fcn8s_deploy.prototxt
  • For running the segmentation on a list of images, first prepare a file "img_list.txt" in which each line is a path to an image, then run:
cd path/to/face_segmentation/bin
face_seg_batch img_list.txt -o . -m ../data/face_seg_fcn8s.caffemodel -d ../data/face_seg_fcn8s_deploy.prototxt
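The FCN produces a two-channel per-pixel score map (one channel per class), and the final segmentation is the per-pixel argmax. Below is a minimal NumPy sketch of that post-processing step; the Caffe forward pass itself is omitted, and the channel order (background first, face second) is an assumption for illustration:

```python
import numpy as np

def scores_to_mask(scores):
    """Convert a (2, H, W) score map (channel 0 = background,
    channel 1 = face, assumed order) into a binary face mask
    via per-pixel argmax."""
    return np.argmax(scores, axis=0).astype(np.uint8)

def apply_mask(image, mask):
    """Zero out non-face pixels of an (H, W, 3) image."""
    return image * mask[:, :, None]

# Toy 2x2 example: the face channel wins in the top row only.
scores = np.array([[[0.1, 0.2], [0.9, 0.8]],   # background scores
                   [[0.7, 0.6], [0.3, 0.1]]])  # face scores
mask = scores_to_mask(scores)
# mask == [[1, 1], [0, 0]]
```

The same argmax step applies regardless of which model variant (full-resolution or 300×300) produced the scores.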

Note: The segmentation model was trained by cropping the training images using find_face_landmarks. For best results, crop the input images the same way, with a crop resolution below 350×350. A Matlab function is available here.
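As an illustration of that cropping convention, here is a hedged NumPy sketch: `crop_face`, its `scale` factor, and the (x, y, w, h) box format are assumptions for illustration only, not the actual find_face_landmarks interface:

```python
import numpy as np

def crop_face(image, bbox, scale=1.2, max_size=350):
    """Crop a square region around a face bounding box, capping the
    crop size at max_size to stay below the recommended 350x350.
    bbox is (x, y, w, h); these names are illustrative, not the
    real find_face_landmarks API."""
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    # Expand the longer box side, then cap at max_size.
    size = int(min(max(w, h) * scale, max_size))
    h_img, w_img = image.shape[:2]
    # Clamp the crop window to the image bounds.
    x0 = int(np.clip(cx - size / 2.0, 0, max(w_img - size, 0)))
    y0 = int(np.clip(cy - size / 2.0, 0, max(h_img - size, 0)))
    return image[y0:y0 + size, x0:x0 + size]

img = np.zeros((600, 800, 3), np.uint8)
crop = crop_face(img, (300, 200, 180, 220))
# crop.shape[:2] == (264, 264), within the 350-pixel cap
```

A large face box simply gets clamped to a 350×350 crop rather than exceeding the recommended input resolution.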

Important note

In our paper we used a different network for face segmentation. While converting it to the Caffe model used in our end-to-end face swap distribution, we noticed some performance drop. We are working to fix this; please check back here soon for updates on this Caffe model.

Citation

Please cite our paper using the following BibTeX entry if you use our face segmentation network:

@inproceedings{nirkin2018_faceswap,
  title={On Face Segmentation, Face Swapping, and Face Perception},
  booktitle={IEEE Conference on Automatic Face and Gesture Recognition},
  author={Nirkin, Yuval and Masi, Iacopo and Tran, Anh Tuan and Hassner, Tal and Medioni, G\'{e}rard},
  year={2018},
}

Related projects

Copyright

Copyright 2017, Yuval Nirkin, Iacopo Masi, Anh Tuan Tran, Tal Hassner, and Gerard Medioni

The SOFTWARE provided in this page is provided "as is", without any guarantee made as to its suitability or fitness for any particular use. It may contain bugs, so use of this tool is at your own risk. We take no responsibility for any damage of any sort that may unintentionally be caused through its use.