
Towards Fast, Accurate and Stable 3D Dense Face Alignment


By Jianzhu Guo, Xiangyu Zhu, Yang Yang, Fan Yang, Zhen Lei and Stan Z. Li. The code repo is maintained by Jianzhu Guo.

(demo GIF of the tracking result)

[Updates]

  • 2020.9.20: Add features including pose estimation and serialization to .ply and .obj; see the pose, ply, obj options in demo.py.
  • 2020.9.19: Add PNCC (Projected Normalized Coordinate Code) and UV texture mapping features; see the pncc, uv_tex options in demo.py.

Introduction

This work, named 3DDFA_V2 and titled Towards Fast, Accurate and Stable 3D Dense Face Alignment, extends 3DDFA and was accepted to ECCV 2020. The supplementary material is here. The GIF above shows a demo of the tracking result. This repo is the official implementation of 3DDFA_V2.

Compared to 3DDFA, 3DDFA_V2 achieves better performance and stability. In addition, 3DDFA_V2 uses the fast FaceBoxes face detector instead of Dlib. A simple 3D renderer written in C++ and Cython is also included. If you are interested in this repo, try it in this Google Colab! Issues and PRs are welcome 😄

Getting started

Requirements

See requirements.txt; the code is tested on macOS and Linux. Note that this repo requires Python 3. The major dependencies are PyTorch, numpy and opencv-python.
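
After installing the dependencies (for example with pip3 install -r requirements.txt), the snippet below is an optional sanity check that the main packages are importable; it is not part of the repo.

# Optional environment check (not part of this repo) that the main
# dependencies listed in requirements.txt are importable.
import cv2
import numpy
import torch

print('torch', torch.__version__, '| numpy', numpy.__version__, '| opencv', cv2.__version__)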

Usage

  1. Clone this repo
git clone https://github.com/cleardusk/3DDFA_V2.git
cd 3DDFA_V2
  2. Build the Cython version of NMS and Sim3DR
sh ./build.sh
  3. Run the demos
# 1. run on a still image; the -o options include: 2d_sparse, 2d_dense, 3d, depth, pncc, pose, uv_tex, ply, obj
python3 demo.py -f examples/inputs/emma.jpg  # -o [2d_sparse, 2d_dense, 3d, depth, pncc, pose, uv_tex, ply, obj]

# 2. running on videos
python3 demo_video.py -f examples/inputs/videos/214.avi

# 3. run on videos smoothly by looking ahead `n_next` frames
python3 demo_video_smooth.py -f examples/inputs/videos/214.avi

# 4. running on webcam
python3 demo_webcam_smooth.py
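
If you want to call the models from your own code rather than through the demo scripts, the sketch below mirrors the flow of demo.py. The class and method names (FaceBoxes, TDDFA, recon_vers) and the config keys are assumptions taken from reading demo.py, so check the script before relying on them.

# A minimal sketch of detection + 3DMM regression in your own script.
# FaceBoxes, TDDFA and recon_vers are assumed from demo.py and may differ.
import cv2
import yaml

from FaceBoxes import FaceBoxes
from TDDFA import TDDFA

cfg = yaml.safe_load(open('configs/mb1_120x120.yml'))
face_boxes = FaceBoxes()               # fast face detector
tddfa = TDDFA(gpu_mode=False, **cfg)   # 3DMM parameter regressor

img = cv2.imread('examples/inputs/emma.jpg')
boxes = face_boxes(img)                              # face bounding boxes
param_lst, roi_box_lst = tddfa(img, boxes)           # regress 3DMM parameters
ver_lst = tddfa.recon_vers(param_lst, roi_box_lst,   # reconstruct vertices
                           dense_flag=True)          # dense or sparse landmarks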

Tracking is implemented simply as per-frame alignment. If the head pose exceeds 90° or the motion is too fast, the alignment may fail. A threshold is used as a rough check of the tracking state, but it is not very stable.
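
As an illustration only, the sketch below shows one generic way to stabilize per-frame landmarks with a small look-ahead window, in the spirit of the `n_next` option of demo_video_smooth.py; it is not the repo's actual smoothing code.

# Generic sliding-window smoothing of per-frame landmarks (illustrative only).
import numpy as np

def smooth_landmarks(frames, n_prev=1, n_next=1):
    """frames: list of (K, 2) landmark arrays, one per video frame."""
    smoothed = []
    for t in range(len(frames)):
        lo, hi = max(0, t - n_prev), min(len(frames), t + n_next + 1)
        smoothed.append(np.stack(frames[lo:hi]).mean(axis=0))  # average the window
    return smoothed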

You can refer to demo.ipynb or the Google Colab for a step-by-step tutorial on running on a still image.

For example, running python3 demo.py -f examples/inputs/emma.jpg -o 3d will give the result below:

(result of demo.py with -o 3d)

Running on webcam will give:

(webcam demo result)

As noted in the FAQ below, the eye regions are not reconstructed well.

Features (up to now)

  • 2D sparse landmarks
  • 2D dense landmarks
  • 3D
  • Depth
  • PNCC
  • UV texture
  • Pose
  • Serialization to .ply
  • Serialization to .obj
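
To make the serialization options concrete, the snippet below writes a vertex/triangle mesh to a Wavefront .obj file. It is a generic illustration of the format, not the repo's own serialization code.

# A generic Wavefront .obj writer, for illustration only.
import numpy as np

def write_obj(path, vertices, triangles):
    """vertices: (N, 3) float array; triangles: (M, 3) int array, 0-based."""
    with open(path, 'w') as f:
        for x, y, z in vertices:
            f.write(f'v {x:.6f} {y:.6f} {z:.6f}\n')
        for i, j, k in triangles + 1:   # .obj face indices are 1-based
            f.write(f'f {i} {j} {k}\n')

# toy example: a single triangle
write_obj('triangle.obj',
          np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]]),
          np.array([[0, 1, 2]]))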

Configs

The default backbone is MobileNet_V1 with input size 120x120, and the default pre-trained weight is weights/mb1_120x120.pth, described in configs/mb1_120x120.yml. This repo provides another config, configs/mb05_120x120.yml, with a widen factor of 0.5, which is smaller and faster. You can specify the config with the -c or --config option; a loading sketch is given below, followed by the table of released models. Note that the inference time was evaluated using TensorFlow; the benchmark is unstable across different runtimes and frameworks, but I believe onnxruntime should perform best and may be faster than the reported values.
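
The configs are plain YAML, so a minimal loading sketch looks like the one below; only the -c/--config option comes from this README, while the assumption that the keys can be consumed as-is should be verified against the actual config files (PyYAML is assumed to be installed).

# Illustrative config loading; only the -c/--config option is taken from the
# README, the rest is an assumption about how the YAML might be consumed.
import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument('-c', '--config', default='configs/mb1_120x120.yml')
args = parser.parse_args()

with open(args.config) as f:
    cfg = yaml.safe_load(f)  # e.g. backbone, input size, checkpoint path
print(cfg)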

Model          | Input   | #Params | #Macs  | Inference
MobileNet      | 120x120 | 3.27M   | 183.5M | ~6.2ms
MobileNet x0.5 | 120x120 | 0.85M   | 49.5M  | ~2.9ms
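
For reference, a #Params figure like those above can be reproduced for any PyTorch model by summing parameter counts, and a rough latency number by timing forward passes. The torchvision MobileNetV2 below is only a stand-in for the repo's backbone, and timings depend heavily on hardware and runtime.

# Stand-in example: parameter count and rough CPU latency for a MobileNet-style
# model at 120x120 input (not the repo's own backbone or benchmark).
import time
import torch
import torchvision

model = torchvision.models.mobilenet_v2().eval()
print(f'#Params: {sum(p.numel() for p in model.parameters()) / 1e6:.2f}M')

x = torch.randn(1, 3, 120, 120)
with torch.no_grad():
    for _ in range(10):                  # warm-up
        model(x)
    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    print(f'~{(time.perf_counter() - t0) / 100 * 1e3:.1f} ms per forward pass')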

FAQ

  1. What is the training data?

    We use 300W-LP for training; you can refer to our paper for more details about the training. Since 300W-LP contains few images with closed eyes, the eye landmarks are not accurate when the eyes are closed.

Acknowledgement

Citation

If your work or research benefits from this repo, please cite the two BibTeX entries below :)

@inproceedings{guo2020towards,
    title =        {Towards Fast, Accurate and Stable 3D Dense Face Alignment},
    author =       {Guo, Jianzhu and Zhu, Xiangyu and Yang, Yang and Yang, Fan and Lei, Zhen and Li, Stan Z},
    booktitle =    {Proceedings of the European Conference on Computer Vision (ECCV)},
    year =         {2020}
}

@misc{3ddfa_cleardusk,
    author =       {Guo, Jianzhu and Zhu, Xiangyu and Lei, Zhen},
    title =        {3DDFA},
    howpublished = {\url{https://github.com/cleardusk/3DDFA}},
    year =         {2018}
}

Contact

Jianzhu Guo (郭建珠) [Homepage, Google Scholar]: jianzhu.guo@nlpr.ia.ac.cn or guojianzhu1994@foxmail.com.
