Knowing When to Quit: Selective Cascaded Regression with Patch Attention for Real-Time Face Alignment

License: MIT

Introduction

This is an implementation of the fast and accurate face alignment algorithm presented in the paper "Knowing When to Quit: Selective Cascaded Regression with Patch Attention for Real-Time Face Alignment".

Abstract

Facial landmarks (FLM) estimation is a critical component in many face-related applications. In this work, we aim to optimize for both accuracy and speed and explore the trade-off between them. Our key observation is that not all faces are created equal. Frontal faces with neutral expressions converge faster than faces with extreme poses or expressions. To differentiate among samples, we train our model to predict the regression error after each iteration. If the current iteration is accurate enough, we stop iterating, saving redundant iterations while keeping the accuracy in check. We also observe that as neighboring patches overlap, we can infer all facial landmarks (FLMs) with only a small number of patches without a major accuracy sacrifice. Architecturally, we offer a multi-scale, patch-based, lightweight feature extractor with a fine-grained local patch attention module, which computes a patch weighting according to the information in the patch itself and enhances the expressive power of the patch features. We analyze the patch attention data to infer where the model is attending when regressing facial landmarks and compare it to face attention in humans. Our model runs in real-time on a mobile device GPU, with 95 Mega Multiply-Add (MMA) operations, outperforming all state-of-the-art methods under 1000 MMA, with a normalized mean error of 8.16 on the 300W challenging dataset.
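To make the early-exit idea above concrete, the sketch below shows the control flow of a selective cascade in Python. All names (RegressionStage, refine, predict_error) and the threshold value are hypothetical placeholders for illustration, not classes from this repository.

```python
# Sketch of selective cascaded regression with an early exit ("knowing when to quit").
# RegressionStage, refine, predict_error and the threshold are hypothetical
# placeholders, not the actual classes used in this repository.

class RegressionStage:
    """One cascade iteration: refines the landmarks and predicts its own error."""

    def refine(self, image, landmarks):
        # A real stage crops patches around each landmark, extracts multi-scale
        # features with patch attention, and regresses a landmark update.
        return landmarks  # no-op placeholder

    def predict_error(self, image, landmarks):
        # A real stage is also trained to predict the regression error of its output.
        return 0.0  # placeholder


def align_face(image, initial_landmarks, stages, error_threshold=0.05):
    """Run the cascade, stopping as soon as the predicted error is small enough."""
    landmarks = initial_landmarks
    for stage in stages:
        landmarks = stage.refine(image, landmarks)
        if stage.predict_error(image, landmarks) < error_threshold:
            break  # accurate enough: skip the remaining iterations
    return landmarks
```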

Installation

The codebase is built on top of MDM.

Steps

Run docker:
  1. Download the docker image from here
  2. Load the image: nvidia-docker load < kwtc_docker_image.tar.gz
  3. Run the image: nvidia-docker run -v your_download_dir:dest_dir -it kwtc:new /bin/bash (the -v option mounts your download directory into the container so you can copy files into it)
git clone:
  1. Inside the container: cd /opt/kwtc/
  2. git clone https://github.com/ligaripash/MuSiCa.git
WFLW:
  1. Download the WFLW dataset from here.
  2. Copy WFLW.tar.gz to /opt/kwtc/
  3. gunzip WFLW.tar.gz
  4. tar xvf WFLW.tar
Models:
  1. Download the model from here.
  2. Copy models.tar.gz to /opt/kwtc/
  3. gunzip models.tar.gz
  4. tar xvf models.tar
Run inference on a pretrained model with 49 patches:
  1. cd MuSiCa
  2. python inference.py (inference.json contains the inference parameters). The output is written to /opt/kwtc/output/
  3. Render the calculated landmarks on the images: python show_flm_on_image.py (the output images are written to /tmp/; see the rendering sketch below)
Evaluate the inference against the WFLW ground truth (expression subset):
  1. python evaluate.py (evaluate.json contains the evaluation parameters). You should get an average normalized error of 0.088 (see the evaluation sketch below).
To train the model:
  1. python train.py (train_params.py contains the training parameters)
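For context, the show_flm_on_image.py step above simply overlays the predicted landmarks on the input images. A minimal OpenCV sketch of that kind of rendering is shown below; the paths and the landmark file format (one x y pair per line) are assumptions for illustration, not the script's actual interface.

```python
# Minimal sketch of drawing predicted landmarks on an image with OpenCV.
# The paths and the landmark file format are assumptions for illustration,
# not the actual output format produced by inference.py.
import cv2
import numpy as np

def draw_landmarks(image_path, landmarks_path, output_path):
    image = cv2.imread(image_path)
    landmarks = np.loadtxt(landmarks_path)  # expected shape: (num_points, 2)
    for x, y in landmarks:
        cv2.circle(image, (int(round(x)), int(round(y))), 2, (0, 255, 0), -1)
    cv2.imwrite(output_path, image)

# Hypothetical usage:
# draw_landmarks("face.jpg", "face_landmarks.txt", "/tmp/face_with_flm.jpg")
```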
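Similarly, the normalized error reported by evaluate.py follows, in the usual WFLW protocol, a point-to-point error normalized by the inter-ocular distance. The NumPy sketch below illustrates that metric; the choice of indices 60 and 72 as the outer eye corners of the 98-point WFLW annotation is an assumption based on the common protocol, so verify it against evaluate.py before relying on it.

```python
# Sketch of a WFLW-style normalized mean error (NME).
# Assumption: 98-point annotations with outer eye corners at indices 60 and 72
# (the common WFLW inter-ocular normalization); verify against evaluate.py.
import numpy as np

def normalized_mean_error(pred, gt):
    """pred, gt: (98, 2) arrays of predicted / ground-truth landmark coordinates."""
    inter_ocular = np.linalg.norm(gt[60] - gt[72])
    per_point_error = np.linalg.norm(pred - gt, axis=1)  # Euclidean error per landmark
    return per_point_error.mean() / inter_ocular

def mean_nme(preds, gts):
    """Average NME over a subset; preds, gts: (num_images, 98, 2) arrays."""
    return float(np.mean([normalized_mean_error(p, g) for p, g in zip(preds, gts)]))
```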
