The new Dynamic Sparse Local Patch Transformer (DSLPT) is released Here.
The training code is released Here.
PyTorch evaluation code and pretrained models for SLPT (Sparse Local Patch Transformer).
Install the Python dependencies:

```
pip3 install -r requirements.txt
```
- Download and process the WFLW dataset
  - Download the WFLW dataset and annotations from Here.
  - Unzip the WFLW dataset and annotations and move the files into the `./Dataset` directory. Your directory should look like this:

```
SLPT
└───Dataset
    └───WFLW
        ├───WFLW_annotations
        │   ├───list_98pt_rect_attr_train_test
        │   └───list_98pt_test
        └───WFLW_images
            ├───0--Parade
            └───...
```
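If you want to confirm the layout before running evaluation, a small check script like the one below can help. It is not part of the repository; the file and folder names are taken only from the tree above.

```python
# sanity_check_dataset.py -- hypothetical helper, not part of the SLPT repository.
# Verifies that the WFLW files sit where the tree above says they should be.
from pathlib import Path

EXPECTED = [
    "Dataset/WFLW/WFLW_annotations/list_98pt_rect_attr_train_test",
    "Dataset/WFLW/WFLW_annotations/list_98pt_test",
    "Dataset/WFLW/WFLW_images/0--Parade",
]

def check(root: str = ".") -> bool:
    missing = [p for p in EXPECTED if not (Path(root) / p).exists()]
    for p in missing:
        print(f"missing: {p}")
    return not missing

if __name__ == "__main__":
    print("WFLW layout OK" if check() else "WFLW layout incomplete")
```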
- Download the pretrained models from Google Drive.
  - WFLW

|   | Model Name     | NME (%) | FR0.1 (%) | AUC0.1 | download link |
|---|----------------|---------|-----------|--------|---------------|
| 1 | SLPT-6-layers  | 4.143   | 2.760     | 0.595  | download      |
| 2 | SLPT-12-layers | 4.128   | 2.720     | 0.596  | download      |

Put the model in the `./Weight` directory.
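Once a checkpoint is in `./Weight`, it can be inspected with `torch.load` to confirm the download is intact. This is an optional, hypothetical helper, not something the repository provides; the `state_dict` wrapper key is an assumption about the file format.

```python
# inspect_checkpoint.py -- optional, hypothetical helper (not part of SLPT).
# Confirms a downloaded weight file in ./Weight loads as a PyTorch checkpoint.
import torch

ckpt = torch.load("./Weight/WFLW_6_layer.pth", map_location="cpu")
# Some checkpoints wrap the weights in a "state_dict" key; this is an assumption.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"loaded {len(state)} entries")
for name in list(state)[:5]:
    value = state[name]
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(f"  {name}: {shape}")
```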
- Test

```
python test.py --checkpoint=<model_name>
```

For example:

```
python test.py --checkpoint=WFLW_6_layer.pth
```
Note: if you want to use the 12-layer model, you need to change `_C.TRANSFORMER.NUM_DECODER` from 6 to 12 in `./Config/default.py`.
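The `_C.*` naming suggests a yacs-style `CfgNode`; the sketch below shows only the relevant edit under that assumption. Only `_C.TRANSFORMER.NUM_DECODER` and the 6 to 12 change come from this README; the import and surrounding nodes are illustrative.

```python
# Sketch of the relevant part of ./Config/default.py, assuming a yacs CfgNode
# layout; only _C.TRANSFORMER.NUM_DECODER is confirmed by this README.
from yacs.config import CfgNode as CN

_C = CN()
_C.TRANSFORMER = CN()
_C.TRANSFORMER.NUM_DECODER = 6   # change to 12 when testing the 12-layer checkpoint
```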
We also provide a video demo script.
- Download the face detector and copy the weight `yunet_final.pth` to `./Weight/Face_Detector/`.
- Run:

```
python Camera.py --video_source=<Video Path>
```
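For reference, the sketch below shows only the frame-reading loop behind `--video_source` using OpenCV; it is a hypothetical illustration, not the repository's `Camera.py`, and it omits the face detector and the SLPT landmark prediction.

```python
# read_video.py -- hypothetical illustration, not the repository's Camera.py.
# Reads frames from --video_source; detection with yunet_final.pth and SLPT
# landmark prediction would run on each frame in the marked spot.
import argparse
import cv2

parser = argparse.ArgumentParser()
parser.add_argument("--video_source", type=str, required=True, help="path to a video file")
args = parser.parse_args()

cap = cv2.VideoCapture(args.video_source)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... run the face detector and SLPT on `frame` here ...
    cv2.imshow("SLPT demo (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```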
If you find this work or code helpful in your research, please cite:

```
@inproceedings{SLPT,
  title={Sparse Local Patch Transformer for Robust Face Alignment and Landmarks},
  author={Jiahao Xia and Weiwei Qu and Wenjian Huang and Jianguo Zhang and Xi Wang and Min Xu},
  booktitle={CVPR},
  year={2022}
}
```
SLPT is released under the GPL-2.0 license. Please see the LICENSE file for more information.
- This repository borrows or partially modifies the models from HRNet and DETR.
- The video demo employs libfacedetection as the face detector.
- The test videos are provided by DFEW.