An unofficial implementation of "Face X-Ray for More General Face Forgery Detection" by Lingzhi Li, Jianmin Bao, Ting Zhang, Hao Yang, Dong Chen, Fang Wen, and Baining Guo (CVPR 2020).
Install dependencies:
```bash
pip install -r requirements.txt
```
You can extract raw faces, manipulated faces, manipulation masks, and face landmarks (saved as .npy files) from the FaceForensics++ dataset by running:
```bash
python extract_faces.py -d ./dataset/FaceForensics++ -o ./dataset/FaceForensics++/extract -c raw
```
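For reference, below is a minimal sketch of how landmarks can be extracted per frame with dlib's standard 68-point shape predictor (the file names here are illustrative; `extract_faces.py` is the authoritative implementation and may crop and name files differently):

```python
import cv2
import dlib
import numpy as np

# Standard dlib 68-point predictor; the "dlib model" download below provides it.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

frame = cv2.imread("frame_0000.png")  # illustrative file name
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector(gray, 1)  # upsample once to catch smaller faces
if faces:
    shape = predictor(gray, faces[0])
    landmarks = np.array([[p.x, p.y] for p in shape.parts()])  # shape (68, 2)
    np.save("frame_0000.npy", landmarks)  # landmarks saved as .npy, as above
```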
Note that the dataset directory should be organized as follows: the `manipulated_sequences` and `original_sequences` directories keep the default structure produced by the FaceForensics++ download script.
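With the default FaceForensics++ download script (shown here for the raw compression level; other compression levels add sibling folders), that layout looks roughly like this:

```text
dataset/FaceForensics++
├── original_sequences
│   └── youtube
│       └── raw
│           └── videos
│               └── *.mp4
└── manipulated_sequences
    ├── Deepfakes
    │   ├── raw
    │   │   └── videos
    │   │       └── *.mp4
    │   └── masks
    │       └── videos
    ├── Face2Face
    ├── FaceSwap
    └── NeuralTextures
```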
Example outputs: extracted real faces, extracted manipulated faces from Deepfakes, and extracted manipulation masks from Deepfakes.
You can train the model by running:
```bash
python train.py -c ./experiments/default.yaml --hrnet_model ./HRNet/pretrained/hrnetv2_pretrained.pth
```
Experiment configuration can be modified in `./experiments/default.yaml`.
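As a rough illustration, such a configuration might contain entries like the sketch below; the key names here are hypothetical, so consult `./experiments/default.yaml` for the actual schema:

```yaml
# Hypothetical example only -- the real key names live in ./experiments/default.yaml.
dataset_root: ./dataset/FaceForensics++/extract
batch_size: 16
learning_rate: 0.0002
num_epochs: 100
input_size: 256
```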
The HRNet model is borrowed from the official HRNet repository.
The blended images described in the paper are generated on the fly during training (see `dataset.py`); the generation code is borrowed from the authors' repository.
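To illustrate the idea behind that online generation, the sketch below blends a donor face into a target image using a blurred convex-hull mask M and computes the face X-ray as 4·M·(1−M), following the paper's definition. This is a minimal sketch only: the actual `dataset.py` additionally applies random mask deformation, color correction, and varying blending ratios.

```python
import cv2
import numpy as np

def blend_and_xray(foreground, background, landmarks):
    """Blend a donor face into a background image; return blended image + face X-ray.

    foreground/background: aligned HxWx3 uint8 images;
    landmarks: (N, 2) array of facial landmark coordinates.
    """
    h, w = background.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    hull = cv2.convexHull(landmarks.astype(np.int32))
    cv2.fillConvexPoly(mask, hull, 255)
    # Soft blending mask M in [0, 1]; blurring softens the boundary.
    m = cv2.GaussianBlur(mask, (15, 15), 5).astype(np.float64) / 255.0
    m3 = m[..., None]
    blended = (m3 * foreground + (1.0 - m3) * background).astype(np.uint8)
    # Paper's face X-ray definition: zero inside/outside, peaks on the boundary.
    xray = 4.0 * m * (1.0 - m)
    return blended, xray
```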
You can evaluate the trained model by running:
```bash
python evaluate.py --ckpt_dir ./result/xxx -d Deepfakes -c raw -r ./dataset/FaceForensics++ -o ./log
```
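A frame-level AUC like the one reported below can be computed as in this sketch, assuming per-frame fakeness scores and 0/1 labels (both arrays here are hypothetical; `evaluate.py` handles the actual data loading):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-frame outputs: label 1 = fake, 0 = real;
# score = the model's predicted fakeness probability per frame.
labels = np.array([0, 0, 1, 1, 1])
scores = np.array([0.10, 0.25, 0.80, 0.65, 0.95])
print(f"AUC: {roc_auc_score(labels, scores) * 100:.3f}")  # AUC in percent
```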
You can use the trained model to detect a video by running:
```bash
python detect_video.py --ckpt_dir ./result/result_xxxx -v myVideo.mp4 -o ./detect_result
```
The detection script is borrowed from the official FaceForensics++ repository.
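Conceptually, such per-frame detection looks like the sketch below, where `model_predict` is a hypothetical helper that maps a cropped face to a fakeness probability (model loading omitted); `detect_video.py` is the authoritative version:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
cap = cv2.VideoCapture("myVideo.mp4")
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    faces = detector(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 0)
    if not faces:
        continue  # skip frames without a detected face
    f = faces[0]
    top, left = max(f.top(), 0), max(f.left(), 0)
    crop = cv2.resize(frame[top:f.bottom(), left:f.right()], (256, 256))
    scores.append(model_predict(crop))  # model_predict: hypothetical inference helper
cap.release()
print("mean per-frame fakeness score:", float(np.mean(scores)))
```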
- HRNet pretrained model
- My trained Face X-Ray model
- dlib model
I only experimented on the FaceForensics++ dataset. Due to limited time, I did not explore the data preprocessing and training details further, so the AUC is relatively low compared to the results reported in the paper.
Model | Training Set | Deepfakes AUC (%) | Face2Face AUC (%) | FaceSwap AUC (%) | NeuralTextures AUC (%)
---|---|---|---|---|---
Face X-ray | Blended Images | 99.252 | 96.276 | 98.666 | 96.408