PyTorch implementation of "M3FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System", IEEE TDSC 2024.
Illustration of the M3FAS system: the mobile device's front camera captures the input RGB face photo while the top speaker emits a customized acoustic signal, and the microphone collects the reflected signal modulated by the live/spoof face. The captured face photo and acoustic recording are then fed to M3FAS for the final decision.

Google Drive: https://drive.google.com/drive/folders/147zfFSMcHz6NWeWx-ZanF3ogIm7ClTrT?usp=sharing
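The two-modality pipeline described above can be sketched in PyTorch. This is only an illustrative toy, not the actual M3FAS architecture: one branch encodes the RGB face photo, the other encodes the acoustic recording (assumed here to be preprocessed into a 1-channel spectrogram), and the concatenated features yield a live/spoof decision. All layer shapes are placeholder choices.

```python
import torch
import torch.nn as nn

class TwoBranchFAS(nn.Module):
    """Toy two-branch (vision + acoustic) anti-spoofing sketch.

    NOT the real M3FAS network -- just the general fusion idea:
    encode each modality separately, then classify the fused features.
    """

    def __init__(self, feat_dim=64):
        super().__init__()
        # Vision branch: RGB face image -> feature vector
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Acoustic branch: 1-channel spectrogram -> feature vector
        self.audio = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Fusion head: concatenated features -> live/spoof logits
        self.head = nn.Linear(2 * feat_dim, 2)

    def forward(self, img, spec):
        fused = torch.cat([self.vision(img), self.audio(spec)], dim=1)
        return self.head(fused)

model = TwoBranchFAS()
logits = model(torch.randn(4, 3, 112, 112),  # batch of face photos
               torch.randn(4, 1, 64, 64))    # batch of spectrograms
print(logits.shape)  # torch.Size([4, 2])
```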
- Download the Echoface-Spoof database.
- Run "python train_cross_device.py", "python train_cross_env.py", or "python train_cross_id.py" to reproduce the results (change the training, validation, and test CSV paths for the different settings).
- Download the pretrained models for inference.
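A minimal sketch of restoring downloaded weights for inference. The checkpoint path, filename, and model below are all hypothetical placeholders; in practice the released checkpoints would be loaded into the repo's actual model class.

```python
import os
import tempfile
import torch
import torch.nn as nn

# Placeholder network standing in for the real M3FAS model.
model = nn.Linear(8, 2)

# Create a demo checkpoint; a downloaded pretrained file would replace this.
ckpt_path = os.path.join(tempfile.gettempdir(), "m3fas_demo.pth")  # hypothetical name
torch.save(model.state_dict(), ckpt_path)

# Inference: restore the weights, switch to eval mode, disable gradients.
model.load_state_dict(torch.load(ckpt_path, map_location="cpu"))
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(torch.randn(1, 8)), dim=1)
print(probs)  # live/spoof probabilities; each row sums to 1 (up to float error)
```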
See environment.txt for the required dependencies.
@article{kong2024m,
  title={M$^{3}$FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System},
  author={Kong, Chenqi and Zheng, Kexin and Liu, Yibing and Wang, Shiqi and Rocha, Anderson and Li, Haoliang},
  journal={IEEE Transactions on Dependable and Secure Computing},
  year={2024},
  publisher={IEEE}
}