Audio-visual voice biometrics is a speaker recognition task that leverages both the auditory and visual speech present in a video. Portrait-based and linguistic-based speaker characteristics are extracted through temporal dynamics modeling. The task encompasses both conventional (audio-only) speaker recognition and lip biometrics.
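To make the idea concrete, here is a minimal two-branch sketch of how per-modality temporal encoders can be combined into a single speaker embedding. This is illustrative only, not the paper's architecture; the module choices and dimensions are assumptions.

```python
# Minimal two-branch audio-visual speaker embedding sketch (illustrative,
# NOT the paper's architecture; encoders and dimensions are placeholders).
import torch
import torch.nn as nn

class AudioVisualSpeakerNet(nn.Module):
    def __init__(self, audio_dim=80, visual_dim=512, embed_dim=192):
        super().__init__()
        # Temporal dynamics modeling for each modality (placeholder encoders).
        self.audio_encoder = nn.GRU(audio_dim, embed_dim, batch_first=True)
        self.visual_encoder = nn.GRU(visual_dim, embed_dim, batch_first=True)
        self.fusion = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, audio_feats, lip_feats):
        # audio_feats: (B, T_a, audio_dim); lip_feats: (B, T_v, visual_dim)
        _, a = self.audio_encoder(audio_feats)   # final state: (1, B, embed_dim)
        _, v = self.visual_encoder(lip_feats)    # final state: (1, B, embed_dim)
        # Fuse utterance-level representations from both modalities.
        joint = torch.cat([a.squeeze(0), v.squeeze(0)], dim=-1)
        return self.fusion(joint)                # speaker embedding: (B, embed_dim)
```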
This is the official implementation of the ICASSP 2023 paper "Cross-Modal Audio-Visual Co-Learning for Text-Independent Speaker Verification".
Please refer to ./preprocessing to extract lip regions from the training and test datasets.
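For orientation, the toy sketch below shows what lip extraction roughly involves. The actual scripts in ./preprocessing define the real pipeline (detector, crop size, color handling), and `mouth_center` here stands in for coordinates that a facial landmark detector would supply.

```python
# Illustrative lip-region extraction from a video; the real pipeline lives
# in ./preprocessing and may differ in detector, crop size, and formats.
import cv2
import numpy as np

def extract_lip_frames(video_path, mouth_center, crop_size=88):
    """Crop a fixed-size grayscale patch around an (x, y) mouth center.

    In practice, mouth_center would come from a facial landmark detector
    and would be re-estimated per frame; here it is assumed given.
    """
    cap = cv2.VideoCapture(video_path)
    half = crop_size // 2
    x, y = mouth_center
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(gray[y - half:y + half, x - half:x + half])
    cap.release()
    return np.stack(frames)  # (num_frames, crop_size, crop_size)
```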
Once the lip data for the training and test sets is ready, you can run ./main_audiovisuallip_DATASET_CM.py for both training and testing, switching only the stage in the code. Before doing so, be sure to adapt ./conf/config_audiovisuallip_DATASET_new.yaml to your own configuration.
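As a rough illustration of the workflow (the variable names below are hypothetical; the real logic lives in main_audiovisuallip_DATASET_CM.py), the stage switch and config loading might look like:

```python
# Hypothetical sketch of the stage switch; consult the actual entry point
# main_audiovisuallip_DATASET_CM.py for the real variable names and flow.
import yaml

with open("conf/config_audiovisuallip_DATASET_new.yaml") as f:
    config = yaml.safe_load(f)  # dataset paths, hyperparameters, etc.

stage = "train"  # flip to "test" after training finishes
if stage == "train":
    print("Training with config:", config)
else:
    print("Scoring test trials with config:", config)
```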
You can find the pretrained audio-only and visual-only models here: https://drive.google.com/drive/folders/1IalsNtmDH-qFnfgmn_O92J1MUHCaQepl?usp=sharing
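A hedged sketch of inspecting the downloaded checkpoints is shown below; the filenames and state-dict layout are assumptions about the Google Drive contents, not a documented format.

```python
# Hedged sketch: filenames and checkpoint structure are assumptions.
import torch

audio_ckpt = torch.load("pretrained/audio_only.pt", map_location="cpu")
visual_ckpt = torch.load("pretrained/visual_only.pt", map_location="cpu")

# Inspect what each checkpoint contains before wiring it into a model.
for name, ckpt in [("audio", audio_ckpt), ("visual", visual_ckpt)]:
    state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    print(name, "->", list(state)[:5], "...")
```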
If you find this repository useful, please cite the following papers.

AVLip:
```bibtex
@inproceedings{liu2023cross,
  title={Cross-Modal Audio-Visual Co-Learning for Text-Independent Speaker Verification},
  author={Liu, Meng and Lee, Kong Aik and Wang, Longbiao and Zhang, Hanyi and Zeng, Chang and Dang, Jianwu},
  booktitle={ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2023},
  organization={IEEE}
}
```
DeepLip:
```bibtex
@inproceedings{liu2021deeplip,
  title={DeepLip: A Benchmark for Deep Learning-Based Audio-Visual Lip Biometrics},
  author={Liu, Meng and Wang, Longbiao and Lee, Kong Aik and Zhang, Hanyi and Zeng, Chang and Dang, Jianwu},
  booktitle={2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
  pages={122--129},
  year={2021},
  organization={IEEE}
}
```