This project provides a variety of advanced voiceprint recognition models such as EcapaTdnn, ResNetSE, ERes2Net, and CAM++, and more models may be supported in the future. It also supports MelSpectrogram and Spectrogram data preprocessing methods.
A PyTorch implementation of sound classification that supports EcapaTdnn, PANNS, TDNN, Res2Net, ResNetSE, and other models, as well as a variety of preprocessing methods.
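The two projects above both mention MelSpectrogram and Spectrogram preprocessing. A minimal torchaudio sketch of what that step typically looks like follows; the file name and parameter values are illustrative assumptions, not taken from either project.

```python
# Minimal sketch of Spectrogram / MelSpectrogram preprocessing with torchaudio;
# "speech.wav" and the STFT/mel parameters are assumptions for illustration.
import torch
import torchaudio

waveform, sample_rate = torchaudio.load("speech.wav")  # (channels, samples)

# Linear-frequency spectrogram: magnitude of the STFT.
spectrogram = torchaudio.transforms.Spectrogram(
    n_fft=400, hop_length=160
)(waveform)

# Mel-scaled spectrogram: the same STFT projected onto a mel filterbank.
mel_spectrogram = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=400, hop_length=160, n_mels=80
)(waveform)

# Log compression is the usual final step before feeding a speaker/audio model.
log_mel = torch.log(mel_spectrogram + 1e-6)
print(spectrogram.shape, log_mel.shape)  # (channels, freq_bins, frames)
```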
Verifying a person's identity from voice characteristics, independent of language, using NVIDIA NeMo models (ECAPA-TDNN, SpeakerNet, TitaNet-L).
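A rough sketch of how such language-independent verification can be done with a pretrained NeMo speaker model; the audio file names and the 0.7 decision threshold are assumptions for illustration.

```python
# Hedged sketch: speaker verification via NVIDIA NeMo speaker embeddings.
import torch
import nemo.collections.asr as nemo_asr

# Pretrained speaker models such as "titanet_large", "ecapa_tdnn", or
# "speakerverification_speakernet" can be loaded by name.
model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained("titanet_large")

# Extract fixed-size speaker embeddings from two utterances (assumed file names).
emb_a = model.get_embedding("enrollment.wav").squeeze()
emb_b = model.get_embedding("test.wav").squeeze()

# Cosine similarity decides whether both utterances come from the same speaker;
# the threshold below is an illustrative assumption, not a tuned value.
score = torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0).item()
print("same speaker" if score > 0.7 else "different speaker", score)
```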
Speaker verification for virtual assistants using the ECAPA-TDNN model from the SpeechBrain toolkit and a transfer learning approach, emphasizing inter- and intra-speaker comparison (text-independent and text-dependent).
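A minimal sketch of pairwise, text-independent verification with SpeechBrain's pretrained ECAPA-TDNN. The wav file names are placeholder assumptions, and depending on the SpeechBrain version the import may live under speechbrain.inference.speaker instead of speechbrain.pretrained.

```python
# Hedged sketch: pairwise speaker verification with SpeechBrain ECAPA-TDNN.
from speechbrain.pretrained import SpeakerRecognition

verifier = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_ecapa",
)

# Returns a cosine-similarity score and a same/different-speaker decision.
score, prediction = verifier.verify_files("speaker_a.wav", "speaker_b.wav")
print(float(score), bool(prediction))
```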
This project is a Voice Identification System built using Python, leveraging SpeechBrain and ECAPA-TDNN for speaker verification. The system identifies users by comparing their voice embeddings with stored data, providing a secure and efficient method for user recognition.
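Identification by comparing a new voice embedding against stored enrollment data, as the project above describes, could look roughly like the sketch below. The enrollment dictionary, file paths, and 0.6 threshold are illustrative assumptions, not details from the project.

```python
# Hedged sketch: identify a user by matching an ECAPA-TDNN embedding against
# previously stored enrollment embeddings.
import torch
from speechbrain.pretrained import EncoderClassifier

encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_ecapa",
)

def embed(wav_path: str) -> torch.Tensor:
    """Return a single ECAPA-TDNN embedding vector for one utterance."""
    signal = encoder.load_audio(wav_path)            # (samples,)
    return encoder.encode_batch(signal.unsqueeze(0)).squeeze()

# Enrollment: embeddings computed once and stored per user (assumed paths).
enrolled = {name: embed(f"enroll/{name}.wav") for name in ["alice", "bob"]}

# Identification: pick the enrolled user whose embedding is closest, and
# reject the match if the best score falls below an assumed threshold.
query = embed("incoming_call.wav")
scores = {name: torch.nn.functional.cosine_similarity(query, ref, dim=0).item()
          for name, ref in enrolled.items()}
best = max(scores, key=scores.get)
print(best if scores[best] > 0.6 else "unknown speaker", scores)
```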