TensorFlow implementation of text-independent speaker verification, based on Generalized End-to-End Loss for Speaker Verification and Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis.
Both papers above used an internal dataset consisting of 36M utterances from 18K speakers.
In this repository, the original dataset is replaced with a combination of VoxCeleb1, VoxCeleb2, and LibriSpeech, all of which are freely available.
According to Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis, a model trained on these three datasets combined reaches about 10% EER, whereas the original internal dataset reaches about 5% EER.
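For background, the core of the first paper is the Generalized End-to-End (GE2E) loss: each utterance embedding in a batch of N speakers with M utterances each is compared against every speaker centroid, and a softmax over speakers pushes each embedding toward its own centroid. The NumPy sketch below only illustrates that computation; the function and variable names (ge2e_softmax_loss, embeddings, w, b) are illustrative and are not taken from this repository's code.

```python
import numpy as np

def ge2e_softmax_loss(embeddings, w=10.0, b=-5.0):
    """Minimal sketch of the GE2E softmax loss.

    embeddings: array of shape (N, M, D) -- N speakers, M utterances each,
                D-dimensional d-vectors.
    w, b: learnable scale/offset in the paper; fixed here for illustration.
    """
    N, M, _ = embeddings.shape
    # Centroid of each speaker over all of its utterances.
    centroids = embeddings.mean(axis=1)                                    # (N, D)
    # Centroid of a speaker excluding the utterance itself (used for the "own speaker" term).
    excl = (embeddings.sum(axis=1, keepdims=True) - embeddings) / (M - 1)  # (N, M, D)

    def cos(a, c):
        return np.sum(a * c, axis=-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(c, axis=-1) + 1e-6)

    # Scaled cosine similarity of every utterance to every speaker centroid.
    sim = np.empty((N, M, N))
    for j in range(N):          # true speaker of the utterance
        for k in range(N):      # candidate speaker centroid
            if j == k:
                sim[j, :, k] = w * cos(embeddings[j], excl[j]) + b
            else:
                sim[j, :, k] = w * cos(embeddings[j], centroids[k][None, :]) + b

    # Softmax loss: each utterance should be most similar to its own centroid.
    log_softmax = sim - np.log(np.exp(sim).sum(axis=-1, keepdims=True))
    own = log_softmax[np.arange(N)[:, None], np.arange(M)[None, :], np.arange(N)[:, None]]
    return -np.mean(own)
```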
Below are the links to the datasets.
Automatic downloading will be added to preprocess.py soon; until then, please download the datasets manually using the links below.
Use requirements.txt to install the Python packages.
pip install -r requirements.txt
- Python
- Tensorflow-gpu 1.6.0
- NVIDIA GPU + CUDA 9.0 + CuDNN 7.0
- VoxCeleb1 has a tree structure like the one below
wav_root - speaker_id - video_clip_id - 00001.wav
- 00002.wav
- ...
- VoxCeleb2 has a tree structure like the one below
wav_root - speaker_id - video_clip_id - 00001.m4a
- 00002.m4a
- ...
- LibriSpeech has a tree structure like the one below
To obtain the tree structure below, the LibriSpeech dataset has to be preprocessed before running preprocess.py.
Ref: https://github.com/mozilla/DeepSpeech/blob/master/bin/import_librivox.py
wav_root - speaker_id - speaker_id-001.wav
- speaker_id-002.wav
- ...
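As a quick sanity check before preprocessing, the expected layouts can be enumerated with a short script such as the sketch below. This is only an illustration under the assumed layouts above; preprocess.py has its own traversal logic, and list_utterances is a hypothetical helper.

```python
import os
import glob
from collections import Counter

def list_utterances(wav_root, ext="wav", nested=True):
    """Enumerate (speaker_id, path) pairs for the layouts described above.

    nested=True  -> VoxCeleb style: wav_root/speaker_id/video_clip_id/*.ext
    nested=False -> LibriSpeech style: wav_root/speaker_id/*.ext
    """
    pattern = "*/*/*." + ext if nested else "*/*." + ext
    for path in sorted(glob.glob(os.path.join(wav_root, pattern))):
        speaker_id = os.path.relpath(path, wav_root).split(os.sep)[0]
        yield speaker_id, path

# Example: count utterances per speaker in a VoxCeleb1-style tree.
if __name__ == "__main__":
    counts = Counter(spk for spk, _ in list_utterances("/home/ninas96211/data/vox1"))
    print(len(counts), "speakers,", sum(counts.values()), "utterances")
```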
- Run preprocess.py
python preprocess.py --in_dir /home/ninas96211/data/libri --pk_dir /home/ninas96211/data/libri_pickle --data_type libri
python preprocess.py --in_dir /home/ninas96211/data/vox1 --pk_dir /home/ninas96211/data/vox1_pickle --data_type vox1
python preprocess.py --in_dir /home/ninas96211/data/vox2 --pk_dir /home/ninas96211/data/vox2_pickle --data_type vox2
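preprocess.py turns the raw audio into pickled feature files under --pk_dir. The sketch below shows the general idea with librosa, using assumed parameters (16 kHz audio, 40 log-mel filterbanks, 25 ms windows with a 10 ms hop); the repository's actual feature extraction and pickle layout may differ.

```python
import pickle
import librosa
import numpy as np

def wav_to_logmel_pickle(wav_path, pk_path, sr=16000, n_mels=40):
    """Illustrative only: extract log-mel features from one wav and pickle them.

    The actual preprocess.py may use different parameters and a different file layout.
    """
    audio, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=400, hop_length=160, n_mels=n_mels)  # 25 ms windows, 10 ms hop
    logmel = np.log(mel + 1e-6).T  # (frames, n_mels)
    with open(pk_path, "wb") as f:
        pickle.dump(logmel, f)
```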
- Run train.py
python train.py --in_dir /home/ninas96211/data/wavs_pickle --ckpt_dir ./ckpt
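GE2E training consumes batches of N speakers with M utterances each. The sketch below shows one way such a batch could be sampled from pickled features; the filename convention, shapes, and the sample_ge2e_batch helper are assumptions for illustration, not the repository's exact code.

```python
import os
import glob
import pickle
import random
import numpy as np

def sample_ge2e_batch(pk_dir, n_speakers=4, n_utts=5, n_frames=160):
    """Sample an (N*M, n_frames, feat_dim) batch of fixed-length feature crops.

    Assumes one pickled (frames, feat_dim) array per utterance, with the speaker id
    encoded at the start of the filename, and at least n_frames frames per utterance --
    adjust to the real layout produced by preprocess.py.
    """
    by_speaker = {}
    for path in glob.glob(os.path.join(pk_dir, "*.pickle")):
        speaker = os.path.basename(path).split("_")[0]   # assumed naming scheme
        by_speaker.setdefault(speaker, []).append(path)

    speakers = random.sample(list(by_speaker), n_speakers)
    batch = []
    for spk in speakers:
        for path in random.sample(by_speaker[spk], n_utts):
            with open(path, "rb") as f:
                feats = pickle.load(f)                    # (frames, feat_dim)
            start = random.randint(0, max(0, len(feats) - n_frames))
            batch.append(feats[start:start + n_frames])
    return np.stack(batch)                                # (N*M, n_frames, feat_dim)
```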
- Using data_gen.sh, create a test directory in which the wavs are named [speaker_id]_[video_clip_id]_[wav_number].wav
bash data_gen.sh /home/ninas96211/data/test_wav/id10275/CVUXDNZzcmA/00002.wav ~/data/test_wav_set
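data_gen.sh essentially flattens a VoxCeleb-style path into that naming scheme. An equivalent Python sketch, assuming the same wav_root/speaker_id/video_clip_id/xxxxx.wav layout, would be:

```python
import os
import shutil

def flatten_test_wav(wav_path, out_dir):
    """Copy .../speaker_id/video_clip_id/00002.wav to
    out_dir/[speaker_id]_[video_clip_id]_[wav_number].wav (sketch only)."""
    clip_dir, wav_name = os.path.split(wav_path)
    speaker_dir, clip_id = os.path.split(clip_dir)
    speaker_id = os.path.basename(speaker_dir)
    os.makedirs(out_dir, exist_ok=True)
    shutil.copy(wav_path, os.path.join(
        out_dir, "{}_{}_{}".format(speaker_id, clip_id, wav_name)))
```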
- Run inference.py
python inference.py --in_wav1 /home/ninas96211/data/test_wav_set/id10309_pwfqGqgezH4_00004.wav --in_wav2 /home/ninas96211/data/test_wav_set/id10296_f_k09R8r_cA_00004.wav --ckpt_file ./ckpt/model.ckpt-35000
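inference.py scores the two wavs by comparing their speaker embeddings (d-vectors). The decision itself comes down to a cosine similarity between the two embeddings, as in the sketch below; extracting the embeddings from the checkpointed model is omitted here.

```python
import numpy as np

def verification_score(dvec1, dvec2):
    """Cosine similarity between two d-vectors; a higher score means the two
    utterances are more likely from the same speaker. The accept/reject
    threshold is tuned separately on held-out data."""
    dvec1 = dvec1 / (np.linalg.norm(dvec1) + 1e-6)
    dvec2 = dvec2 / (np.linalg.norm(dvec2) + 1e-6)
    return float(np.dot(dvec1, dvec2))
```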
- Similarity Matrix
- Speaker Verification Task
After training for 35,000 steps on the VoxCeleb1 dataset, the model captured the similarity between two wavs from the same video clip, but it was not successful in other cases. A model using all three datasets (LibriSpeech, VoxCeleb1, VoxCeleb2) is currently being trained, and the results will be posted soon.
- @jaekukang cloned this repository and trained the model successfully. He also found a bug in inference.py, which has since been fixed.