shivansh-mishraa/Text-Independent-Speaker-Verification

 
 


Text Independent Speaker Verification Using GE2E Loss

TensorFlow implementation of text-independent speaker verification, based on Generalized End-to-End Loss for Speaker Verification and Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis.

Data

Both papers above used an internal dataset consisting of 36M utterances from 18K speakers. In this repository, that dataset is replaced with the combination of VoxCeleb1, VoxCeleb2, and LibriSpeech, all of which are freely available. According to Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis, the combination of these three datasets gives 10% EER, whereas the original internal dataset gives 5% EER.
Automatic downloading will be added to preprocess.py soon; until then, download the datasets manually using the links below.

LibriSpeech

VoxCeleb1,2

Prerequisites

Use requirements.txt to install the Python packages.

pip install -r requirements.txt

  • Python
  • Tensorflow-gpu 1.6.0
  • NVIDIA GPU + CUDA 9.0 + CuDNN 7.0

Training

1. Preprocess wav data into spectrograms

  • Each VoxCeleb1 speaker directory has a tree structure like below
wav_root - speaker_id - video_clip_id - 00001.wav
                                      - 00002.wav
                                      - ...
                                      
  • Each VoxCeleb2 speaker directory has a tree structure like below
wav_root - speaker_id - video_clip_id - 00001.m4a
                                      - 00002.m4a
                                      - ...
                                      
  • LibriSpeech has a tree structure like below
wav_root - speaker_id - speaker_id-001.wav
                      - speaker_id-002.wav
                      - ...
  • Run preprocess.py
python preprocess.py --in_dir /home/ninas96211/data/libri --pk_dir /home/ninas96211/data/pickle --data_type libri
python preprocess.py --in_dir /home/ninas96211/data/vox1 --pk_dir /home/ninas96211/data/pickle --data_type vox1
python preprocess.py --in_dir /home/ninas96211/data/vox2 --pk_dir /home/ninas96211/data/pickle --data_type vox2
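The feature extraction performed by preprocess.py can be illustrated with a minimal log-magnitude STFT sketch. This is an assumption-laden stand-in, not the repo's own code: the actual window, hop, FFT size, and any mel filtering used by preprocess.py may differ.

```python
import numpy as np

def log_spectrogram(wav, n_fft=512, hop=160, win=400):
    """Minimal sketch of wav -> log-magnitude spectrogram.

    Illustrative only: frame the waveform, apply a Hann window,
    take the FFT magnitude, and compress with a log. The repo's
    preprocess.py may use different parameters or mel features.
    """
    window = np.hanning(win)
    n_frames = 1 + (len(wav) - win) // hop
    frames = np.stack([wav[i * hop:i * hop + win] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, n=n_fft))   # (n_frames, n_fft//2 + 1)
    return np.log(spec + 1e-6)                    # avoid log(0)

# One second of fake 16 kHz audio -> (98, 257) feature matrix
feats = log_spectrogram(np.random.randn(16000))
```

With a 400-sample (25 ms) window and 160-sample (10 ms) hop at 16 kHz, one second of audio yields 98 frames of 257 frequency bins each.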

2. Train

  • Run train.py
python train.py --in_dir /home/ninas96211/data/wavs_pickle --ckpt_dir ./ckpt
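The training objective is the GE2E loss from the first paper: each batch holds N speakers with M utterances each, and every utterance embedding is scored against every speaker centroid. Below is a hedged NumPy sketch of the similarity matrix and the softmax variant of the loss; the scaling parameters w and b are learned during training in the paper (initialized near (10, -5)) but are fixed here for illustration, and this is not the repo's TensorFlow implementation.

```python
import numpy as np

def _unit(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def ge2e_similarity(embeds, w=10.0, b=-5.0):
    """Sketch of the GE2E similarity matrix.

    embeds: (N speakers, M utterances, d) embeddings.
    Returns S of shape (N, M, N): scaled cosine similarity of each
    utterance to each speaker centroid. For an utterance's own
    speaker, its centroid excludes that utterance, as in the paper.
    """
    N, M, d = embeds.shape
    centroids = embeds.mean(axis=1)                               # (N, d)
    # Leave-one-out centroids for the own-speaker case
    excl = (embeds.sum(axis=1, keepdims=True) - embeds) / (M - 1) # (N, M, d)
    S = np.einsum('nmd,kd->nmk', _unit(embeds), _unit(centroids))
    for j in range(N):
        S[j, :, j] = np.sum(_unit(embeds[j]) * _unit(excl[j]), axis=-1)
    return w * S + b

def ge2e_loss(S):
    """Softmax variant: push each utterance toward its own centroid."""
    N, M, _ = S.shape
    logsum = np.log(np.exp(S).sum(axis=2))                        # (N, M)
    pos = S[np.arange(N)[:, None], np.arange(M)[None, :],
            np.arange(N)[:, None]]                                # S[j, i, j]
    return float((logsum - pos).mean())

S = ge2e_similarity(np.random.randn(4, 5, 8))   # 4 speakers, 5 utts each
loss = ge2e_loss(S)
```

Minimizing this loss pulls embeddings toward their own speaker's centroid and away from all other centroids, which is what makes the trained embeddings usable for verification.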

3. Infer

  • Using data_gen.sh, create a test directory where wavs are named like [speaker_id]_[video_clip_id]_[wav_number].wav
bash data_gen.sh /home/ninas96211/data/test_wav/id10275/CVUXDNZzcmA/00002.wav ~/data/test_wav_set
  • Run inference.py
python inference.py --in_wav1 /home/ninas96211/data/test_wav_set/id10309_pwfqGqgezH4_00004.wav --in_wav2 /home/ninas96211/data/test_wav_set/id10296_f_k09R8r_cA_00004.wav --ckpt_file ./ckpt/model.ckpt-35000

Results

  • Similarity Matrix


  • Speaker Verification Task

After training for 35,000 steps on the vox1 dataset, the model captured the similarity between two wavs from the same video clip; in other cases, however, it was not successful. A model using all three datasets (libri, vox1, vox2) is currently training, and its results will be posted soon.

Current Issues

  • @jaekukang cloned this repository and trained the model successfully. He found a bug in inference.py, which has since been fixed.

About

Text Independent Speaker Verification Using GE2E Loss
