Tacotron 2 - PyTorch implementation with faster-than-realtime inference

Malayalam TTS using Tacotron 2, WaveGlow and Transfer Learning


PyTorch implementation of Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions.

This implementation includes distributed and automatic mixed precision support and uses the LJSpeech dataset for the base model.

The Malayalam model uses IndicTTS Malayalam Female Dataset as its dataset.

Distributed and Automatic Mixed Precision support relies on NVIDIA's Apex and AMP.

Visit NVIDIA's website for audio samples generated with their published base English Tacotron 2 and WaveGlow models.

[Figure: alignment, predicted mel spectrogram, and target mel spectrogram]

Pre-requisites

  1. NVIDIA GPU + CUDA + cuDNN

Setup

  1. Download IndicTTS Malayalam Female Dataset
  2. Clone this repo: git clone https://github.com/parapsychic/malayalam-tacotron2
  3. Enter the repo: cd malayalam-tacotron2
  4. Initialize submodule: git submodule init; git submodule update
  5. Update .wav paths: sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt
    • Alternatively, set load_mel_from_disk=True in hparams.py and update mel-spectrogram paths
  6. Install PyTorch 1.0
  7. Install Apex
  8. Install python requirements or build docker image
    • Install python requirements: pip install -r requirements.txt
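If sed is unavailable (e.g. on Windows), step 5's path substitution can be done with a short Python script instead. A minimal sketch, assuming the filelists still contain the stock DUMMY placeholder; the wav directory shown is a placeholder you should change:

```python
import glob

def update_wav_paths(filelist_dir="filelists", wav_dir="ljs_dataset_folder/wavs"):
    """Replace the DUMMY placeholder in every filelist with the real .wav directory,
    mirroring: sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt"""
    for path in glob.glob(f"{filelist_dir}/*.txt"):
        with open(path, encoding="utf-8") as f:
            text = f.read()
        with open(path, "w", encoding="utf-8") as f:
            f.write(text.replace("DUMMY", wav_dir))
```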

Alternatively, run the Malayalam_Tacotron.ipynb file after loading the dataset.

  • The below training instructions are for general Tacotron2 training. We used a pre-trained English Tacotron2 model (given below) to train our model.
  • The code to train the model can be found in the Jupyter Notebook linked above.
  • If you don't want to train your own model, you can download our pre-trained Malayalam Text-to-Speech model.

Training

  1. python train.py --output_directory=outdir --log_directory=logdir
  2. (OPTIONAL) tensorboard --logdir=outdir/logdir

Training using a pre-trained model

Training using a pre-trained model can lead to faster convergence.
By default, the dataset-dependent text embedding layers are ignored.

  1. Download our published Tacotron 2 model
  2. python train.py --output_directory=outdir --log_directory=logdir -c tacotron2_statedict.pt --warm_start
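The effect of --warm_start is to drop the dataset-dependent parameters before loading the checkpoint, since the text embedding's size depends on the symbol set. A minimal sketch of that filtering step on a plain state dict; the key name 'embedding.weight' follows the upstream Tacotron 2 convention and should be treated as an assumption here:

```python
def warm_start_filter(state_dict, ignore_layers=("embedding.weight",)):
    """Drop dataset-dependent parameters (e.g. the text embedding) so the
    remaining weights can be loaded into a model with a different symbol set."""
    return {k: v for k, v in state_dict.items() if k not in ignore_layers}
```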

Multi-GPU (distributed) and Automatic Mixed Precision Training

  1. python -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True,fp16_run=True
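The --hparams flag takes comma-separated key=value overrides like the string above. The repo's own hparams parser is authoritative; the following is only an illustrative sketch of how such a string maps to typed values:

```python
def parse_hparams(s):
    """Parse a 'key=value,key=value' override string into a dict,
    coercing booleans, ints, and floats where possible."""
    out = {}
    if not s:
        return out
    for pair in s.split(","):
        key, _, raw = pair.partition("=")
        if raw in ("True", "False"):
            val = raw == "True"
        else:
            try:
                val = int(raw)
            except ValueError:
                try:
                    val = float(raw)
                except ValueError:
                    val = raw  # leave unrecognized values as strings
        out[key.strip()] = val
    return out
```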

Inference demo

  1. Download our published Tacotron 2 model
  2. Download our published WaveGlow model
  3. Run the Inference part of Malayalam_Tacotron.ipynb.

N.B. When performing mel-spectrogram-to-audio synthesis, make sure Tacotron 2 and the Mel decoder were trained on the same mel-spectrogram representation.
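One way to honor that note is to compare the mel-related hyperparameters of the two models before synthesis. A hedged sketch, assuming the hyperparameters are available as plain dicts; the key names mirror common Tacotron 2 / WaveGlow settings and are assumptions, not the repo's exact API:

```python
# Settings that define the mel-spectrogram representation (assumed key names).
MEL_KEYS = ("sampling_rate", "filter_length", "hop_length", "win_length",
            "n_mel_channels", "mel_fmin", "mel_fmax")

def check_mel_compat(taco_hparams, vocoder_hparams, keys=MEL_KEYS):
    """Return the mel-related settings on which the two models disagree.
    An empty list means the representations match on the checked keys."""
    return [k for k in keys
            if taco_hparams.get(k) != vocoder_hparams.get(k)]
```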

Related repos

WaveGlow: a faster-than-real-time flow-based generative network for speech synthesis.

nv-wavenet: faster-than-real-time WaveNet.

Acknowledgements

This repo was forked from Nvidia's Tacotron2 implementation. The model was retrained to generate Malayalam speech output.

This implementation uses code from the repos of Keith Ito and Prem Seetharaman, as described in our code.

We are inspired by Ryuichi Yamamoto's Tacotron PyTorch implementation.

We are thankful to the Tacotron 2 paper authors, especially Jonathan Shen, Yuxuan Wang and Zongheng Yang.
