Tacotron 2 - PyTorch implementation with faster-than-realtime inference
Tacotron 2 (without WaveNet)

PyTorch implementation of Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions.

This implementation includes distributed and fp16 support and uses the LJSpeech dataset.

Distributed and FP16 support relies on work by Christian Sarofeen and NVIDIA's Apex Library.

Visit our website for audio samples using our published Tacotron 2 and WaveGlow models.

Figure: alignment, predicted mel spectrogram, and target mel spectrogram.

Pre-requisites

  1. NVIDIA GPU + CUDA cuDNN

Setup

  1. Download and extract the LJ Speech dataset
  2. Clone this repo: git clone https://github.com/NVIDIA/tacotron2.git
  3. Enter the repo directory: cd tacotron2
  4. Initialize submodule: git submodule init; git submodule update
  5. Update .wav paths: sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt
    • Alternatively, set load_mel_from_disk=True in hparams.py and update mel-spectrogram paths
  6. Install PyTorch 1.0
  7. Install python requirements or build docker image
    • Install python requirements: pip install -r requirements.txt
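
Step 5's sed one-liner simply rewrites the DUMMY placeholder in each filelist to point at the extracted dataset. For reference, here is a minimal Python sketch of the same substitution (the function names and the sample filelist line are illustrative, not part of the repo):

```python
from pathlib import Path

def retarget_filelist(text: str, wav_dir: str) -> str:
    """Replace the DUMMY placeholder with the real wav directory,
    mirroring: sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt"""
    return text.replace("DUMMY", wav_dir)

def retarget_all(filelist_dir: str, wav_dir: str) -> None:
    # Rewrite every .txt filelist in place.
    for path in Path(filelist_dir).glob("*.txt"):
        path.write_text(retarget_filelist(path.read_text(), wav_dir))
```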

Training

  1. python train.py --output_directory=outdir --log_directory=logdir
  2. (OPTIONAL) tensorboard --logdir=outdir/logdir

Training using a pre-trained model

Training from a pre-trained model can lead to faster convergence.
By default, the dataset-dependent text embedding layers are ignored.

  1. Download our published Tacotron 2 model
  2. python train.py --output_directory=outdir --log_directory=logdir -c tacotron2_statedict.pt --warm_start
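
Conceptually, --warm_start loads the pre-trained weights while skipping the layers named in the ignore_layers hyperparameter (hparams.py), so dataset-dependent layers keep their freshly initialized values. A minimal sketch of that merge, using plain dicts in place of torch tensors (the function name and the example key "embedding.weight" are illustrative assumptions, not the repo's actual code):

```python
def warm_start_state_dict(pretrained: dict, model_state: dict, ignore_layers) -> dict:
    """Merge pretrained weights into a fresh model state dict,
    skipping every key listed in ignore_layers."""
    # Drop the dataset-dependent layers from the pretrained checkpoint.
    filtered = {k: v for k, v in pretrained.items() if k not in ignore_layers}
    merged = dict(model_state)   # ignored layers keep their fresh values
    merged.update(filtered)      # everything else comes from the checkpoint
    return merged
```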

Multi-GPU (distributed) and FP16 Training

  1. python -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True,fp16_run=True
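
The multiproc wrapper fans a single training invocation out to one worker process per GPU, giving each worker the same command line plus its own rank. A simplified sketch of that command construction (building, not launching, the per-worker commands; the flag names --rank and --group_name are assumptions based on this simplification, not verified against multiproc.py):

```python
import sys

def build_worker_commands(script_args, n_gpus, group_name="group_0"):
    """Build one train.py command line per GPU, each with a distinct rank."""
    cmds = []
    for rank in range(n_gpus):
        cmds.append([sys.executable, "train.py", *script_args,
                     f"--rank={rank}", f"--group_name={group_name}"])
    return cmds
```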

Inference demo

  1. Download our published Tacotron 2 model
  2. Download our published WaveGlow model
  3. jupyter notebook --ip=127.0.0.1 --port=31337
  4. Load inference.ipynb

N.B. When performing mel-spectrogram-to-audio synthesis, make sure Tacotron 2 and the mel decoder (e.g. WaveGlow) were trained on the same mel-spectrogram representation.
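
"Same mel-spectrogram representation" means the two models must agree on the audio hyperparameters that define the spectrogram. A small illustrative check (the key names follow the conventions in hparams.py, e.g. n_mel_channels and hop_length, but this helper itself is an assumption, not part of the repo):

```python
# Hyperparameters that define the mel-spectrogram representation.
MEL_KEYS = ("sampling_rate", "filter_length", "hop_length", "win_length",
            "n_mel_channels", "mel_fmin", "mel_fmax")

def mel_config_mismatches(taco_hparams: dict, vocoder_hparams: dict) -> dict:
    """Return {key: (tacotron_value, vocoder_value)} for every
    spectrogram hyperparameter on which the two models disagree."""
    return {k: (taco_hparams.get(k), vocoder_hparams.get(k))
            for k in MEL_KEYS
            if taco_hparams.get(k) != vocoder_hparams.get(k)}
```

An empty result means the checkpoints are compatible on these settings; any entry flags a representation mismatch before you spend time on a garbled synthesis run.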

Related repos

WaveGlow: a faster-than-real-time flow-based generative network for speech synthesis.

nv-wavenet: a faster-than-real-time WaveNet implementation.

Acknowledgements

This implementation uses code from repos by Keith Ito and Prem Seetharaman, as described in our code.

We are inspired by Ryuichi Yamamoto's Tacotron PyTorch implementation.

We are thankful to the Tacotron 2 paper authors, especially Jonathan Shen, Yuxuan Wang and Zongheng Yang.
