Tacotron 2 - PyTorch implementation with faster-than-realtime inference

Tacotron 2 (without wavenet)

PyTorch implementation of Natural TTS Synthesis By Conditioning Wavenet On Mel Spectrogram Predictions.

This implementation includes distributed and fp16 support and uses the LJSpeech dataset.

Distributed and FP16 support relies on work by Christian Sarofeen and NVIDIA's Apex Library.

Visit our website for audio samples using our published Tacotron 2 and WaveGlow models.

(Image: alignment, predicted mel spectrogram, and target mel spectrogram)




Setup

  1. Download and extract the LJ Speech dataset
  2. Clone this repo: git clone https://github.com/NVIDIA/tacotron2.git
  3. cd into this repo: cd tacotron2
  4. Initialize submodules: git submodule init; git submodule update
  5. Update .wav paths: sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt
    • Alternatively, set load_mel_from_disk=True in hparams.py and update the mel-spectrogram paths
  6. Install PyTorch 1.0
  7. Install python requirements or build docker image
    • Install python requirements: pip install -r requirements.txt
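The sed command in step 5 simply rewrites the placeholder audio root in each filelist line. For readers unfamiliar with sed, here is a hedged pure-Python equivalent; the DUMMY placeholder and the `path|transcript` filelist format come from the steps above, while `replace_audio_root` is a helper name introduced here for illustration:

```python
def replace_audio_root(lines, new_root, placeholder="DUMMY"):
    """Rewrite the placeholder audio directory in filelist lines.

    Each filelist line has the form "<wav path>|<transcript>"; only the
    path portion is touched, mirroring the sed substitution above.
    """
    out = []
    for line in lines:
        path, sep, text = line.partition("|")
        out.append(path.replace(placeholder, new_root) + sep + text)
    return out

filelist = ["DUMMY/LJ001-0001.wav|Printing, in the only sense."]
print(replace_audio_root(filelist, "ljs_dataset_folder/wavs")[0])
# -> ljs_dataset_folder/wavs/LJ001-0001.wav|Printing, in the only sense.
```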


Training

  1. python train.py --output_directory=outdir --log_directory=logdir
  2. (OPTIONAL) tensorboard --logdir=outdir/logdir

Training using a pre-trained model

Training using a pre-trained model can lead to faster convergence.
By default, the dataset-dependent text embedding layers are ignored.

  1. Download our published Tacotron 2 model
  2. python train.py --output_directory=outdir --log_directory=logdir -c tacotron2_statedict.pt --warm_start
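Warm-starting as described above amounts to loading the checkpoint's state dict while dropping the ignored (dataset-dependent) layers, so those layers keep their fresh initialization. A minimal sketch of the idea, with plain dicts standing in for PyTorch state dicts; `filter_state_dict` and the layer prefix are illustrative names, not the repo's exact code:

```python
def filter_state_dict(state_dict, ignore_layers):
    """Drop entries whose key starts with any ignored prefix, so those
    layers keep their fresh initialization instead of checkpoint weights."""
    return {k: v for k, v in state_dict.items()
            if not any(k.startswith(p) for p in ignore_layers)}

checkpoint = {"embedding.weight": "pretrained", "decoder.weight": "pretrained"}
warm = filter_state_dict(checkpoint, ignore_layers=["embedding."])
print(sorted(warm))  # -> ['decoder.weight']
```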

Multi-GPU (distributed) and FP16 Training

  1. python -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True,fp16_run=True
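The --hparams flag above carries comma-separated key=value overrides. A hedged sketch of how such a string can be turned into typed overrides; `parse_hparams_overrides` is a name introduced here for illustration, not the repo's actual parser:

```python
def parse_hparams_overrides(s):
    """Parse "a=True,b=1" into {"a": True, "b": 1}, coercing booleans
    and integers where possible and leaving other values as strings."""
    overrides = {}
    for pair in s.split(","):
        key, _, value = pair.partition("=")
        if value in ("True", "False"):
            overrides[key] = value == "True"
        else:
            try:
                overrides[key] = int(value)
            except ValueError:
                overrides[key] = value
    return overrides

print(parse_hparams_overrides("distributed_run=True,fp16_run=True"))
# -> {'distributed_run': True, 'fp16_run': True}
```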

Inference demo

  1. Download our published Tacotron 2 model
  2. Download our published WaveGlow model
  3. jupyter notebook --ip=127.0.0.1 --port=31337
  4. Load inference.ipynb

N.B. When performing mel-spectrogram-to-audio synthesis, make sure Tacotron 2 and the mel decoder were trained on the same mel-spectrogram representation.
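One way to guard against the mismatch mentioned above is to compare the STFT/mel parameters both models were trained with before synthesizing. A minimal sketch: the parameter names follow common Tacotron 2 hyperparameters (filter_length, hop_length, etc.), and `check_mel_compat` is an illustrative helper, not part of the repo:

```python
MEL_KEYS = ("sampling_rate", "filter_length", "hop_length",
            "win_length", "n_mel_channels", "mel_fmin", "mel_fmax")

def check_mel_compat(taco_hparams, vocoder_hparams):
    """Raise if the two models disagree on any mel-spectrogram parameter."""
    mismatched = [k for k in MEL_KEYS
                  if taco_hparams.get(k) != vocoder_hparams.get(k)]
    if mismatched:
        raise ValueError(f"mel parameter mismatch: {mismatched}")

taco = {"sampling_rate": 22050, "filter_length": 1024, "hop_length": 256,
        "win_length": 1024, "n_mel_channels": 80,
        "mel_fmin": 0.0, "mel_fmax": 8000.0}
check_mel_compat(taco, dict(taco))  # identical settings pass silently
```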

Related repos

WaveGlow: a faster-than-real-time flow-based generative network for speech synthesis.

nv-wavenet: a faster-than-real-time WaveNet.


Acknowledgements

This implementation uses code from the following repos: Keith Ito and Prem Seetharaman, as described in our code.

We are inspired by Ryuichi Yamamoto's Tacotron PyTorch implementation.

We are thankful to the Tacotron 2 paper authors, especially Jonathan Shen, Yuxuan Wang, and Zongheng Yang.
