A Text-to-Speech Transformer in TensorFlow 2

Implementation of a non-autoregressive Transformer based neural network for Text-to-Speech (TTS).
This repo is based on the following papers:

Our pre-trained LJSpeech models are compatible with the pre-trained vocoders from:

Non-Autoregressive

Being non-autoregressive, this Transformer model is:

  • Robust: no repeats or failed attention modes on challenging sentences.
  • Fast: With no autoregression, predictions take a fraction of the time.
  • Controllable: It is possible to control the speed of the generated utterance.

πŸ”ˆ Samples

Can be found here.

The samples' spectrograms are converted to audio with the pre-trained WaveRNN and MelGAN vocoders.

Try it out on Colab:

| Version | Colab Link |
| --- | --- |
| Forward + MelGAN | Open In Colab |
| Forward + WaveRNN | Open In Colab |
| Autoregressive + MelGAN | Open In Colab |
| Autoregressive + WaveRNN | Open In Colab |

Updates

  • 4/06/20: Added normalisation and pre-trained models compatible with the faster MelGAN vocoder.

Installation

Make sure you have:

  • Python >= 3.6

Install espeak as the phonemizer backend (on macOS, use brew):

sudo apt-get install espeak

Then install the rest with pip:

pip install -r requirements.txt

Read the individual scripts for more command line arguments.

Dataset

You can directly use LJSpeech to create the training dataset.

Configuration

  • If training on LJSpeech, or if unsure, simply use one of
    • config/wavernn to create models compatible with WaveRNN
    • config/melgan for models compatible with MelGAN
  • EDIT PATHS: in data_config.yaml edit the paths to point at your dataset and log folders
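
    For illustration only, the path entries in data_config.yaml look something like the following (the key names here are assumptions; check the actual file shipped with the config folder for the real ones):

    ```yaml
    # data_config.yaml (illustrative key names -- verify against the real file)
    data_directory: /path/to/dataset_folder   # folder containing metadata.csv and wav/
    log_directory: /path/to/logs              # where checkpoints and TensorBoard logs go
    ```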

Custom dataset

Prepare a dataset in the following format:

|- dataset_folder/
|   |- metadata.csv
|   |- wav/
|       |- file1.wav
|       |- ...

where metadata.csv has the following format: wav_file_name|transcription
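
The layout above can be parsed with a few lines of Python. This is a sketch, not part of the repo's code; it assumes, as in LJSpeech, that the first column omits the .wav extension:

```python
from pathlib import Path


def read_metadata(dataset_folder: str):
    """Parse metadata.csv into (wav_path, transcription) pairs.

    Each line has the form: wav_file_name|transcription
    """
    dataset = Path(dataset_folder)
    pairs = []
    for line in (dataset / "metadata.csv").read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        # Split only on the first '|' so transcriptions may contain the character.
        wav_name, transcription = line.split("|", maxsplit=1)
        pairs.append((dataset / "wav" / f"{wav_name}.wav", transcription))
    return pairs
```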

Training

Change the --config argument based on the configuration of your choice.

Train Autoregressive Model

Create training dataset

python create_dataset.py --config config/melgan

Training

python train_autoregressive.py --config config/melgan

Train Forward Model

Compute alignment dataset

First, use the autoregressive model to create the durations dataset:

python extract_durations.py --config config/melgan --binary --fix_jumps --fill_mode_next

This will add an additional folder to the dataset folder containing the new duration datasets for training and validation of the forward model.
If the rhythm of the trained model is off, experiment with this script's flags to fix the durations.

Training

python train_forward.py --config /path/to/config_folder/

Training & Model configuration

  • Training and model settings can be configured in model_config.yaml

Resume or restart training

  • To resume training, simply use the same configuration files AND the same --session_name flag, if any.
  • To restart training, delete the weights and/or the logs from the logs folder using the training flags --reset_dir (deletes both), --reset_logs, or --reset_weights.

Monitor training

We log some information that can be visualized with TensorBoard:

tensorboard --logdir /logs/directory/

[TensorBoard demo screenshot]

Prediction

Predict with either the Forward or Autoregressive model

from utils.config_manager import ConfigManager
from utils.audio import Audio

# Point at the config folder of the model you trained; model_kind
# selects the model type ('forward' here).
config_loader = ConfigManager('/path/to/config/', model_kind='forward')
audio = Audio(config_loader.config)
model = config_loader.load_model()
out = model.predict('Please, say something.')

# Convert the predicted mel spectrogram to a waveform (with Griffin-Lim)
wav = audio.reconstruct_waveform(out['mel'].numpy().T)
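
To listen to the result, the Griffin-Lim waveform can be written to disk. A minimal sketch using only the standard library and NumPy; the 22050 Hz default is an assumption based on LJSpeech, so take the actual rate from config_loader.config:

```python
import wave

import numpy as np


def save_wav(wav: np.ndarray, path: str, sample_rate: int = 22050) -> None:
    """Write a float waveform in [-1, 1] to disk as 16-bit PCM."""
    pcm = (np.clip(wav, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)            # mono
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())
```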

Model Weights

| Model URL | Commit | Vocoder Commit |
| --- | --- | --- |
| ljspeech_melgan_forward_model | 1c1cb03 | aca5990 |
| ljspeech_melgan_autoregressive_model_v2 | 1c1cb03 | aca5990 |
| ljspeech_wavernn_forward_model | 1c1cb03 | 3595219 |
| ljspeech_wavernn_autoregressive_model_v2 | 1c1cb03 | 3595219 |
| ljspeech_wavernn_forward_model | d9ccee6 | 3595219 |
| ljspeech_wavernn_autoregressive_model_v2 | d9ccee6 | 3595219 |
| ljspeech_wavernn_autoregressive_model_v1 | 2f3a1b5 | 3595219 |

Maintainers

Special thanks

MelGAN and WaveRNN: data normalization and samples' vocoders are from these repos.

Erogol and the Mozilla TTS team for the lively exchange on the topic.

Copyright

See LICENSE for details.
