Implementation of a non-autoregressive Transformer-based neural network for Text-to-Speech (TTS).
This repo is based on the following papers:
- Neural Speech Synthesis with Transformer Network
- FastSpeech: Fast, Robust and Controllable Text to Speech
Our pre-trained LJSpeech models are compatible with the pre-trained vocoders from the MelGAN and WaveRNN repositories.
Being non-autoregressive, this Transformer model is:
- Robust: no repeated words or failed attention modes on challenging sentences.
- Fast: With no autoregression, predictions take a fraction of the time.
- Controllable: It is possible to control the speed of the generated utterance.
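The speed advantage can be illustrated with a toy sketch (not this repo's code, just the decoding pattern): an autoregressive decoder must generate mel frames one at a time, each conditioned on the last, while a forward model emits the whole spectrogram in a single call that parallelises over frames.

```python
import numpy as np

def decode_autoregressive(step_fn, n_frames, dim):
    """Autoregressive decoding: each mel frame is conditioned on the
    previous one, so generation needs n_frames sequential model calls."""
    frames = [np.zeros(dim)]
    for _ in range(n_frames):
        frames.append(step_fn(frames[-1]))
    return np.stack(frames[1:])

def decode_forward(batch_fn, n_frames, dim):
    """Non-autoregressive (forward) decoding: all frames are predicted
    in one call, so inference time barely grows with sequence length."""
    return batch_fn(n_frames, dim)
```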
These samples' spectrograms are converted to audio using the pre-trained WaveRNN and MelGAN vocoders.
Try it out on Colab:
Version | Colab Link |
---|---|
Forward + MelGAN | |
Forward + WaveRNN | |
Autoregressive + MelGAN | |
Autoregressive + WaveRNN | |
- 4/06/20: Added normalisation and pre-trained models compatible with the faster MelGAN vocoder.
Make sure you have:
- Python >= 3.6
Install espeak as the phonemizer backend (on macOS use brew):

```bash
sudo apt-get install espeak
```

Then install the rest with pip:

```bash
pip install -r requirements.txt
```
Read the individual scripts for more command line arguments.
You can directly use LJSpeech to create the training dataset.
- If training on LJSpeech, or if unsure, simply use one of the provided configurations (e.g. `config/melgan`).
- EDIT PATHS: in `data_config.yaml`, edit the paths to point at your dataset and log folders.
Prepare a dataset in the following format:

```
|- dataset_folder/
|   |- metadata.csv
|   |- wav/
|       |- file1.wav
|       |- ...
```

where `metadata.csv` has the following format: `wav_file_name|transcription`
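For illustration, the pipe-separated metadata can be parsed like this (the helper name is ours, not part of this repo's API):

```python
from pathlib import Path

def read_metadata(dataset_folder):
    """Parse metadata.csv into {wav_file_name: transcription}.

    Assumes the pipe-separated format described above; splitting on the
    first '|' only, so transcriptions may themselves contain pipes."""
    entries = {}
    metadata = Path(dataset_folder) / 'metadata.csv'
    for line in metadata.read_text(encoding='utf-8').splitlines():
        if not line.strip():
            continue  # skip blank lines
        wav_name, transcription = line.split('|', 1)
        entries[wav_name] = transcription
    return entries
```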
Change the `--config` argument based on the configuration of your choice.

```bash
python create_dataset.py --config config/melgan
```
```bash
python train_autoregressive.py --config config/melgan
```
First, use the autoregressive model to create the durations dataset:

```bash
python extract_durations.py --config config/melgan --binary --fix_jumps --fill_mode_next
```
This will add an additional folder to the dataset folder, containing the new datasets for training and validation of the forward model.
If the rhythm of the trained model is off, play around with the flags of this script to fix the durations.
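As a rough illustration of what duration extraction does (a simplified sketch, not this repo's implementation): each mel frame is assigned to the text position with the highest attention weight ("binary"), and non-monotonic jumps are snapped back onto a monotonic path, loosely in the spirit of the `--fix_jumps` flag.

```python
import numpy as np

def binary_durations(attention, fix_jumps=True):
    """Toy attention-to-duration extraction.

    attention: (mel_frames, text_len) alignment weights.
    Returns frames-per-symbol durations summing to mel_frames."""
    positions = attention.argmax(axis=1)  # hard ("binary") alignment
    if fix_jumps:
        # enforce a monotonic alignment that advances at most one symbol
        for i in range(1, len(positions)):
            prev = positions[i - 1]
            if positions[i] < prev or positions[i] > prev + 1:
                positions[i] = prev + 1 if positions[i] > prev else prev
    return np.bincount(positions, minlength=attention.shape[1])
```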
```bash
python train_forward.py --config /path/to/config_folder/
```
- Training and model settings can be configured in `model_config.yaml`.
- To resume training, simply use the same configuration files AND the `--session_name` flag, if any.
- To restart training, delete the weights and/or the logs from the logs folder with the training flag `--reset_dir` (both) or `--reset_logs`, `--reset_weights`.
We log some information that can be visualized with TensorBoard:

```bash
tensorboard --logdir /logs/directory/
```
Predict with either the Forward or Autoregressive model:

```python
from utils.config_manager import ConfigManager
from utils.audio import Audio

config_loader = ConfigManager('/path/to/config/', model_kind='forward')
audio = Audio(config_loader.config)
model = config_loader.load_model()
out = model.predict('Please, say something.')

# Convert spectrogram to wav (with griffin lim)
wav = audio.reconstruct_waveform(out['mel'].numpy().T)
```
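The reconstructed waveform is a float array; to listen to it, you can write it to disk. A minimal sketch using only the standard library — the helper name is ours, and 22050 Hz is the LJSpeech rate (take the actual rate from your audio config):

```python
import wave
import numpy as np

def save_wav(wav, path, sample_rate=22050):
    """Write a float waveform in [-1, 1] to a 16-bit PCM mono WAV file.

    22050 Hz matches LJSpeech; adjust to your dataset's audio config."""
    pcm = (np.clip(np.asarray(wav, dtype=np.float32), -1.0, 1.0)
           * 32767).astype('<i2')
    with wave.open(path, 'wb') as f:
        f.setnchannels(1)   # mono
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())
```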
Model URL | Commit | Vocoder Commit |
---|---|---|
ljspeech_melgan_forward_model | 1c1cb03 | aca5990 |
ljspeech_melgan_autoregressive_model_v2 | 1c1cb03 | aca5990 |
ljspeech_wavernn_forward_model | 1c1cb03 | 3595219 |
ljspeech_wavernn_autoregressive_model_v2 | 1c1cb03 | 3595219 |
ljspeech_wavernn_forward_model | d9ccee6 | 3595219 |
ljspeech_wavernn_autoregressive_model_v2 | d9ccee6 | 3595219 |
ljspeech_wavernn_autoregressive_model_v1 | 2f3a1b5 | 3595219 |
- Francesco Cardinale, github: cfrancesco
MelGAN and WaveRNN: the data normalization and the vocoders used for the samples are from these repositories.
Thanks to Erogol and the Mozilla TTS team for the lively exchange on the topic.
See LICENSE for details.