A Text-to-Speech Transformer in TensorFlow 2

Implementation of an autoregressive Transformer-based neural network for Text-to-Speech (TTS).
This repo is based on the following paper:

Spectrograms produced with LJSpeech and the standard data configuration from this repo are compatible with WaveRNN.

πŸ”ˆ Samples

Can be found here.

The spectrograms for these samples are converted to audio using the pre-trained WaveRNN vocoder.

The TTS weights used for these samples can be found here.

Check out the notebooks folder for predictions with TransformerTTS and WaveRNN, or just try out our Colab notebook.



Installation

Make sure you have:

  • Python >= 3.6

Install espeak as the phonemizer backend (for macOS use brew, as shown below):

sudo apt-get install espeak
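
A rough macOS equivalent, assuming Homebrew provides an espeak formula:

brew install espeak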

Then install the rest with pip:

pip install -r requirements.txt

Read the individual scripts for more command line arguments.

Dataset

You can directly use LJSpeech to create the training dataset.

Configuration

  • If training LJSpeech, or if unsure, simply use config/standard
  • EDIT PATHS: in data_config.yaml, edit the paths to point at your dataset and log folders (see the sketch after this list)
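
A minimal sketch of the kind of entries to point at your own folders; the key names below are illustrative assumptions, so match them against the actual keys in data_config.yaml:

# data_config.yaml (illustrative key names, check your own file)
data_directory: /path/to/dataset_folder
log_directory: /path/to/logs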

Custom dataset

Prepare a dataset in the following format:

|- dataset_folder/
|   |- metadata.csv
|   |- wav/
|       |- file1.wav
|       |- ...

where metadata.csv has the following format: wav_file_name|transcription
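
For example, with hypothetical file names and transcriptions (LJSpeech's own metadata.csv lists the wav file name without the .wav extension), metadata.csv could contain:

file1|This is the transcription of the first audio file.
file2|And this is the transcription of the second one.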

Create training dataset

python create_dataset.py --config config/standard

Training

python train.py --config config/standard

Training & Model configuration

  • Training and model settings can be configured in model_config.yaml

Resume or restart training

  • To resume training, simply use the same configuration files AND the same --session_name flag, if any
  • To restart training, delete the weights and/or the logs from the log folder using the training flags --reset_dir (deletes both), --reset_logs, or --reset_weights (example invocations below)
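
For example (the session name is hypothetical):

# Resume a previous run: same config, same session name
python train.py --config config/standard --session_name my_session

# Restart from scratch, deleting both weights and logs
python train.py --config config/standard --reset_dir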

Monitor training

We log some information that can be visualized with TensorBoard:

tensorboard --logdir /logs/directory/

Prediction

from utils.config_manager import ConfigManager
from utils.audio import reconstruct_waveform

# Load the configuration and the corresponding trained model
config_loader = ConfigManager('config/standard')
model = config_loader.load_model()

# Predict a mel spectrogram from text
out = model.predict('Please, say something.')

# Convert spectrogram to wav (with griffin lim)
wav = reconstruct_waveform(out['mel'].numpy().T, config=config_loader.config)
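
To listen to the Griffin-Lim reconstruction, the waveform can be written to disk. A minimal sketch, assuming the soundfile package is installed and the standard LJSpeech sampling rate of 22050 Hz (take the actual value from your data configuration):

import soundfile as sf

# 22050 Hz is an assumption (LJSpeech default); use the sampling rate from your config
sf.write('sample.wav', wav, samplerate=22050)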

Maintainers

Special thanks

WaveRNN: we took the data processing from here and use their vocoder to produce the samples.
Erogol: for the lively exchange on TTS topics.

Copyright

See LICENSE for details.
