Deepvoice3_pytorch


PyTorch implementation of convolutional networks-based text-to-speech synthesis models:

  1. arXiv:1710.07654: Deep Voice 3: 2000-Speaker Neural Text-to-Speech.
  2. arXiv:1710.08969: Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention.

Audio samples are available at https://r9y9.github.io/deepvoice3_pytorch/.

Highlights

  • Convolutional sequence-to-sequence model with attention for text-to-speech synthesis
  • Multi-speaker and single speaker versions of DeepVoice3
  • Audio samples and pre-trained models
  • Preprocessor for LJSpeech (en), JSUT (jp) and VCTK datasets
  • Language-dependent frontend text processor for English and Japanese

Samples

Pretrained models

URL   Model                     Data      Hyper parameters                                         Git commit  Steps
link  DeepVoice3                LJSpeech  builder=deepvoice3,preset=deepvoice3_ljspeech            4357976     210k
link  Nyanko                    LJSpeech  builder=nyanko,preset=nyanko_ljspeech                    ba59dc7     585k
link  Multi-speaker DeepVoice3  VCTK      builder=deepvoice3_multispeaker,preset=deepvoice3_vctk   0421749     300k + 300k

See the "Synthesize from a checkpoint" section in the README for how to generate speech samples. Please make sure that you are on the specific git commit noted above.

Notes on hyper parameters

  • Default hyper parameters, used during preprocessing/training/synthesis stages, are tuned for English TTS on the LJSpeech dataset. You will have to change some of the parameters if you want to try other datasets. See hparams.py for details.
  • builder specifies which model you want to use: deepvoice3, deepvoice3_multispeaker [1] and nyanko [2] are supported.
  • presets represent hyper parameters known from my experiments to work well for a particular dataset/model. Before searching for your own best parameters, I recommend trying these presets by setting preset=${name}. E.g., for LJSpeech, you can try either
python train.py --data-root=./data/ljspeech --checkpoint-dir=checkpoints_deepvoice3 \
    --hparams="builder=deepvoice3,preset=deepvoice3_ljspeech" \
    --log-event-path=log/deepvoice3_preset

or

python train.py --data-root=./data/ljspeech --checkpoint-dir=checkpoints_nyanko \
    --hparams="builder=nyanko,preset=nyanko_ljspeech" \
    --log-event-path=log/nyanko_preset
  • The hyper parameters described in the DeepVoice3 paper for the single-speaker model didn't work for the LJSpeech dataset, so I changed a few things: added dilated convolutions, more channels, more layers, a guided attention loss, etc. See the code for details. The changes also apply to the multi-speaker model.
  • Multiple attention layers are hard to learn. Empirically, one or two (first and last) attention layers seem to be enough.
  • With guided attention (see https://arxiv.org/abs/1710.08969), alignments become monotonic more quickly and reliably, even with multiple attention layers. I can confirm that five attention layers become monotonic with guided attention, though I did not see speech quality improvements.
  • Binary divergence (described in https://arxiv.org/abs/1710.08969) seems to stabilize training, particularly for deep (> 10 layers) networks.
  • Adam with step lr decay works. However, for deeper networks, I find Adam + the Noam lr scheduler more stable.
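The Noam scheduler mentioned above (popularized by "Attention Is All You Need") combines a linear warmup with inverse-square-root decay. A minimal sketch, where the function name and default warmup length are illustrative rather than the repo's exact implementation:

```python
def noam_learning_rate(step, init_lr=1.0, warmup_steps=4000):
    """Linear warmup for `warmup_steps` steps, then decay as step**-0.5.

    The schedule peaks at roughly `init_lr` when step == warmup_steps.
    """
    step = max(step, 1)  # avoid division by zero at step 0
    scale = warmup_steps ** 0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
    return init_lr * scale

# Rises during warmup, peaks near warmup_steps, then decays:
lrs = [noam_learning_rate(s) for s in (100, 4000, 100000)]
```

The warmup keeps early updates small while Adam's moment estimates are still noisy, which is why it tends to help deeper networks.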

Requirements

Installation

Please install the packages listed above first, and then:

git clone https://github.com/r9y9/deepvoice3_pytorch
cd deepvoice3_pytorch
pip install -e ".[train]"

If you want Japanese text processing frontend, install additional dependencies by:

pip install -e ".[jp]"

Getting started

0. Download dataset

1. Preprocessing

Preprocessing can be done by preprocess.py. Usage is:

python preprocess.py ${dataset_name} ${dataset_path} ${out_dir}

Supported ${dataset_name}s for now are

  • ljspeech (en, single speaker)
  • vctk (en, multi-speaker)
  • jsut (jp, single speaker)
  • nikl_m (ko, multi-speaker)
  • nikl_s (ko, single speaker)

Suppose you want to preprocess the LJSpeech dataset and have it in ~/data/LJSpeech-1.0; then you can preprocess the data by:

python preprocess.py ljspeech ~/data/LJSpeech-1.0/ ./data/ljspeech

When this is done, you will see extracted features (mel-spectrograms and linear spectrograms) in ./data/ljspeech.
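As a quick sanity check after preprocessing, you can load the extracted .npy files and confirm they are 2-D (frames × feature dimension) arrays. The helper below is a hypothetical sketch; adjust the directory and glob pattern to whatever preprocess.py actually wrote:

```python
import numpy as np
from pathlib import Path

def check_features(out_dir, pattern="*.npy"):
    """Hypothetical post-preprocessing sanity check.

    Every saved feature file should be a 2-D array (frames x feature_dim),
    e.g. (T, num_mels) for mel-spectrograms. Returns the shapes found.
    """
    shapes = []
    for path in sorted(Path(out_dir).glob(pattern)):
        arr = np.load(path)
        assert arr.ndim == 2, f"{path} is not a 2-D spectrogram"
        shapes.append(arr.shape)
    return shapes
```

For example, `check_features("./data/ljspeech")` returns one shape tuple per feature file it finds.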

2. Training

Basic usage of train.py is:

python train.py --data-root=${data-root} --hparams="parameters you want to override"

Suppose you want to build a DeepVoice3-style model on the LJSpeech dataset with default hyper parameters; then you can train your model by:

python train.py --data-root=./data/ljspeech/ --hparams="builder=deepvoice3,preset=deepvoice3_ljspeech"

Model checkpoints (.pth) and alignments (.png) are saved in the ./checkpoints directory every 5000 steps by default.

If you are building a Japanese TTS model, then for example:

python train.py --data-root=./data/jsut --hparams="frontend=jp,builder=deepvoice3,preset=deepvoice3_ljspeech"

frontend=jp tells the training script to use the Japanese text processing frontend. The default is en, which uses the English text processing frontend.

Note that there are many hyper parameters and design choices. Some are configurable via hparams.py, and some are hardcoded in the source (e.g., the dilation factor for each convolution layer). If you find better hyper parameters, please let me know!

3. NIKL

Please check this in advance and follow the commands below.

python preprocess.py nikl_s ${your_nikl_root_path} data/nikl_s

python train.py --data-root=./data/nikl_s --checkpoint-dir checkpoint_nikl_s \
  --hparams="frontend=ko,builder=deepvoice3,preset=deepvoice3_nikls"

4. Monitor with Tensorboard

Logs are dumped in ./log directory by default. You can monitor logs by tensorboard:

tensorboard --logdir=log

5. Synthesize from a checkpoint

Given a list of texts, synthesis.py synthesizes audio signals from a trained model. Usage is:

python synthesis.py ${checkpoint_path} ${text_list.txt} ${output_dir}

Example test_list.txt:

Generative adversarial network or variational auto-encoder.
Once upon a time there was a dear little girl who was loved by every one who looked at her, but most of all by her grandmother, and there was nothing that she would not have given to the child.
A text-to-speech synthesis system typically consists of multiple stages, such as a text analysis frontend, an acoustic model and an audio synthesis module.

Note that you have to use the same hyper parameters that were used for training. For example, if you used the hyper parameters preset=deepvoice3_ljspeech,builder=deepvoice3 for training, then the synthesis command should be:

python synthesis.py --hparams="builder=deepvoice3,preset=deepvoice3_ljspeech" ${checkpoint_path} ${text_list.txt} ${output_dir}

Advanced usage

Multi-speaker model

VCTK and NIKL are the supported datasets for building a multi-speaker model.

VCTK

Since some audio samples in VCTK have long silences that affect performance, it's recommended to do phoneme alignment and remove silences, following vctk_preprocess.
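vctk_preprocess relies on phoneme alignment to locate silences precisely. Purely to illustrate why trimming matters, a crude energy-threshold trim could look like this (the function and thresholds are hypothetical, not what the repo does):

```python
import numpy as np

def trim_silence(wav, threshold=0.01, frame_len=256):
    """Crude leading/trailing silence removal by per-frame RMS energy.

    Illustration only: phoneme alignment (as in vctk_preprocess) is far
    more accurate than a fixed energy threshold.
    """
    n_frames = len(wav) // frame_len
    if n_frames == 0:
        return wav
    frames = wav[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sqrt((frames ** 2).mean(axis=1))
    voiced = np.where(energy > threshold)[0]
    if len(voiced) == 0:
        return wav[:0]  # all silence
    start = voiced[0] * frame_len
    end = (voiced[-1] + 1) * frame_len
    return wav[start:end]
```

Long untrimmed silences teach the attention mechanism to pause unpredictably, which is why removing them helps alignment quality.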

Once you have phoneme alignment for each utterance, you can extract features by:

python preprocess.py vctk ${your_vctk_root_path} ./data/vctk

Now that you have the data prepared, you can train a multi-speaker version of DeepVoice3 by:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
   --hparams="preset=deepvoice3_vctk,builder=deepvoice3_multispeaker" \
   --log-event-path=log/deepvoice3_multispeaker_vctk_preset

If you want to reuse a learned embedding from another dataset, you can do this instead:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
   --hparams="preset=deepvoice3_vctk,builder=deepvoice3_multispeaker" \
   --log-event-path=log/deepvoice3_multispeaker_vctk_preset \
   --load-embedding=20171213_deepvoice3_checkpoint_step000210000.pth

This may improve training speed a bit.

NIKL

You will be able to obtain cleaned-up audio samples in ../nikl_preprocess. Details can be found here.

Once the NIKL corpus is ready to use after preprocessing, you can extract features by:

python preprocess.py nikl_m ${your_nikl_root_path} data/nikl_m

Now that you have the data prepared, you can train a multi-speaker version of DeepVoice3 by:

python train.py --data-root=./data/nikl_m  --checkpoint-dir checkpoint_nikl_m \
   --hparams="frontend=ko,preset=deepvoice3_niklm,builder=deepvoice3_multispeaker"

Speaker adaptation

If you have very limited data, you can consider fine-tuning a pre-trained model. For example, using a model pre-trained on LJSpeech, you can adapt it to data from VCTK speaker p225 (30 mins) with the following command:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk_adaptation \
    --hparams="builder=deepvoice3,preset=deepvoice3_ljspeech" \
    --log-event-path=log/deepvoice3_vctk_adaptation \
    --restore-parts="20171213_deepvoice3_checkpoint_step000210000.pth" \
    --speaker-id=0

In my experience, fine-tuning reaches reasonable speech quality much more quickly than training the model from scratch.

There are two important options used above:

  • --restore-parts=<N>: Specifies where to load model parameters from. The differences from --checkpoint=<N> are: 1) --restore-parts=<N> ignores all invalid (missing or shape-mismatched) parameters, while --checkpoint=<N> doesn't; 2) --restore-parts=<N> tells the trainer to start from step 0, while --checkpoint=<N> tells the trainer to continue from the last step. --checkpoint=<N> is fine if you are using exactly the same model and continuing training, but --restore-parts=<N> is useful if you want to customize your model architecture while taking advantage of a pre-trained model.
  • --speaker-id=<N>: Specifies which speaker's data is used for training. This should only be specified if you are using a multi-speaker dataset. For VCTK, speaker ids are assigned incrementally (0, 1, ..., 107) according to speaker_info.txt in the dataset.
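The parameter-skipping behaviour of --restore-parts can be sketched with plain dicts standing in for PyTorch state dicts (the real implementation differs; all names below are hypothetical):

```python
import numpy as np

def restore_parts(model_state, checkpoint_state):
    """Copy only checkpoint parameters that exist in the model with a
    matching shape, in the spirit of --restore-parts.

    Plain-dict sketch: mismatched or unknown parameters are silently
    skipped instead of raising, unlike a strict checkpoint load.
    """
    restored = dict(model_state)
    for name, value in checkpoint_state.items():
        if name in restored and restored[name].shape == value.shape:
            restored[name] = value
    return restored

model = {"embedding.weight": np.zeros((108, 16)), "conv1.weight": np.zeros((4, 4))}
ckpt = {"embedding.weight": np.ones((1, 16)),  # shape mismatch -> skipped
        "conv1.weight": np.ones((4, 4)),       # match -> restored
        "extra.bias": np.ones(3)}              # not in model -> skipped
merged = restore_parts(model, ckpt)
```

This is why --restore-parts works across architectures: a single-speaker checkpoint simply leaves the multi-speaker embedding untouched.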

Acknowledgements

Part of the code was adapted from the following projects:
