I'm trying to train a multi-speaker model.
When I look at train_data in config_libritts.yml, it points to the same training_list.txt that was used for LJSpeech training.
There is LibriTTS data referenced in OOD_texts.txt, but that doesn't seem to affect multi-speaker training, because only the text is extracted from it.
Do I need to create a new filelist to train on the LibriTTS dataset, and edit config_libritts.yml accordingly?
Or should I use the given config_libritts.yml as-is?
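In case it helps clarify what I mean by a new filelist: I'm assuming the multi-speaker format is one `wav_path|transcription|speaker_id` entry per line (this format and the paths below are my assumption, not taken from the repo). A minimal sketch of generating such a list:

```python
# Sketch: build a multi-speaker filelist in the ASSUMED
# "wav_path|transcription|speaker_id" format.
# The LibriTTS-style paths below are hypothetical examples.

def make_filelist(entries):
    """entries: iterable of (wav_path, text, speaker_id) tuples.
    Returns the filelist contents as a single string."""
    return "\n".join(f"{path}|{text}|{spk}" for path, text, spk in entries)

entries = [
    ("LibriTTS/train-clean-100/19/sample_0.wav", "Hello world.", 19),
    ("LibriTTS/train-clean-100/26/sample_4.wav", "Another utterance.", 26),
]

with open("train_list_libritts.txt", "w", encoding="utf-8") as f:
    f.write(make_filelist(entries))
```

If that's the right format, I'd then point train_data in config_libritts.yml at the generated file.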
Thank you.