Integrating Tacotron and LPCNet: Training Tacotron with .f32 features #4
So while training Tacotron2, I should replace/softlink the .f32 files into the audio folder of training_data (after preprocessing), and the first column of train.txt (meta[0]) should be the actual names of the .f32 files, right?
Basically, which npy should be loaded in the feeder: audio or mel spectrogram?
Of course, you can do that as long as your path points to the .f32 file.
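A minimal sketch of the softlinking step described above; the directory names and the idea that meta[0] matches the linked basenames are assumptions about this setup, not verbatim from the repo:

```python
# Link dumped .f32 feature files into training_data/audio so the
# feeder can resolve them via the names stored in train.txt (meta[0]).
import glob
import os

f32_dir = "f32_features"           # output of feature_extract.sh (assumed path)
audio_dir = "training_data/audio"  # folder the feeder reads targets from

for src in glob.glob(os.path.join(f32_dir, "*.f32")):
    dst = os.path.join(audio_dir, os.path.basename(src))
    if not os.path.exists(dst):
        os.symlink(os.path.abspath(src), dst)  # softlink instead of copying
```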
Hi, the features extracted with the feature_extract.sh script are saved as a .f32 file and then used to train Tacotron2. But normally Tacotron2 is used to predict a mel spectrogram. Here, with T2 + LPCNet, is the prediction target of T2 changed, or is the mel spectrogram simply replaced with the .f32 features?
@superhg2012 place the generated .f32 files into the audio folder.
@alokprasad thanks a lot!!
@alokprasad can you post your samples?
@superhg2012
@alokprasad I cannot reach the link you posted. Please refer to #1, which has posted audio samples; it seems the author did not use GTA training mode.
@alokprasad You have a lot of work to do, because you should calculate the audio length from the number of frames and pad the tail of the audio accordingly. We did not use GTA mode because the job is trivial.
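A rough sketch of the tail-padding described above: make each wav an integral number of LPCNet frames so the frame count lines up with the .f32 features. The frame size of 160 samples assumes 16 kHz audio with 10 ms LPCNet frames:

```python
import numpy as np

FRAME_SIZE = 160  # samples per LPCNet frame (assumption: 16 kHz, 10 ms)

def pad_to_frame_multiple(audio: np.ndarray) -> np.ndarray:
    remainder = len(audio) % FRAME_SIZE
    if remainder == 0:
        return audio
    pad = FRAME_SIZE - remainder
    return np.pad(audio, (0, pad), mode="constant")  # zero-filled tail
```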
@MlWoo do you mean that each audio file should be of the same length, or that it should be an integral multiple of the frame length?
@alokprasad more work: LPCNet will cut off the silence of the audio by default, so you should modify the LPCNet code to cooperate with the GTA result of T2.
@MlWoo Can you point to the code in LPCNet where the modification needs to be done?
@MlWoo I saw that there is silence removal in xiph@554b6df, and this is needed only to…
@MlWoo Can we add this silence removal in Tacotron training?
@alokprasad Tacotron training with silence removal may be a good idea when training English. It is a bad idea when training Chinese (Mandarin), because the very short silences are beneficial to the prosody. I am not entirely sure it is good for English, since I am not a native English speaker. Removing the long silence at the beginning and end of an audio clip is necessary when training Tacotron2.
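One way to do the trim suggested above: cut only the long silence at the start and end of each clip, keeping short internal pauses (which matter for prosody, e.g. in Mandarin). This sketch uses librosa; the top_db threshold of 40 is an assumed value to tune per corpus:

```python
import librosa

def trim_edges(wav_path: str):
    # Load at 16 kHz (LPCNet's expected rate) and trim only leading
    # and trailing silence; internal pauses are left untouched.
    audio, sr = librosa.load(wav_path, sr=16000)
    trimmed, _ = librosa.effects.trim(audio, top_db=40)
    return trimmed, sr
```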
I see you save the audio (from preprocessing) to meta[0], so you use the audio as the target. Is that the way to train T2? I don't understand it well.
I see, we use the f32 features to train T2 instead of mel features. I am still confused about the training.
@alokprasad
@superhg2012 Is the audio quality of TTS + LPCNet good? How did you make it?
@byuns9334 I don't get good quality with T2 + LPCNet (20-dim), but I get better quality with T1 and LPCNet (55-dim). @lmingde I put the dumped .f32 files into the audio dir; when training T2, the .f32 files in the audio dir are fed as the mel_target for training.
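A minimal sketch of how a .f32 dump can be loaded in the feeder in place of a mel .npy. The file is raw float32 with a fixed number of values per frame (55 for LPCNet dumps); the reshape below assumes that layout:

```python
import numpy as np

NB_FEATURES = 55  # floats per frame in the LPCNet feature dump

def load_f32(path: str) -> np.ndarray:
    feats = np.fromfile(path, dtype=np.float32)
    return feats.reshape(-1, NB_FEATURES)  # (n_frames, 55) used as mel_target
```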
@superhg2012 I guess T1 and T2 are the same except for the vocoder part, for which we are using LPCNet anyway.
@alokprasad Regarding LPCNet, model training with 55-dim features is better than with 20-dim. As for T1, no special changes; just train with 55-dim features.
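A hedged sketch of the 20- vs 55-dim choice discussed here. My understanding of the LPCNet dump layout (an assumption worth verifying against dump_data.c) is that indices 0-17 hold the Bark cepstrum and 36-37 hold pitch period and pitch correlation, so the "20-dim" variant keeps those and drops the rest, including the LPC coefficients:

```python
import numpy as np

def select_20_dims(feats_55: np.ndarray) -> np.ndarray:
    cepstrum = feats_55[:, 0:18]  # 18 Bark cepstral coefficients
    pitch = feats_55[:, 36:38]    # pitch period + correlation (assumed layout)
    return np.concatenate([cepstrum, pitch], axis=1)  # (n_frames, 20)
```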
nb_features is already 55, so you mean to say there are no changes in LPCNet; just train LPCNet as-is. Same for T1?
Yes, just try it.
@MlWoo Hi, may I know, in the training stage when feeding a batch of samples to Tacotron, what padding values are used to ensure the f32 features (whether 20- or 55-dim) have the same length? I noticed that -0.1 is used in alokprasad's LPCTron implementation.
@wangfn I have forgotten it. No worries, just mask the padding values when calculating the loss.
@MlWoo Thanks a lot, indeed masking the padding values is the solution.
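A toy NumPy sketch of the masking suggested above: whatever constant pads the f32 targets (e.g. the -0.1 in LPCTron), padded frames are excluded from the loss by masking with the true sequence lengths:

```python
import numpy as np

def masked_mse(pred, target, lengths):
    # pred/target: (batch, max_frames, dims); lengths: true frame counts.
    max_frames = target.shape[1]
    # mask[b, t] is True only for real (non-padded) frames.
    mask = np.arange(max_frames)[None, :] < np.asarray(lengths)[:, None]
    sq_err = ((pred - target) ** 2).mean(axis=-1)  # (batch, max_frames)
    return (sq_err * mask).sum() / mask.sum()      # average over real frames
```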
@superhg2012 What changes are required in LPCNet to go from 20 to 55 dims? I think it uses 55, but only 20 are needed. Are any changes needed in Tacotron2 training if we change the dims in LPCNet?
Hello all, I trained both models on the LJSpeech dataset and ended up with this alignment and these synthesis results. I've heard some great results from others, so I am wondering where I went wrong. Thanks!
In the ReadMe, a code snippet is mentioned. But meta[0] will contain speech-audio-xxxx.npy files, while self._mel_dir would contain speech-mel-xxxx.npy files. So that snippet is trying to search for speech (npy or f32) files inside mel_dir. Is there anything wrong with that snippet? (See the sketch below.)
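A hypothetical illustration of the mismatch just described; the names follow the Tacotron-2 feeder convention but are assumptions here, not the repo's actual code:

```python
import os
import numpy as np

def load_target(audio_dir: str, meta0: str) -> np.ndarray:
    # meta[0] holds names like "speech-audio-xxxx.npy", so the lookup
    # must use the audio dir (where the f32-derived targets were put),
    # not self._mel_dir, which only has "speech-mel-xxxx.npy" files.
    return np.load(os.path.join(audio_dir, meta0))
```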
One more doubt: where should I copy the .f32 files generated in the previous step, into the mels, wavs, or linear folder, so that we can train Tacotron with these generated features?
Also, in this case, should I use the command that trains the entire Tacotron+WaveNet pipeline, or the one that trains only Tacotron?
Thanks