About the training/eval loss #1

Closed
Charlottecuc opened this issue Jun 15, 2022 · 14 comments
@Charlottecuc

Hi. Thank you for your great work. Could you also share your loss curve? Thank you very much~

yl4579 (Owner) commented Jun 15, 2022

I don't have it now, but if you have trained your own model, I can tell you whether it looks right or not.

@Charlottecuc (Author)

@yl4579

--- epoch 2 ---
train/loss : 4.4963
train/f0 : 0.1389
train/sil : 4.3574
train/learning_rate: 0.0003
eval/loss : 2.7822
eval/f0 : 2.6451
eval/sil : 0.1372

--- epoch 50 ---
train/loss : 1.2654
train/f0 : 0.1041
train/sil : 1.1613
train/learning_rate: 0.0002
eval/loss : 1.5962
eval/f0 : 1.4615
eval/sil : 0.1347

--- epoch 100 ---
train/loss : 1.0587
train/f0 : 0.0843
train/sil : 0.9744
train/learning_rate: 0.0000
eval/loss : 1.6313
eval/f0 : 1.4785
eval/sil : 0.1528
[screenshot: training/eval loss curves]

Thanks~

Charlottecuc (Author) commented Jun 16, 2022

The training loss looks quite different from the eval loss (especially the eval sil loss). Is that normal? 🤔

yl4579 (Owner) commented Jun 16, 2022

I think this indicates overfitting. Either add more training data, add data augmentation, or stop training before the evaluation loss starts to increase.
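For the early-stopping option, a minimal sketch in PyTorch (the train_one_epoch/evaluate helpers, the loaders, the patience value, and the checkpoint name are hypothetical, not from this repo):

```python
import torch

# Hypothetical helpers/objects: model, train_loader, eval_loader,
# train_one_epoch(), evaluate() returning the eval loss for the epoch.
n_epochs = 100
patience = 10          # stop after this many epochs without improvement
best_eval = float("inf")
bad_epochs = 0

for epoch in range(n_epochs):
    train_one_epoch(model, train_loader)
    eval_loss = evaluate(model, eval_loader)
    if eval_loss < best_eval:
        best_eval, bad_epochs = eval_loss, 0
        torch.save(model.state_dict(), "best.pt")  # keep the best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # eval loss stopped improving; training further overfits
```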

Charlottecuc (Author) commented Jun 16, 2022

How much data did you use to train the model (I used about 60 hours, 100 speakers)? Besides, why is the eval F0 loss (~1.4) so much higher than the train F0 loss (~0.09)? Thank you very much.

yl4579 (Owner) commented Jun 16, 2022

I think that was a mistake: I mislabeled the F0 and silence losses for training. I have fixed it.
I believe 60 hours is enough, so you may not need to train for 100 epochs. That number of epochs is for data-augmented audio (with noise and reverberation), for voice conversion under more realistic conditions.

yl4579 closed this as completed Jun 19, 2022
MMMMichaelzhang commented Jun 19, 2022

[screenshot: loss curves]
I trained with 128 speakers, including both singing and speech, and the eval sil loss doesn't look normal. @yl4579 @Charlottecuc
Is it because I removed the silent parts of all the audio?
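One quick way to test that hypothesis is to check how many silent/unvoiced frames the training labels actually contain. A hypothetical check, assuming per-utterance F0 tracks where unvoiced or silent frames are stored as 0 (the pyworld convention); all_f0_tracks is an assumed name:

```python
import numpy as np

# all_f0_tracks: hypothetical list of per-utterance F0 arrays,
# with unvoiced/silent frames stored as 0.
sil_frames = np.concatenate([track == 0 for track in all_f0_tracks])
print(f"silent/unvoiced frame ratio: {sil_frames.mean():.3f}")
# A ratio near 0 means the silence classifier has almost no
# positive examples to learn from.
```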

@Charlottecuc (Author)

Same loss curve. I used data augmentation (e.g. adding reverberation and noise), and the eval sil loss still starts increasing after epoch 30. Besides, when running inference on sentences with lots of plosive/fricative consonants, the predicted F0 values are not as good as those from the model trained by @yl4579.
Maybe the data augmentation tricks differ?

Ruinmou commented Jun 22, 2022

What type of data did you use for training? Can you provide me with a small sample? I used speech data from more than 200 speakers and trained the loss down to a minimum of 3. After adding data augmentation (background noise), the loss only dropped to 4.4 and training became very slow. @Charlottecuc @MMMMichaelzhang

@MMMMichaelzhang

@Charlottecuc Did you solve the sil loss problem?

@hai8023 I am using Mandarin singing and speech data. My data makes the sil loss go up, and it doesn't look right.

Charlottecuc (Author) commented Jul 1, 2022

I did not use the default F0 extractor (e.g. DIO/Harvest) provided by the author; I switched to other algorithms. Now the sil loss looks much better than before.
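For context, a sketch of the default-style extraction being replaced here, using pyworld's DIO/Harvest (the input file name and sample rate handling are assumptions):

```python
import numpy as np
import pyworld as pw
import soundfile as sf

wav, sr = sf.read("utterance.wav")       # hypothetical input file
x = wav.astype(np.float64)               # pyworld expects float64

f0_dio, t = pw.dio(x, sr)                # fast, coarse estimator
f0_dio = pw.stonemask(x, f0_dio, t, sr)  # refinement step for DIO
f0_harvest, _ = pw.harvest(x, sr)        # slower, usually more robust
# Unvoiced/silent frames come out as 0 in both, so a noisy F0 extractor
# directly produces noisy silence labels.
```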

skol101 commented Jul 12, 2022

@Charlottecuc did you still apply reverb/noise to 30% of the dataset?

@Charlottecuc (Author)

> @Charlottecuc did you still apply reverb/noise to 30% of the dataset?

Yes. You can add some data augmentation after maybe 20 epochs.
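A minimal sketch of that schedule (the dataset's augment flag, the helper functions, and the 20-epoch warm-up are hypothetical):

```python
# Hypothetical training loop: train on clean audio first, then switch
# augmentation on once the model has a reasonable baseline.
AUG_START_EPOCH = 20
n_epochs = 100

for epoch in range(n_epochs):
    # Assumed flag on the dataset that toggles noise/reverb augmentation.
    train_loader.dataset.augment = epoch >= AUG_START_EPOCH
    train_one_epoch(model, train_loader)
    eval_loss = evaluate(model, eval_loader)
```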

skol101 commented Jul 21, 2022

Did you use the pYIN/YIN F0 extractor?
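If pYIN is the replacement, usage would look roughly like this with librosa (the input file and the frequency search range are assumptions):

```python
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)  # hypothetical input file
f0, voiced_flag, voiced_prob = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),  # assumed range for mixed speech/singing
    fmax=librosa.note_to_hz("C7"),
    sr=sr,
)
f0 = np.nan_to_num(f0)  # pyin marks unvoiced frames as NaN; map them to 0
```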
