Training not converged #30

Closed
JesseYang opened this issue Sep 18, 2016 · 1 comment

Comments

@JesseYang

I trained the network for more steps and found that the loss starts increasing at around 9k steps, as shown below:
[figure: training loss curve, rising again after roughly 9k steps]
Since the VCTK-Corpus contains about 44k wave files and the batch size is 1, one epoch is about 44k steps, so the loss already starts increasing within the first epoch.
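For concreteness, the step arithmetic behind that observation (the file count is approximate):

```python
# Rough arithmetic for the claim above; 44k is an approximate file count.
num_wave_files = 44000   # approximate number of clips in VCTK-Corpus
batch_size = 1           # current training configuration
steps_per_epoch = num_wave_files // batch_size
print(steps_per_epoch)   # ~44000 steps per epoch, so step 9k falls inside epoch 1
```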

@ibab (Owner) commented Sep 18, 2016

I'm not sure why the loss starts increasing in this case; maybe it has something to do with the bug in #29.
Note that the default hyperparameters are pretty much randomly chosen.
It might be useful to enable batching and use a different learning rate.
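For reference, here is a minimal sketch of the kind of adjustment suggested above, written against plain TensorFlow 1.x. The batch size, learning rate, and decay schedule are illustrative assumptions rather than this repository's defaults, and the loss below is a stand-in for the model's real objective:

```python
import tensorflow as tf

# Illustrative values only; the defaults in train.py may differ.
BATCH_SIZE = 4          # batch several clips instead of training on one at a time
INITIAL_LR = 1e-3       # smaller starting learning rate
DECAY_STEPS = 10000     # shrink the learning rate roughly every 10k steps
DECAY_RATE = 0.5

global_step = tf.Variable(0, trainable=False, name='global_step')

# Exponentially decay the learning rate so updates get smaller as training
# progresses, which can help if the loss starts drifting upward late on.
learning_rate = tf.train.exponential_decay(
    INITIAL_LR, global_step, DECAY_STEPS, DECAY_RATE, staircase=True)

# Stand-in loss: in the real script this would be the WaveNet
# cross-entropy over the quantized audio samples.
dummy_logits = tf.Variable(tf.random_normal([BATCH_SIZE, 256]))
dummy_labels = tf.zeros([BATCH_SIZE], dtype=tf.int32)
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=dummy_logits, labels=dummy_labels))

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)
```

Whether a decaying schedule actually helps here is an open question; the main point is that both the batch size and the learning rate are easy knobs to experiment with.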
