I trained the network for more steps and found that the loss starts increasing at around 9k steps, as follows:
Since the VCTK-Corpus contains about 44k wave files and the batch size is 1, the loss starts increasing during the first epoch.
Not sure why the loss starts increasing in this case; it may have something to do with the bug in #29.
Note that the default hyperparameters were chosen more or less arbitrarily.
It might be useful to enable batching and try a different learning rate.
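To illustrate the suggestion (not the repo's actual training loop), here is a minimal self-contained sketch of mini-batching: instead of stepping on every single example, the gradient is averaged over a batch before each update, which tends to smooth out the loss curve. All names, the toy model, and the hyperparameter values below are hypothetical and purely illustrative.

```python
import random

def batches(examples, batch_size):
    """Yield successive mini-batches from a list of examples."""
    for i in range(0, len(examples), batch_size):
        yield examples[i:i + batch_size]

def train(examples, batch_size=32, learning_rate=0.1, epochs=5):
    """Fit y = w * x by mini-batch gradient descent on squared error.

    With batch_size=1 every noisy per-example gradient moves the weight,
    so the loss curve is much jumpier; averaging over a batch damps that.
    """
    w = 0.0
    for _ in range(epochs):
        for batch in batches(examples, batch_size):
            # Average the gradient of (w*x - y)^2 over the batch.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= learning_rate * grad
    return w

# Toy data generated from y = 3x; the fitted weight should approach 3.0.
random.seed(0)
data = [(x, 3.0 * x) for x in [random.uniform(-1, 1) for _ in range(1000)]]
print(round(train(data), 2))  # → 3.0
```

The same idea carries over to the real network: stacking several wave segments into one batch and lowering the learning rate accordingly should reduce the kind of mid-epoch loss blow-up shown in the plot.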
![loss](https://cloud.githubusercontent.com/assets/499122/18612143/019dbab2-7d84-11e6-9ce0-865a186b4f24.png)