Thanks for the great work.

I am trying to reproduce the results on the LJSpeech dataset. I have two GPU cards with at most 10 GB of memory free on each. Training ran up to epoch 7 and then crashed with an "Out of Memory" error. I tried cutting the batch size in half, to 64, but that did not help. I did not change the model hyperparameters, to avoid degrading the model. What should I do to get training running?

Or is there a documented list of resource requirements for running the training process?
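In case it is useful, here is the direction I am considering on my side: gradient accumulation plus mixed precision, which keeps the effective batch size at 64 while lowering peak memory per step. This is only a minimal sketch against a generic PyTorch loop, not this repo's actual training code; `model`, the dummy inputs, and the dummy loss below are placeholders.

```python
# Sketch: gradient accumulation + AMP to reduce peak GPU memory.
# Placeholders only -- the real loop would wrap the repo's model/dataloader.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(80, 80).to(device)          # stand-in for the TTS model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

accum_steps = 4    # effective batch = micro_batch * accum_steps
micro_batch = 16   # 16 * 4 = 64, the batch size I tried

for step in range(100):
    x = torch.randn(micro_batch, 80, device=device)   # dummy mel frames
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = model(x).pow(2).mean()                 # dummy loss
    # Divide by accum_steps so accumulated gradients average correctly.
    scaler.scale(loss / accum_steps).backward()
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad(set_to_none=True)
```

Since the crash only appears at epoch 7 even after halving the batch size, I also suspect a single very long utterance spikes memory; capping the maximum input length in the data loader might help as well.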