Hello, a question about training time #21
Comments
Also, I'm training WaveRNN in parallel on the same machine:
@yannier912 Also, I tried retraining and it really is slow: on a single Tesla V100 GPU I only get 3.5 seconds/step... I worked out that training to 250k steps would take about 10 days...
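The back-of-the-envelope estimate above can be checked directly. A minimal sketch (the `eta_days` helper is hypothetical, not part of this repo; it assumes a constant seconds-per-step, whereas real throughput varies with batch size and data loading):

```python
def eta_days(seconds_per_step: float, total_steps: int) -> float:
    """Estimated wall-clock training time in days at a fixed step time."""
    return seconds_per_step * total_steps / 86400  # 86400 seconds per day

# 3.5 s/step on a single V100, 250k steps (figures quoted in this thread):
print(f"{eta_days(3.5, 250_000):.1f} days")   # about 10 days
# 1.05 s/step, 200k steps (the maintainer's GTX 1060 run, mentioned below):
print(f"{eta_days(1.05, 200_000):.1f} days")  # a bit over 2 days
```

This matches both timings reported in the thread, so the slowness is the per-step cost itself, not a miscalculation.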
@yannier912
@yannier912 That said, this morning I had already set tacotron_fine_tuning to True as the instructions say, and then I hit that error... I'm at my wits' end... :(
@xuexidi How did you change it to train on multiple GPUs? Please share~
@yannier912 At that point, if you run nvidia-smi in a terminal you'll see that two GPUs are in use (GPUs 0 and 1).
First of all, this version does not support multi-GPU; I adapted it from Tacotron-2, which does support multi-GPU in its original form. I trained it on my laptop (i5 7300HQ, GTX 1060 6GB) with the model scaled down (that is, the current model). Training 200k steps took a bit over 2 days at batch_size 32, about 1.05 sec/step (it already felt converged by around 100k). To speed up training, you can try:

Is WaveRNN trained on the CPU by default? Can it be moved to the GPU?
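On the question of which device training runs on: one common mechanism (generic to CUDA frameworks like TensorFlow and PyTorch, not specific to this repo) is the `CUDA_VISIBLE_DEVICES` environment variable, set before the framework initializes CUDA. Processes pinned this way show up against the corresponding GPUs in `nvidia-smi`. A minimal sketch:

```python
import os

# Must be set before TensorFlow/PyTorch first touches the GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"      # expose GPU 0 only
# os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # expose GPUs 0 and 1
# os.environ["CUDA_VISIBLE_DEVICES"] = ""     # hide all GPUs: CPU-only run

print(os.environ["CUDA_VISIBLE_DEVICES"])  # prints: 0
```

This only controls device visibility; whether WaveRNN actually uses the GPU still depends on how that training script builds its graph/model.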
Hello, I'm training Tacotron-2 on the public BiaoBei (标贝) dataset. On a single-GPU machine it has been running for 6 days and has only reached 120k steps, with the loss currently at 4.8.
Is this a hardware problem on my end? It seems far too slow. Roughly how long did training take you to converge?
Thank you!!