Loss is very unstable during training and the character error rate drops slowly #35

Closed
panliang5020 opened this issue Feb 15, 2022 · 14 comments

@panliang5020

The model has been trained for 14 epochs so far. During training the loss fluctuates a lot, ranging from a low of about 1 up to 40+, and the character error rate has only dropped from the initial 0.57 to 0.49. Is this normal?
(screenshot attached)

@yeyupiaoling
Owner

That should be normal. Which dataset are you training on?

@panliang5020
Author

> That should be normal. Which dataset are you training on?

The WenetSpeech dataset plus the first three small open-source datasets

@yeyupiaoling
Owner

The character error rate was 0.57 right from the start? What is the batch size?

@panliang5020
Author

> The character error rate was 0.57 right from the start? What is the batch size?

(screenshot attached)
The batch size is 32, training with multiple GPUs

@yeyupiaoling
Owner

That's normal; there is usually a larger drop at around epoch 20+. You can also use VisualDL to look at how the training log changes and check whether the loss has a decreasing trend. Look at the overall trend rather than individual values, because the audio clips have different lengths.
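
For reference, a minimal sketch of viewing the training curves with VisualDL (assuming the project writes its VisualDL records under a log/ directory; point --logdir at wherever this repo actually stores them):

visualdl --logdir=log --host=0.0.0.0 --port=8040

Then open http://localhost:8040 in a browser to follow the loss and error-rate curves.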

@yeyupiaoling
Owner

The training time is a bit long. Is GPU utilization high during training?

@panliang5020
Author

> The training time is a bit long. Is GPU utilization high during training?

The two cards use about 27 GB of memory in total. Training really is a bit slow; one epoch takes 37 to 38 hours.
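
As a side note, a rough sketch of how multi-GPU training is usually launched with PaddlePaddle (the exact script name and arguments for this repo may differ):

python -m paddle.distributed.launch --gpus '0,1' train.py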

@yeyupiaoling
Owner

What I was asking about is the utilization rate. Normally, data reading and preprocessing keep GPU utilization from reaching full, and WenetSpeech contains a lot of data; if it sits on a mechanical hard drive, reads will be slow.

This is something to watch out for.

(screenshot attached)
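
For reference, a simple way to monitor the utilization rate during training (standard NVIDIA tooling, not specific to this repo):

watch -n 1 nvidia-smi

If the GPU-Util column stays low while the data sits on a mechanical disk, the data pipeline is likely the bottleneck rather than the model.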

@panliang5020
Author

(screenshot attached)

@yeyupiaoling
Owner

Then that's fine. Wait and see how it changes after 20+ epochs of training.

@panliang5020
Author

> Then that's fine. Wait and see how it changes after 20+ epochs of training.

OK, thanks for the explanation

@yeyupiaoling
Owner

Any change?

@panliang5020
Author

> Any change?

(screenshot attached)

It still isn't doing well; the character error rate hasn't come down

@yeyupiaoling
Owner

By the way, a larger model, deepspeech2_big, is now available; you could try training with that model.

PPASR/train.py

Line 15 in 5141986

add_arg('use_model', str, 'deepspeech2', '所使用的模型', choices=['deepspeech2', 'deepspeech2_big'])
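
As a rough sketch (the exact command-line form depends on how this project parses its arguments; --use_model is taken from the add_arg line above), switching to the larger model would look something like:

python train.py --use_model=deepspeech2_big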
