Re: [baidu/DDParser] Question about an OOM issue (#52)

Reply from baidu/DDParser (2021-10-15):
Which model are you using? We suggest using ernie-lstm or transformer, and first try setting batch_size to 300.
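To see why both batch_size and the longest sentence in a batch matter for memory, here is a rough illustration of how many tokens a padded batch actually allocates. This is not DDParser's code, and the length values are invented; it only assumes that a dense batch is padded to its longest sentence.

```python
# Illustration only: how many tokens a padded batch actually allocates.
# Lengths below are invented; this is not DDParser's batching code.

def padded_tokens(lengths, sentences_per_batch):
    """Tokens allocated when each batch is padded to its longest sentence."""
    total = 0
    for i in range(0, len(lengths), sentences_per_batch):
        batch = lengths[i:i + sentences_per_batch]
        total += len(batch) * max(batch)  # every row padded to the batch maximum
    return total

short_only = [12] * 2048                 # all sentences capped around 15 tokens
mixed      = [12] * 2040 + [120] * 8     # the same batch with a few long sentences

print(padded_tokens(short_only, 2048))   # 24576 padded tokens
print(padded_tokens(mixed, 2048))        # 245760 padded tokens, 10x the allocation
```

With the same 2048 sentences per batch, a handful of 120-token sentences makes the padded batch roughly ten times larger, which is why shrinking batch_size (or the sentence length cap) lowers peak GPU memory.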
Original issue description:

When training the model on data I built myself, OOM errors occur easily because some sentences are fairly long, so I had to add a limit in the code that sentences be no longer than 15 tokens (with 11 GB of GPU memory) before training would run normally. The default batch_size is 2048, but when I change this number the GPU memory actually used does not change, so the setting seems to have no effect. If I do not want to limit sentence length, which parameters or part of the code should I change to resolve the OOM?
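If capping sentence length is not an option, one general workaround is to budget batches by padded tokens rather than by sentence count, so a batch containing long sentences simply holds fewer of them. The sketch below is a generic illustration of that idea, not DDParser's actual sampler; token_budget_batches and the budget values are hypothetical names chosen for the example.

```python
# Sketch of token-budgeted batching: group sentence indices so that
# (sentences in batch) * (longest sentence in batch) stays under a budget.
# Generic illustration; names and budget values are assumptions, not DDParser's API.

def token_budget_batches(lengths, max_padded_tokens=2048):
    """Yield lists of sentence indices whose padded size fits the token budget."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])  # bucket similar lengths
    batch, batch_max = [], 0
    for i in order:
        new_max = max(batch_max, lengths[i])
        if batch and (len(batch) + 1) * new_max > max_padded_tokens:
            yield batch
            batch, batch_max = [], 0
            new_max = lengths[i]
        batch.append(i)
        batch_max = new_max
    if batch:
        yield batch

# Mostly short sentences plus a few very long ones (lengths are invented).
lengths = [12] * 200 + [120] * 8
for b in token_budget_batches(lengths, max_padded_tokens=2048):
    print(len(b), max(lengths[i] for i in b))  # batches shrink as sentences get longer
```

Whether something like this can be wired into DDParser's data loader without code changes depends on its sampler, so treat it as a direction to adapt rather than a drop-in fix.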