[Discussion] Fine-tuning seems to need a lot of GPU memory; an 11 GB 2080 Ti runs out of memory (OOM) #38
Comments
Is the 11 GB card for training or for generation?

It's for continuing to fine-tune the existing model; generation works fine.

Did you manage to fine-tune it? What format does the dataset need to be in for fine-tuning?

For the 1.5B model, 11 GB of GPU memory should not be enough. So far we have only run fine-tuning experiments for this model on a TPU v3-8.

How long does it take to generate one sample? @ShadowTeamCN
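A back-of-envelope estimate (an assumption, not a measurement from this repo) makes it clear why 11 GB falls short: with standard fp32 Adam training, each parameter needs storage for the weight, its gradient, and two optimizer moments, before counting activations:

```python
# Rough memory estimate for fp32 Adam fine-tuning of a 1.5B-parameter model.
# Activations, optimizer buffers, and framework overhead are excluded, so the
# real footprint is even larger.
params = 1.5e9
bytes_per_param = 4 + 4 + 4 + 4  # weight + gradient + Adam m + Adam v (fp32)
total_gb = params * bytes_per_param / 1e9
print(total_gb)  # → 24.0 GB, already well above an 11 GB card
```

This also matches the report later in the thread that a 32 GB V100 only just fits the job at batch size 1.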
Fine-tuning needs at least 32 GB of GPU memory. I fine-tuned on a 32 GB V100 and could only set the batch size to 1.
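When memory forces the batch size down to 1, gradient accumulation can recover the effective batch size: sum (or average) gradients over several micro-batches and apply one optimizer step. A minimal toy sketch of the idea, using a hypothetical one-parameter model rather than this repo's training loop:

```python
# Gradient accumulation illustrated on a toy model y = w * x with squared
# error. Four micro-batches of size 1 are accumulated into one optimizer
# step, which is numerically equivalent to one batch-of-4 step.

def grad(w, x, y):
    # d/dw (w*x - y)^2 = 2 * (w*x - y) * x
    return 2.0 * (w * x - y) * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # target: w = 2
w = 0.0
lr = 0.01
accum_steps = 4  # micro-batches per optimizer step

for epoch in range(200):
    g = 0.0
    for i, (x, y) in enumerate(data):
        g += grad(w, x, y) / accum_steps  # accumulate scaled gradients
        if (i + 1) % accum_steps == 0:
            w -= lr * g  # one step for the whole "large" batch
            g = 0.0

print(round(w, 3))  # → 2.0
```

In a real PyTorch loop the same pattern is `loss / accum_steps; loss.backward()` on each micro-batch and `optimizer.step(); optimizer.zero_grad()` every `accum_steps` iterations. It trades time for memory, so it helps with the effective batch size but not with a model that OOMs even at batch size 1.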
I have already reduced the batch size to 1 and still cannot fine-tune. Has anyone managed to fine-tune on a consumer-grade GPU? Please share your experience.