A5000 batchsize=1 CUDA OOM #16
Comments
Hello, could you paste a screenshot of the error? With batch size 1 it should not run out of GPU memory. Could you also paste a screenshot of https://github.com/weijiawu/TransDETR/blob/main/configs/r50_TransDETR_train_DSText.sh? I suspect you may not have changed the batch size correctly.
After setting it to 4 4 4, GPU memory usage is about 16 GB and training runs successfully.
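For context, the "4 4 4" above most likely refers to the per-epoch clip-length schedule passed to the training script. This is a hypothetical sketch of what the edit to `configs/r50_TransDETR_train_DSText.sh` might look like; the flag names (`--batch_size`, `--sampler_lengths`) are assumed from the MOTR-style training scripts this project builds on, and the actual file may differ:

```shell
# Hypothetical excerpt of configs/r50_TransDETR_train_DSText.sh.
# Flag names are assumptions, not confirmed against the repo.
python3 main.py \
    --batch_size 1 \           # samples per GPU; keep at 1 on a 24 GB card
    --sampler_lengths 4 4 4    # frames per video clip at each stage;
                               # shorter clips cut activation memory
```

Because TransDETR attends across all frames of a sampled clip, shortening the clip reduces activation memory roughly in proportion, which is why this setting, rather than batch size alone, decides whether a 24 GB A5000 fits.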
Hello, I see you have been following the author's work and have already reproduced this project on a single A5000. I have also been looking into this work recently, but I have some questions about the reproduction and the results. Could I ask you a few related questions by email? If convenient, my address is ptang@shu.edu.cn. Looking forward to your reply, many thanks!
When running the DSText baseline with batchsize=1, I get a CUDA OOM on a single A5000.
According to the README, 8x 32 GB V100s can train with batchsize=16, so a single 24 GB card at batchsize=1 should not hit this.
In this situation, which other parameters should be adjusted to reduce GPU memory usage?