I want to train on multiple GPUs on a single machine. At the very top of run.py I added:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "4,5,6,7"
```

and commented out:

```python
np.random.seed(1)
torch.manual_seed(1)
torch.cuda.manual_seed_all(1)
torch.backends.cudnn.deterministic = True
```

but training still only uses a single GPU (GPU 4).

This statement works fine in TensorFlow, so why does it have no effect in PyTorch? Any help appreciated!
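A likely explanation (the linked fix below is unreachable, so this is an assumption based on typical PyTorch behavior): `CUDA_VISIBLE_DEVICES` only controls which GPUs are *visible* to the process; it does not by itself spread work across them. TensorFlow allocates memory on all visible GPUs by default, which can make it look multi-GPU, while a plain PyTorch model stays on one device unless it is explicitly wrapped in `nn.DataParallel` (or `DistributedDataParallel`). A minimal sketch, where `nn.Linear` stands in for the actual model in run.py:

```python
import os
# Must be set before torch initializes CUDA
os.environ["CUDA_VISIBLE_DEVICES"] = "4,5,6,7"

import torch
import torch.nn as nn

# Hypothetical placeholder model; substitute the real model from run.py
model = nn.Linear(128, 10)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

if torch.cuda.device_count() > 1:
    # Replicates the model on every visible GPU and splits each
    # input batch across them along dim 0
    model = nn.DataParallel(model)

x = torch.randn(32, 128, device=device)
out = model(x)  # with 4 visible GPUs, each replica processes 8 samples
```

Note that inside the process the four physical GPUs 4-7 are renumbered as `cuda:0` through `cuda:3`, so the model should be moved to `cuda:0` (or just `"cuda"`), not `cuda:4`.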
Solved. See the detailed steps in this article: https://wnwhite.xin/2019/11/11/pytorch_multi_gpu/ . The Zhihu posts on multi-GPU training only tell half the story. That was exhausting!
@lidianxiang The link no longer works. Could you share a new one? Thanks!
Same question here.