When I train two models with MMSeg at the same time, the training speed of each drops to roughly half.
But when an MMSeg model trains alongside a model that does not use MMSeg, this problem does not occur.
I have reproduced this on multiple machines. What could be causing it?
Are you training both jobs on the same GPU? A GPU's compute capacity is finite: running two jobs on one card will slow each other down because they compete for the same compute units, even if the memory is not full.
They are on different GPUs. This happens whenever I run two MMSeg jobs together, but running MMSeg alongside code from another framework does not slow anything down.
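One possibility worth ruling out (this is a hypothesis, not a confirmed diagnosis): even when the jobs sit on different GPUs, they still share CPU-side resources such as dataloader worker processes and OpenMP/MKL thread pools, and two oversubscribed processes can halve each other's throughput. A minimal sketch of one way to test this, capping the per-process CPU thread pools via standard environment variables before the heavy imports; the value `4` is purely illustrative:

```python
import os

# Cap per-process CPU thread pools BEFORE importing numpy/torch, so two
# concurrent training jobs do not oversubscribe the CPU cores.
# OMP_NUM_THREADS / MKL_NUM_THREADS are the standard OpenMP and MKL knobs;
# the value is an example, not a recommendation.
os.environ["OMP_NUM_THREADS"] = "4"
os.environ["MKL_NUM_THREADS"] = "4"

print(os.environ["OMP_NUM_THREADS"])
```

If capping the thread count restores near-full speed for both jobs, the contention is on the CPU side rather than in MMSeg itself; lowering `workers_per_gpu` in the MMSeg config would be another lever to try.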