Why is training slower when training two models at the same time? #3664

Open
zenosai opened this issue May 10, 2024 · 2 comments

Comments

@zenosai
zenosai commented May 10, 2024

When I train two models with MMSeg at the same time, the training speed of each drops to roughly half of what it was.

However, this problem does not occur when one of them is trained together with a model that does not use MMSeg.

I have tested this on several machines and see the same behaviour. What could be causing it?

@AI-Tianlong
Contributor

Are you training both models on the same GPU?
A GPU's compute capacity is limited, so running two jobs on the same card at once will slow each of them down, because they compete for the same compute resources. Even if the GPU memory is not full, the number of compute units is still finite.
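For what it's worth, here is a minimal sketch of launching two independent trainings so that each is pinned to its own GPU; the config paths and GPU ids are placeholders, and `tools/train.py` is MMSegmentation's standard single-GPU training entry point:

```python
import os
import subprocess

# (config path, GPU id) pairs -- hypothetical configs, adjust to your setup
jobs = [
    ("configs/my_model_a.py", "0"),
    ("configs/my_model_b.py", "1"),
]

procs = []
for cfg, gpu in jobs:
    # Restrict each process to a single GPU so the two jobs never share one device
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpu)
    procs.append(subprocess.Popen(["python", "tools/train.py", cfg], env=env))

# Wait for both trainings to finish
for p in procs:
    p.wait()
```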

@zenosai
Author

zenosai commented Jul 24, 2024

They are on different GPUs. In my tests this only happens with MMSeg; when I run MMSeg together with code from another framework at the same time, neither of them slows down.
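One possible explanation (an assumption on my part, not confirmed in this thread) is that even on different GPUs the two MMSeg jobs still share the same CPU cores and disk I/O through their dataloader worker processes, so they can throttle each other during data loading. A minimal sketch of the relevant fragment of an MMSeg 1.x config override, with illustrative values chosen so that two concurrent jobs do not oversubscribe the CPU:

```python
# Fragment of a hypothetical MMSeg 1.x config (merged over a base config).
# Only the dataloader settings relevant to CPU contention are shown; the
# batch size and worker count are illustrative values, not recommendations.
train_dataloader = dict(
    batch_size=4,
    num_workers=2,            # fewer workers per job when two jobs share one machine
    persistent_workers=True,  # avoid re-spawning workers every epoch
)
```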
