Additional GPU memory usage in the first GPU #114
Hello, this is definitely not normal behavior and I am investigating a report from someone else. Are you sure you have …
@chengxuz Can you show me the output of nvidia-smi while this is running?
Here is the output from nvidia-smi. I have just confirmed that I have run …
This is definitely not normal. I can reproduce right now with 2 GPUs. However, for me it's GPU1 that has two processes associated with it. It shouldn't take long to fix now that I can reproduce. Thank you for confirming what I suspected!
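For anyone trying to confirm the same behaviour, one way to see which process is holding memory on which GPU is nvidia-smi's per-process query. The snippet below is a generic, hypothetical helper, not part of this project; it only assumes `nvidia-smi` is on the PATH.

```python
# Hypothetical helper (not this project's code): lists which process holds
# memory on which GPU. Assumes nvidia-smi is available on the PATH.
import subprocess


def compute_apps_per_gpu() -> str:
    # --query-compute-apps reports one row per (GPU, process) pair, so a
    # worker that also opened a context on GPU 0 appears twice in the output.
    return subprocess.check_output(
        [
            "nvidia-smi",
            "--query-compute-apps=gpu_uuid,pid,process_name,used_gpu_memory",
            "--format=csv",
        ],
        text=True,
    )


if __name__ == "__main__":
    print(compute_apps_per_gpu())
```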
Hello! Thanks for the report. It should land in v0.0.4. I might deploy a release candidate tonight. You can otherwise install directly from GitHub (branch …
When training one network on multiple GPUs, I find that the first GPU ends up with some memory used by the processes running on the other GPUs. Is there some way to avoid this? It is an issue because the first GPU then always uses more memory than the others, so the other GPUs have to leave some memory unused to make that possible.
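One common cause of this pattern (not confirmed to be what is happening here) is that each worker process creates a CUDA context on cuda:0, either by running a CUDA operation before pinning itself to its own device or by loading a checkpoint with the default map_location. Below is a minimal, hypothetical PyTorch sketch of a process-per-GPU setup that avoids this by calling `torch.cuda.set_device` before any CUDA work; the `worker` function, the rendezvous address, and the model are placeholders, not this project's code.

```python
import torch
import torch.distributed as dist


def worker(local_rank: int, world_size: int) -> None:
    # Pin this process to its own GPU *before* any CUDA call. Without this,
    # the first CUDA operation (or a torch.load with the default map_location)
    # creates an extra context on cuda:0, which shows up as the additional
    # memory on the first GPU in nvidia-smi.
    torch.cuda.set_device(local_rank)

    dist.init_process_group(
        backend="nccl",
        init_method="tcp://127.0.0.1:23456",  # placeholder rendezvous address
        rank=local_rank,
        world_size=world_size,
    )

    device = torch.device("cuda", local_rank)
    model = torch.nn.Linear(128, 128).to(device)  # stand-in for the real model
    _ = model(torch.randn(4, 128, device=device))  # dummy forward pass

    # If restoring a checkpoint, map it to the local device explicitly so the
    # tensors are not first materialized on cuda:0:
    #   state = torch.load("ckpt.pt", map_location=device)

    dist.destroy_process_group()


if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    torch.multiprocessing.spawn(worker, args=(n_gpus,), nprocs=n_gpus)
```

An alternative with the same effect is to set CUDA_VISIBLE_DEVICES per worker so each process can only see its own GPU.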