
I have two identical GPUs in my machine, but when fine-tuning it seems only one of them is used #4967

Closed
abner2015 opened this issue Nov 11, 2016 · 3 comments


@abner2015

I have two identical GPUs in my machine, but when fine-tuning resnet56 it seems only one of them is used.
I have tried both of these commands:

$ caffe train -solver solver.prototxt -weights restnet56.caffemodel -gpu all
$ caffe train -solver solver.prototxt -weights restnet56.caffemodel -gpu 0,1

Watching the output of:

$ nvidia-smi

it shows GPU 0 using 89% of its memory while GPU 1 stays at 0%. Eventually training fails with the error:

Check failed: error == cudaSuccess (2 vs. 0) out of memory

Thanks for your help!

@Luonic

Luonic commented Nov 28, 2016

Check failed: error == cudaSuccess (2 vs. 0) out of memory

This error means the GPU ran out of memory. Reduce the batch size.
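For reference, the batch size is set in the data layer of the train net prototxt. A minimal sketch of where to change it (the layer name and source path below are placeholders, not taken from this issue):

```protobuf
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  data_param {
    source: "train_lmdb"   # placeholder path
    batch_size: 16         # lower this value if you hit out-of-memory
    backend: LMDB
  }
}
```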

@abner2015
Author

I reduced the batch size to 1, but it did not help; still only one GPU is working.

@shelhamer
Member

Testing currently only makes use of a single GPU while training can parallelize over multiple GPUs. See the new parallelism in #4563.
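One side effect of this parallelism worth keeping in mind: with Caffe's multi-GPU data parallelism, each GPU processes a full batch of the size given in the prototxt, so the effective batch size scales with the number of GPUs. A quick sketch (the numbers are example values, not from this issue):

```python
# Each GPU runs the per-GPU batch from the prototxt in parallel,
# so the effective batch per iteration is multiplied by the GPU count.
per_gpu_batch = 16          # batch_size in the train net prototxt (example)
num_gpus = 2                # e.g. -gpu 0,1
effective_batch = per_gpu_batch * num_gpus
print(effective_batch)      # 32
```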
