Torch error when testing a model trained with multiple GPUs #736
@gheinrich Would #732 fix this?
gheinrich added a commit to gheinrich/DIGITS that referenced this issue on May 17, 2016:
Datapoints:
MNIST + LeNet (30 epochs): 1 GPU: 56s; 2 GPUs: 2m51s (not unexpected due to communication overhead)
Upscaled CIFAR + Alexnet (10 epochs): 1 GPU: 13m11s; 2 GPUs: 13m7s
Upscaled CIFAR + Googlenet (2 epochs): 1 GPU: 16m20s; 2 GPUs: 11m13s
Fix NVIDIA#736
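For context, multi-GPU training in Torch is typically done by wrapping the network in nn.DataParallelTable, and that wrapper is what ends up in the saved snapshot and later trips up single-GPU testing. A minimal Lua/Torch sketch of that wrapping (illustrative only; the network and the names net/nGPU are assumptions, not DIGITS' actual wrapper code):

```lua
-- Illustrative sketch: how a Torch network is usually wrapped for multi-GPU
-- training. Names (net, nGPU) are assumptions, not DIGITS' actual code.
require 'nn'
require 'cunn'
require 'cutorch'

local nGPU = cutorch.getDeviceCount()

-- hypothetical single-GPU network (e.g. a small LeNet-style net)
local net = nn.Sequential()
net:add(nn.SpatialConvolution(1, 20, 5, 5))
net:add(nn.ReLU(true))
net:cuda()

if nGPU > 1 then
   -- replicate the network on GPUs 1..nGPU and split each minibatch
   -- along dimension 1 (the batch dimension)
   local dpt = nn.DataParallelTable(1)
   dpt:add(net, torch.range(1, nGPU):totable())
   net = dpt:cuda()
end
```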
Thanks for the bug report, I have updated the commit on #734 to fix this (with the new programming model we also need to set the number of GPUs when we deserialize a model for inference or fine-tuning).
^ I think you meant #732?
Whoops. Indeed!
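In other words, a snapshot saved from multi-GPU training contains an nn.DataParallelTable rather than the bare network, so the test-time code has to account for that (the actual fix in #732 sets the number of GPUs at deserialization time). A rough Lua/Torch sketch of the underlying idea, unwrapping the parallel container for single-GPU inference; the snapshot name and layout here are assumptions, not the DIGITS patch:

```lua
-- Rough sketch of loading a multi-GPU snapshot for single-GPU inference.
-- 'weightsFile' is a placeholder; the snapshot layout is an assumption.
require 'nn'
require 'cunn'
require 'cutorch'

local weightsFile = 'snapshot_30_Model.t7'  -- hypothetical snapshot name
local model = torch.load(weightsFile)

-- a model trained on multiple GPUs is wrapped in nn.DataParallelTable;
-- unwrap the inner module instead of assuming the same GPU count at test time
if torch.type(model) == 'nn.DataParallelTable' then
   model = model:get(1)
end

model:cuda()
model:evaluate()
```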
SlipknotTN pushed a commit to cynnyx/DIGITS that referenced this issue on Mar 30, 2017 (same commit message as above).