With 11 GB of memory, still "Check failed: error == cudaSuccess (2 vs. 0) out of memory" #1
I have a K40 with more than 11 GB of memory, but when I run demo_LocNet_object_detection_pipeline it fails with "Check failed: error == cudaSuccess (2 vs. 0) out of memory". I thought 11 GB would be enough, since the README only requires 6 GB. Why is that?

Comments
Did you build Caffe with the cuDNN library? I think that without it Caffe uses much more GPU memory when applying the convolutional layers, which is probably why you run out of GPU memory. Could you check it out?
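For reference, Caffe is built with cuDNN support when the `USE_CUDNN := 1` line is uncommented in Makefile.config (or when the USE_CUDNN option is enabled in the CMake build) before compiling.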
@gidariss Yes, I did compile with cuDNN. I noticed that after running the first network (rec) it used 6 GB of memory, and when running the second network the error showed up. Do I need to free GPU memory after the first network? How?
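In case it is needed, a minimal sketch of how GPU memory held by Caffe can be released from MATLAB, using the standard matcaffe call `caffe.reset_all()`:

```matlab
% Minimal sketch using the standard matcaffe interface.
% caffe.reset_all() clears every net and solver that MATLAB is currently
% holding, which releases the GPU memory they occupy; any network needed
% afterwards has to be created again with caffe.Net(...).
caffe.reset_all();
```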
@litingfeng No, you do not need to free GPU memory after the first network. What you can do instead is change lines 90 and 91 of the demo_LocNet_object_detection_pipeline.m script. I just tried it and I managed to run the demo on a 6 GB GPU. Could you try it as well and let me know? Spyros
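The exact contents of those two lines are not quoted above, so the following is only a rough sketch of the kind of edit being described: reducing how many candidate boxes are pushed through the GPU at once. The field name `max_rois_num_in_gpu` and the values are assumptions for illustration, not the author's actual code.

```matlab
% Rough sketch only: the field name and the numbers are assumptions, not
% the real contents of lines 90-91 of demo_LocNet_object_detection_pipeline.m.
% The idea is to process fewer candidate boxes per GPU batch, which lowers
% the peak GPU memory used by the recognition and localization networks.

% hypothetical original values:
% model_obj_rec.max_rois_num_in_gpu = 1000;
% model_obj_loc.max_rois_num_in_gpu = 1000;

% smaller per-batch box counts -> lower peak GPU memory, somewhat slower run
model_obj_rec.max_rois_num_in_gpu = 100;   % hypothetical field name
model_obj_loc.max_rois_num_in_gpu = 100;   % hypothetical field name
```

Whatever the actual parameter is, the trade-off is the same: fewer boxes per forward pass means a smaller peak memory footprint at the cost of a longer run time.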
@gidariss I pulled a fresh copy with git, but it still does not work. I even tried 50 and 100; all of them run out of memory. While it was running, I also checked the GPU usage.
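Whichever tool was used above, a common way to watch GPU memory from inside a MATLAB session is to shell out to nvidia-smi:

```matlab
% Print the GPU utilisation and memory usage reported by the NVIDIA driver.
% nvidia-smi is the standard NVIDIA command-line tool; system() runs it from
% within MATLAB and captures its console output.
[status, gpuReport] = system('nvidia-smi');
disp(gpuReport);
```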
Later today, I tried …

It seems that in the demo, you did not …
@litingfeng Regarding the … As I said, I did not have any problem running the demo on a 6 GB GPU, so it is strange that in your case it cannot run on an 11 GB GPU.