
Check failed: status == CUDNN_STATUS_SUCCESS (4 vs. 0) CUDNN_STATUS_INTERNAL_ERROR #3

Closed
houwenbo87 opened this issue May 15, 2017 · 18 comments


@houwenbo87

Hi, shicai

I want to use 'caffe time' to evaluate network computing time, but I encountered the same problem on different GPUs:
Check failed: status == CUDNN_STATUS_SUCCESS (4 vs. 0) CUDNN_STATUS_INTERNAL_ERROR.

Do you know how to solve this problem, and have you evaluated the network performance on different GPUs?

Thank you!
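
For reference, 'caffe time' is normally invoked along these lines; the model path and iteration count below are placeholders rather than the exact command used here:

./build/tools/caffe time -model models/mobilenet_deploy.prototxt -gpu 0 -iterations 50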

@shicai
Owner

shicai commented May 15, 2017

Please uncomment engine: CAFFE in the conv layers that use group.
Do not use the default cuDNN engine.
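
Once that line is uncommented, each depthwise (grouped) conv layer in the deploy prototxt looks roughly like the sketch below; the layer name and sizes are illustrative rather than copied from the repo:

layer {
  name: "conv2_1/dw"
  type: "Convolution"
  bottom: "conv1"
  top: "conv2_1/dw"
  convolution_param {
    num_output: 32
    kernel_size: 3
    stride: 1
    pad: 1
    group: 32      # depthwise: one filter group per input channel
    engine: CAFFE  # bypass cuDNN for this grouped conv layer
  }
}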

@Zehaos

Zehaos commented May 15, 2017

I benchmarked the inference time on my laptop in CPU mode; the time cost is almost double that of TensorFlow. It seems that Caffe's implementation of depthwise conv is not very efficient (while the backward pass is faster than TensorFlow's).

@shicai
Owner

shicai commented May 16, 2017

Caffe uses group (it is actually a for-loop) to implement channel-wise conv, while TensorFlow uses a specialized implementation.

@siddharthm83

Do I need to recompile caffe without cudnn for inference?

@shicai
Owner

shicai commented May 26, 2017

@siddharthm83 no.

@qingzew

qingzew commented Jul 4, 2017

@shicai Why can't MobileNet run with cuDNN? Do you have any idea? In my case, the memory usage keeps increasing until it runs out of memory.

@wjxiz1992

@qingzew This may be a bug in cuDNN rather than in Caffe? I'm not sure, but in Caffe, if you use engine: CAFFE in GPU mode, MobileNet is not "mobile" any more... I've seen some implementations of depthwise conv on GitHub; you can search "depthwise" to check them out.

@qingzew

qingzew commented Sep 1, 2017

@wjxiz1992 thank you

@allanpk716

@shicai How can I avoid using cuDNN when my Caffe is compiled with cuDNN?

@shicai
Owner

shicai commented Sep 27, 2017

please refer to readme.md

@hana9090

@shicai
I ran Faster R-CNN and I got this error.
Where should I uncomment this engine? Which file?

@gargvikram07

Where should I uncomment this engine? Which file?

@hana9090

@gargvikram07 The error disappears when I use sudo before running the Python file. When I searched for this error, many people suggested that you don't have admin privileges to use the engine.

@lishaofeng

@hana9090 It works for me when I use sudo ./tools/demo.py

@tolotrasamuel

@shicai Could you please explain where I should uncomment this engine? Which file?

@shicai
Owner

shicai commented Aug 3, 2018

mobilenet_deploy.prototxt
Just search for engine in the file.

@shenyingying

@shicai
Good job, and thanks for sharing. But I have another question: when I run a Caffe forward pass with CUDA 10.01, an RTX 2080, and cuDNN 7.5, it still throws this error:

I0731 16:55:33.251256 31878 layer_factory.hpp:77] Creating layer inception_3a/pool
I0731 16:55:33.251263 31878 net.cpp:120] Creating Layer inception_3a/pool
I0731 16:55:33.251266 31878 net.cpp:442] inception_3a/pool <- pool2/3x3_s2_pool2/3x3_s2_0_split_3
I0731 16:55:33.251271 31878 net.cpp:416] inception_3a/pool -> inception_3a/pool
F0731 16:55:33.252310 31878 cudnn_pooling_layer.cpp:12] Check failed: status == CUDNN_STATUS_SUCCESS (4 vs. 0) CUDNN_STATUS_INTERNAL_ERROR

But I can't find engine in model.prototxt.
Thanks for your reply.
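
If the failing layer is a pooling layer, as in the log above, Caffe's PoolingParameter also has an engine field, so the same workaround applies: add engine: CAFFE to that layer by hand. A sketch, with kernel, stride, and pad guessed from a typical GoogLeNet prototxt rather than taken from this model:

layer {
  name: "inception_3a/pool"
  type: "Pooling"
  bottom: "pool2/3x3_s2"
  top: "inception_3a/pool"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 1
    pad: 1
    engine: CAFFE  # bypass cuDNN for this pooling layer
  }
}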

@yuxwind

yuxwind commented Feb 28, 2020

Actually, I was out of GPU memory. After killing some other applications, the error was fixed.
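
For anyone landing here later: CUDNN_STATUS_INTERNAL_ERROR is often just an out-of-memory symptom, so it is worth checking GPU memory before editing any prototxt, for example:

nvidia-smi              # check per-process GPU memory usage
watch -n 1 nvidia-smi   # or monitor it while the job starts up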
