cuda error #3
I suspect the problem lies in the differences between our machine environments, so the caffe mex file in the repo may not be compatible with your computer. Could you try building this caffe version on your machine and replacing the mex file in the repo with your own build?
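For reference, rebuilding the MATLAB interface and swapping in the new mex might look like the commands below. The checkout paths are assumptions based on a typical Caffe source tree and this repo's layout; adjust them to your setup, and make sure Makefile.config already points at your MATLAB install.

```shell
# Build Caffe's MATLAB interface from the source tree you compiled
cd ~/caffe-faster-rcnn          # assumed path to your caffe checkout
make clean && make -j8 && make matcaffe

# Replace the prebuilt mex shipped in the repo with your own build
cp matlab/caffe/caffe_.mexa64 \
   ~/RPN_BF/external/caffe/matlab/caffe_faster_rcnn/   # assumed repo path
```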
@zhangliliang I have copied the caffe_.mexa64 built from the caffe you mentioned and replaced the one in external/caffe/matlab/caffe_faster_rcnn; however, the same error occurs.
What are your CUDA and MATLAB versions? Did you successfully run the test demo?
@zhangliliang My CUDA version is 7.5 and my MATLAB version is 2013b. I did not run the test demo successfully, because I cannot load the VGG net, which requires too much memory.
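Before retrying, it may help to confirm how much free GPU memory is actually available. One way to check, assuming a standard CUDA driver install, is:

```shell
# Report each GPU's name plus total and free memory
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```

VGG16 testing alone typically needs several GB of GPU memory, so if the free amount is well below the total, closing other GPU processes first may be enough.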
Training the RPN needs more memory than testing, so it is better to run the test demo to check correctness before running the training code. You could try a GPU with more memory, or enable cuDNN when compiling caffe, since that saves a lot of memory.
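Enabling cuDNN is a one-line change in Caffe's Makefile.config before rebuilding; this is the stock Caffe build option, not something specific to this repo (cuDNN itself must already be installed alongside your CUDA toolkit):

```make
# In Makefile.config, uncomment this line, then rerun `make` and `make matcaffe`:
USE_CUDNN := 1
```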
@zhangliliang I ran script_rpn_bf_pedestrian_VGG16_caltech_demo.m; however, the error message is:
@MisayaZ |
When I run script_rpn_pedestrian_VGG16_caltech.m, I get this error:
I0905 13:10:57.821876 2060 net.cpp:194] relu_proposal1 does not need backward computation.
I0905 13:10:57.821879 2060 net.cpp:194] conv_proposal1 does not need backward computation.
I0905 13:10:57.821884 2060 net.cpp:194] relu5 does not need backward computation.
I0905 13:10:57.821888 2060 net.cpp:194] conv5 does not need backward computation.
I0905 13:10:57.821893 2060 net.cpp:194] relu4 does not need backward computation.
I0905 13:10:57.821899 2060 net.cpp:194] conv4 does not need backward computation.
I0905 13:10:57.821905 2060 net.cpp:194] relu3 does not need backward computation.
I0905 13:10:57.821910 2060 net.cpp:194] conv3 does not need backward computation.
I0905 13:10:57.821916 2060 net.cpp:194] pool2 does not need backward computation.
I0905 13:10:57.821923 2060 net.cpp:194] norm2 does not need backward computation.
I0905 13:10:57.821928 2060 net.cpp:194] relu2 does not need backward computation.
I0905 13:10:57.821933 2060 net.cpp:194] conv2 does not need backward computation.
I0905 13:10:57.821939 2060 net.cpp:194] pool1 does not need backward computation.
I0905 13:10:57.821944 2060 net.cpp:194] norm1 does not need backward computation.
I0905 13:10:57.821950 2060 net.cpp:194] relu1 does not need backward computation.
I0905 13:10:57.821955 2060 net.cpp:194] conv1 does not need backward computation.
I0905 13:10:57.821960 2060 net.cpp:235] This network produces output proposal_bbox_pred
I0905 13:10:57.821965 2060 net.cpp:235] This network produces output proposal_cls_prob
I0905 13:10:57.821980 2060 net.cpp:492] Collecting Learning Rate and Weight Decay.
I0905 13:10:57.821990 2060 net.cpp:247] Network initialization done.
I0905 13:10:57.821995 2060 net.cpp:248] Memory required for data: 21358056
F0905 13:10:57.868927 2060 relu_layer.cu:27] Check failed: error == cudaSuccess (18 vs. 0) invalid texture reference
*** Check failure stack trace: ***
Killed