ERROR : Run demo.py --gpu 0 #51
Comments
@jhj7905 Could you please attach the full error report?
@Cysu I attached the error report below, thank you.

I0830 11:22:04.826299 8202 net.cpp:1272] blob 169 name det_score diff idx -1
@jhj7905 I have checked the cudnn v5.1 manual and found no mention of this error. Could you please check the output of the following command:

ldd caffe/build/install/bin/caffe | grep cudnn

Also note that cudnn v5.1 has cuda-7.5 and cuda-8.0 versions. Please make sure the correct version is installed and linked.
@Cysu There is no such file or directory when I use ldd caffe/build/install/bin/caffe | grep cudnn.
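For reference, the linkage check above can be wrapped in a small guarded script. This is only a sketch: the binary path below is the one mentioned in this thread and is an assumption about your build layout.

```shell
#!/bin/sh
# Assumed location of the compiled caffe binary from this thread's build.
CAFFE_BIN=caffe/build/install/bin/caffe

if [ -x "$CAFFE_BIN" ]; then
    # Show only the cudnn entries among the binary's shared-library deps.
    ldd "$CAFFE_BIN" | grep cudnn
else
    echo "caffe binary not found at $CAFFE_BIN (build may have failed)"
fi
```

If the grep prints nothing even though the binary exists, caffe was built without cudnn linked in.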
@jhj7905 How did you compile caffe? Did you follow the cmake commands listed in our README?
@Cysu Can you show me your CMakeLists.txt from when you compiled caffe?
@Cysu I recompiled caffe by modifying the CMakeLists.txt.
@jhj7905 I didn't modify the CMakeLists.txt. Usually it is configured through command line parameters, like the one we showed in the README:

cmake .. -DUSE_MPI=ON -DCUDNN_INCLUDE=/path/to/cudnn/include -DCUDNN_LIBRARY=/path/to/cudnn/lib64/libcudnn.so

What I mean is: please make sure the cudnn it links to is built for cuda-8.0, not cuda-7.5.
@Cysu Do you mean that it is not correct (there is libcudnn.so.5 when I use ldd caffe/build/install/bin/caffe | grep cudnn)? I installed cuda-8.0 and cudnn-5.1, but I still get the same result as below. Could you tell me how to solve the problem in detail?
The libcudnn.so.5 normally links to libcudnn.so.5.1.10. Could you please check the file size of libcudnn.so.5.1.10? It should be 84163560 bytes. If not, the version is probably not correct.
@Cysu I checked the file size of libcudnn.so.5.1.10, like below.
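As a sketch, the size check can be scripted. The library path below assumes a standard /usr/local/cuda install, and the 84163560-byte figure is the one quoted above for cudnn v5.1.10:

```shell
#!/bin/sh
# Assumed install location; adjust if cudnn lives elsewhere.
LIB=/usr/local/cuda/lib64/libcudnn.so.5.1.10
EXPECTED=84163560   # expected size (bytes) for cudnn v5.1.10, per this thread

if [ -f "$LIB" ]; then
    ACTUAL=$(stat -c%s "$LIB")   # GNU stat: print size in bytes
    if [ "$ACTUAL" -eq "$EXPECTED" ]; then
        echo "cudnn size OK ($ACTUAL bytes)"
    else
        echo "cudnn size mismatch: got $ACTUAL, expected $EXPECTED"
    fi
else
    echo "libcudnn not found at $LIB"
fi
```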
Alright, that is correct. I wonder if it is due to running out of memory? Could you please check the GPU memory consumption?
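One common way to check GPU memory consumption (likely what was meant here, though the original comment is truncated) is nvidia-smi:

```shell
#!/bin/sh
# Query per-GPU memory usage; assumes the nvidia-smi CLI from the
# NVIDIA driver is on PATH (it is not part of this repository).
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv
else
    echo "nvidia-smi not available on this machine"
fi
```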
@Cysu I checked the memory consumption like below. By the way, I did not install openmpi because of the python termination issue.
@jhj7905 It seems that GPU 0 is almost fully occupied (9GB / 12GB). You may try to set --gpu 1 to use the other GPU.

To install openmpi, please download the source from here, then:

tar xf openmpi-1.10.7.tar.gz
cd openmpi-1.10.7
./configure --with-cuda=/usr/local/cuda --enable-mpi-thread-multiple
make -j8
sudo make install
cd -

This will by default install it to /usr/local. Add the following to your shell startup file:

export PATH=/usr/local/bin:$PATH

Restart the terminal, remove the old build directory, and recompile.
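As a sanity check after installation (assuming the default /usr/local prefix mentioned above), one can verify the MPI runtime is visible before rebuilding caffe:

```shell
#!/bin/sh
# Assumes openmpi was installed to the default /usr/local prefix.
export PATH=/usr/local/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

if command -v mpirun >/dev/null 2>&1; then
    # Print the installed Open MPI version.
    mpirun --version
else
    echo "mpirun not found on PATH"
fi
```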
@Cysu I did it as you told me and set --gpu 1, but it still did not work.
@Cysu I have a problem when I build with MPI, like below:

jhj7905@ubuntu:~/person_search-master$ python tools/demo.py
@jhj7905 Oh, I forgot to mention that you may need to also add the following line to your shell startup file:

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

By the way, could you please also verify that
@jhj7905 My PC is Titan X Haswell, cuda-8, cudnn-v5.1. The reason why we have 8 output units for bboxes instead of 4 is that the original py-faster-rcnn implementation was for general object detection, where there could be, say, 20 object classes + 1 background class. Thus they have 21 bboxes in total, one for each class. We inherit this part of the code, so there are 2 bboxes: one for pedestrian and one for non-pedestrian. The one for non-pedestrian is just for simplicity and has no effect in practice.
@Cysu Sincerely, thank you for replying to my questions. In the above case, the output ends with "Report bugs to http://www.open-mpi.org/community/help/", and when I type 'from mpi4py import MPI', an error occurs. One more thing: could you tell me how to run it using multiple GPUs? You have given me a lot of support.
@jhj7905 You mean it's fine to run the demo with one GPU now? That's great. If you haven't installed the mpi4py package before, you can install it with:

pip install mpi4py

The demo is not for multi-gpu. We currently only have the evaluation script supporting multi-gpu. Sorry about the inconvenience.
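A quick guarded check of the mpi4py installation (a sketch; assumes `python` and `pip` are the same interpreter/environment used for the demo):

```shell
#!/bin/sh
# Try importing mpi4py; fall back to a hint if it is missing or broken.
if python -c "from mpi4py import MPI" >/dev/null 2>&1; then
    echo "mpi4py imports fine"
else
    echo "mpi4py not importable; try: pip install mpi4py"
fi
```

If the import fails even after installing, the MPI shared libraries are usually not on LD_LIBRARY_PATH.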
@Cysu I have a question about building the MPI: did you use OpenCL when building it?
@jhj7905 No, I didn't use OpenCL when building MPI. I used exactly the same commands as I listed above.
@liuajian How did you make it work? My cmake output starts with:

-- The C compiler identification is GNU 5.4.0
@Cysu @liuajian When I build caffe without cuDNN support by setting USE_CUDNN to OFF in the CMakeLists.txt, the build succeeds. Could you please tell me how to build it with cudnn support? Thanks very much.
Could you please check if there are any cudnn files under your cuda root (e.g. /usr/local/cuda)?
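A sketch of that check (assuming the common /usr/local/cuda root; override CUDA_ROOT if your install differs):

```shell
#!/bin/sh
# Assumed CUDA root; override CUDA_ROOT if yours differs.
CUDA_ROOT=${CUDA_ROOT:-/usr/local/cuda}

# Look for cudnn headers and libraries under the CUDA root.
ls "$CUDA_ROOT"/include/cudnn*.h "$CUDA_ROOT"/lib64/libcudnn* 2>/dev/null \
    || echo "no cudnn files found under $CUDA_ROOT"
```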
@Cysu I thought I could build with cudnn v5.1 support just by specifying the path in the cmake command. But it seems that the cudnn files in the cuda root also influence the build.
@XinshaoWang Great to know that! |
Spec: cuda-8.0, cudnn-5.1
The error below occurred:

cudnn_conv_layer.cu:33] Check failed: status == CUDNN_STATUS_SUCCESS (5 vs. 0) CUDNN_STATUS_INVALID_VALUE

How can I run demo.py on the GPU?