I've successfully built and now rebuilt Caffe with the following parameters:
<CpuOnlyBuild>false</CpuOnlyBuild>
<UseCuDNN>true</UseCuDNN>
<CudaVersion>7.5</CudaVersion>
<!-- NOTE: If Python support is enabled, PythonDir (below) needs to be
set to the root of your Python installation. If your Python installation
does not contain debug libraries, debug build will not work. -->
<PythonSupport>true</PythonSupport>
<!-- NOTE: If Matlab support is enabled, MatlabDir (below) needs to be
set to the root of your Matlab installation. -->
<MatlabSupport>true</MatlabSupport>
<CudaDependencies></CudaDependencies>
<!-- Set CUDA architecture suitable for your GPU.
Setting the proper architecture is important to minimize your compile and run time. -->
<CudaArchitecture>compute_30,sm_30;compute_35,sm_35;compute_50,sm_50</CudaArchitecture>
<!-- CuDNN 4 and 5 are supported -->
<CuDnnPath></CuDnnPath>
I left <CuDnnPath> empty, but I unpacked the downloaded cuDNN v4 zip into the %CUDA_PATH% location (in my case C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5).
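For reference, each compute_XX,sm_XX pair in the <CudaArchitecture> value corresponds to an nvcc -gencode flag. The setting above would roughly expand to the following (a sketch; the actual command line is generated by the Caffe Windows build scripts, and the remaining flags are elided):

```shell
# Illustrative expansion of the <CudaArchitecture> setting above;
# the real nvcc invocation is produced by the build system.
nvcc -gencode arch=compute_30,code=sm_30 ^
     -gencode arch=compute_35,code=sm_35 ^
     -gencode arch=compute_50,code=sm_50 ...
```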
I have not done any of the deep_image_analogy build steps:
Edit deep_image_analogy.vcxproj under windows/deep_image_analogy to make the CUDA version in it match yours.
Open solution Caffe and add deep_image_analogy project.
Build project deep_image_analogy.
When executing the provided pre-built executable file, I get the following error:
[libprotobuf WARNING ..\src\google\protobuf\io\coded_stream.cc:537] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING ..\src\google\protobuf\io\coded_stream.cc:78] The total number of bytes read was 574671192
[libprotobuf WARNING ..\src\google\protobuf\io\coded_stream.cc:537] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING ..\src\google\protobuf\io\coded_stream.cc:78] The total number of bytes read was 574671192
F1005 13:11:47.450531 6872 pooling_layer.cu:212] Check failed: error == cudaSuccess (8 vs. 0) invalid device function
*** Check failure stack trace: ***
I'm running CUDA 7.5, cuDNN 4, Visual Studio 2013, MATLAB R2014b, and Windows 10 with a GeForce GTX 770 GPU.
From what I've read, "invalid device function" indicates a CUDA / GPU architecture incompatibility.
The GeForce GTX 770 has compute capability 3.0 (https://developer.nvidia.com/cuda-gpus), so when building Caffe, setting <CudaArchitecture>compute_30,sm_30;compute_35,sm_35;compute_52,sm_52</CudaArchitecture> should be correct, no?
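To rule out a driver or toolkit mismatch, here is a minimal sketch (my own, not part of Caffe; the file name is hypothetical) that asks the CUDA runtime which compute capability the card reports. On a GTX 770 it should print 3.0:

```cuda
// check_cc.cu — print each CUDA device's compute capability.
// Build with: nvcc check_cc.cu -o check_cc
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // If this prints 3.0 for the GTX 770, a binary compiled only for
        // sm_35/sm_50+ would fail with "invalid device function".
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

Note that the pre-built executable you are running may have been compiled without sm_30, in which case rebuilding it yourself with compute_30,sm_30 included is the usual fix.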
Any info or assistance resolving this would be greatly appreciated.