Incompatible with current cuDNN 8.0.3? #6970
Comments
Apparently so. I have a project study hinging on its use, but the repo is stale. Windows works, by the way: CUDA 11, cuDNN 8.0.3.33.
I've got the same problem with Caffe and cuDNN version 8. As of version 8, NVIDIA has dropped cudnnGetConvolutionBackwardFilterAlgorithm. Because there is no replacement for it, I've followed the strategy of the PaddlePaddle framework: set the outcome to the constant CUDNN_CONVOLUTION_BWD_FILTER_ALGO_1 and reserve twice the memory found earlier with cudnnGetConvolutionForwardAlgorithm. I could request a merge in this repo, but since I'm not quite sure the solution will work in all cases, I decided to put it in our own GitHub repo first. If it turns out to work fine, I will merge. For now, please use that repo.
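Roughly, the change amounts to the following sketch (not the exact patch; the member names are the ones used in Caffe's cudnn_conv_layer.cpp, and the code sits in the per-layer setup loop):

```cpp
// cuDNN 8 removed cudnnGetConvolutionBackwardFilterAlgorithm, so the
// algorithm is no longer queried from cuDNN but pinned to a safe default.
bwd_filter_algo_[i] = CUDNN_CONVOLUTION_BWD_FILTER_ALGO_1;

// The workspace size is a heuristic guess: twice the workspace that was
// determined earlier for the forward pass of the same layer.
workspace_bwd_filter_sizes_[i] = 2 * workspace_fwd_sizes_[i];
```

Note that cudnnGetConvolutionBackwardFilterWorkspaceSize() still exists in cuDNN 8, so once the algorithm is pinned, the exact workspace requirement of ALGO_1 could in principle be queried instead of estimated.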
I have followed their tutorial and used their repo, but I'm having this issue: CXX src/caffe/layers/softmax_layer.cpp
First guess: you are missing a brace somewhere. The first error only occurs when a template declaration appears within a function, for instance because there is no closing brace before the declaration starts. The later 'mistakes' point in the same direction: the expected brace is missing there. Best to download the repo again.
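To illustrate the symptom (hypothetical code, not from the repository): the snippet below compiles, but removing the marked closing brace makes the compiler see the template as declared inside the preceding function, which produces errors of the kind quoted above (g++ typically reports something like "a template declaration cannot appear at block scope").

```cpp
// Hypothetical illustration of the missing-brace symptom; not repository code.
#include <iostream>

void previous_function() {
  std::cout << "doing work" << std::endl;
}  // <-- delete this brace and the template below ends up at block scope

// With the brace above removed, this template appears inside
// previous_function(), which C++ forbids, so the compiler rejects it.
template <typename Dtype>
void NextFunction(const Dtype* data) {
  std::cout << data[0] << std::endl;
}

int main() {
  const float values[] = {1.0f, 2.0f};
  previous_function();
  NextFunction(values);
  return 0;
}
```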
Hi @Qengineering, I am having the exact same issue as @astropiu. I followed your instructions and also cloned the latest version of your repo. It is very strange: I inspected cudnn_conv_layer.cpp myself, and the braces seem to be fine. I'm wondering if we should continue this discussion here, or perhaps open a new issue on your repo.
@mgomez0 You are more than welcome on my repo. I will review the code now and get back to you ASAP.
Solved the problem.
should be
@Qengineering Thanks for your Caffe patch! I have applied it, but I sometimes observe strange behavior: for some models memory usage is about twice as large as in a CUDA 10 / cuDNN 7 environment. Have you observed something like this?
Indeed, in certain situations the memory consumption is substantially larger than with cuDNN 7.
I see, thank you!
@Qengineering Thanks for your answer again! I agree about the backward pass, but as far as I can see the forward pass needs more memory too. I have tried a model with a single conv layer and a (20 * 3 * 1280 * 720) input; it is the "head" of a ResNet used for a detection task. With CUDA 10 and cuDNN 7.6 I observed about 1.7 GB of usage for a forward pass; with CUDA 11 and cuDNN 8, about 2.6 GB. Maybe this comparison is not entirely fair, because different GPUs were used: a Titan XP in the first case and a 3060 in the second.
In the forward pass I also had to make an educated guess about memory usage, as cudnnGetConvolutionForwardAlgorithm is missing in cuDNN 8 as well (see line 141 of src/caffe/layers/cudnn_conv_layer.cpp).
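For what it's worth, one way to avoid the guess on the forward side: cuDNN 8 still ships cudnnGetConvolutionForwardAlgorithm_v7, which returns a ranked list of cudnnConvolutionFwdAlgoPerf_t entries, each reporting the workspace it needs. A sketch, assuming the usual Caffe members (handle_, bottom_descs_, filter_desc_, conv_descs_, top_descs_, fwd_algo_, workspace_fwd_sizes_) and the local workspace_limit_bytes from Reshape() are in scope:

```cpp
// Sketch of a possible replacement for the removed cudnnGetConvolutionForwardAlgorithm
// call; assumes it runs inside the per-layer loop of CuDNNConvolutionLayer::Reshape.
int requested = CUDNN_CONVOLUTION_FWD_ALGO_COUNT;
int returned = 0;
cudnnConvolutionFwdAlgoPerf_t results[CUDNN_CONVOLUTION_FWD_ALGO_COUNT];

// The _v7 query still exists in cuDNN 8; it ranks the algorithms by expected
// speed and reports the workspace each one needs.
CUDNN_CHECK(cudnnGetConvolutionForwardAlgorithm_v7(
    handle_[0],
    bottom_descs_[i],   // input tensor descriptor
    filter_desc_,       // filter descriptor
    conv_descs_[i],     // convolution descriptor
    top_descs_[i],      // output tensor descriptor
    requested, &returned, results));

// Pick the fastest algorithm whose workspace fits the configured limit and
// use its reported memory requirement instead of a guessed value.
for (int k = 0; k < returned; ++k) {
  if (results[k].status == CUDNN_STATUS_SUCCESS &&
      results[k].memory <= workspace_limit_bytes) {
    fwd_algo_[i] = results[k].algo;
    workspace_fwd_sizes_[i] = results[k].memory;
    break;
  }
}
```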
Trying to build Caffe 1.0.0, but the build fails against cuDNN.
System configuration
Failed with the following ERROR message:
etc. a lot ...