You said "Do NOT uncomment USE_CUDNN := 1 (for running PVANET, cuDNN is slower than Caffe native implementation)". I want to know: does "slower" apply to both training and testing? I train my net without cuDNN and it is very slow.
Sorry for the confusion.
It's totally okay to uncomment USE_CUDNN if it makes your training faster.
We only meant that keeping 'USE_CUDNN' commented out ran faster in our computational environments.
I'll update the README.
FYI, for our published results, training a network took 7~14 days on a Titan X or GTX 1080.
I think testing could also be made faster with cuDNN, although I haven't had a chance to try it on PVANET. The performance issue is that cuDNN has poor implementations for certain convolutions (e.g. 1x1 convolutions with stride=1, which are used a couple of times in PVANET). You can still compile Caffe with cuDNN and put `engine: CAFFE` under `convolution_param` in those layers.
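As a sketch of that per-layer override, a 1x1 convolution layer in the network prototxt might look like the following (layer names, blob names, and `num_output` are illustrative, not taken from the actual PVANET model files):

```protobuf
layer {
  name: "conv_1x1"        # hypothetical layer name
  type: "Convolution"
  bottom: "input_blob"    # hypothetical input blob
  top: "conv_1x1"
  convolution_param {
    num_output: 64        # illustrative channel count
    kernel_size: 1
    stride: 1
    engine: CAFFE         # force the native Caffe implementation for this
                          # layer even when Caffe is built with USE_CUDNN
  }
}
```

With this, the rest of the network still uses cuDNN where it helps, while the 1x1 stride-1 convolutions fall back to the native Caffe path.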