(Single) GPU support for custom Python layers? #5286
Have you looked at this:

Since this indicates that the answer is "not yet", I'm closing the issue. I'm posting both the question and my answer on stackoverflow.com here. I think it is nice to link to emails referenced on caffe-users, so one of the unanswered questions is here:
I'm not sure if you are asking about the availability of PythonLayer-on-GPU. If so, there might be difficulties in implementation. AFAIK, GPU-accelerated procedures cannot make sys-calls. Since Caffe is a CNN framework rather than a library (it calls your code instead of being called by it), checking for sys-calls would be Caffe's responsibility. So it seems that another Python project, one able to check whether scripts contain sys-calls, would have to be made. Currently, Caffe uses Boost to communicate with Python scripts, and NVCC does not support compiling Python.
Sorry for opening this again; I am trying to understand what is actually going on.
Caffe does support GPU-CPU exchange. There is only a performance hit for transferring data between GPU and CPU layers, running the Python layer on the CPU, and so on. I'm guessing there is an error in your model definition.
Jonathan
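One common model-definition slip with Python loss layers, and a possible cause of the behavior described in the quoted message below, is the spelling of the loss-weight field: the prototxt field is loss_weight, not weight_loss. A minimal sketch of declaring the pyloss.py Euclidean layer (blob and layer names here are illustrative):

```protobuf
layer {
  name: "loss"
  type: "Python"
  bottom: "pred"        # illustrative blob names
  bottom: "label"
  top: "loss"
  python_param {
    module: "pyloss"             # pyloss.py must be importable (PYTHONPATH)
    layer: "EuclideanLossLayer"
  }
  # the field is loss_weight, not weight_loss; without a non-zero
  # loss_weight the layer's output does not contribute to the loss
  loss_weight: 1
}
```

If gradients still do not flow, setting force_backward: true at the net level of the prototxt is a commonly suggested workaround for Python loss layers.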
On Fri, Jun 9, 2017 at 10:40 AM, Vassilis Lemonidis wrote:
Sorry for opening this again, if anyone is interested.
In other words, if I am going to train a network with a Python layer, let's say a custom loss, will it not work with NCCL? What is the expected behavior? I tried the Euclidean loss layer implemented here <https://github.com/BVLC/caffe/blob/master/examples/pycaffe/layers/pyloss.py>, but after setting weight_loss:1 in the prototxt, the loss layer does not seem to work. I tried a different layer, a cosine distance loss, and again it does not produce the wanted result. The network I tested it on works with other C++ loss layers. Is there no mechanism for GPU-CPU interchange when one of them cannot process a layer? Do I have to remove GPU participation from the network completely to make it work?
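The EuclideanLossLayer referenced above runs entirely on the host: its forward and backward work on NumPy arrays. Below is a self-contained sketch in the spirit of examples/pycaffe/layers/pyloss.py; the Blob class is a hypothetical stand-in for Caffe's blob, and in real use the layer class would subclass caffe.Layer and receive real blobs:

```python
import numpy as np

class Blob:
    """Hypothetical stand-in for a caffe blob; real layers receive real blobs."""
    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float32)
        self.diff = np.zeros_like(self.data)
        self.num = self.data.shape[0]          # batch size, as in caffe
    def reshape(self, *shape):
        self.data = np.zeros(shape, dtype=np.float32)
        self.diff = np.zeros_like(self.data)

class EuclideanLossLayer:  # in real use: class EuclideanLossLayer(caffe.Layer)
    """CPU-only Euclidean loss in the spirit of pyloss.py."""
    def setup(self, bottom, top):
        if len(bottom) != 2:
            raise Exception("need two inputs to compute distance")
    def reshape(self, bottom, top):
        if bottom[0].data.shape != bottom[1].data.shape:
            raise Exception("inputs must have the same dimension")
        self.diff = np.zeros_like(bottom[0].data)
        top[0].reshape(1)                      # loss output is a scalar
    def forward(self, bottom, top):
        # bottom[*].data are NumPy arrays in host memory; everything here
        # runs on the CPU even when the rest of the net runs on the GPU
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff ** 2) / bottom[0].num / 2.0
    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            sign = 1 if i == 0 else -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num

# Host-side check: ||(1,2,3) - (0,0,0)||^2 / (2 * 1) = 14 / 2 = 7
layer = EuclideanLossLayer()
bottom = [Blob([[1.0, 2.0, 3.0]]), Blob([[0.0, 0.0, 0.0]])]
top = [Blob([0.0])]
layer.setup(bottom, top)
layer.reshape(bottom, top)
layer.forward(bottom, top)
print(float(top[0].data[0]))   # 7.0
```

In a real net this class would be loaded through a python_param block in the prototxt; the point of the sketch is that none of this code is handed to CUDA, which is why the thread concludes the GPU cannot run it directly.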
This question has been asked on caffe-users several times over the past months, but nobody there posted an answer, so please bear with me.
Is it possible to use even a single GPU with custom Python layers, by supplying a CUDA implementation of the forward and backward methods as in the C++ layers? I've seen issues regarding "multi-GPU support in Python", so I guess it must be possible. If so, is there any sort of guideline or example of how to "link" the Python layer with the CUDA implementations?