Moving to theano.tensor.nnet.abstract_conv2d.conv2d #508
I am wary of making the switch before Theano itself switches this to be the default, to be honest. Is there any info about the performance improvements anywhere? The PR you linked just adds the optimizations to swap in the corrMM implementation, but I don't see anything about performance there. We should probably set up a game plan for this, as it's a bit more complicated than just updating the `conv2d` call.
We'll also have to transition the 1D layers, and maybe this would be a good time to add a proper 3D convolutional layer as well.
Ah, I got the performance numbers from this other issue. While it's true that they haven't made it the default yet, the fact that it's been merged should at least inspire some confidence, no?
I see! I guess OpenMP only affects CPU performance, but still, this would probably be useful to have. We can at least start planning the switch.
But why should we use
Theano doesn't have a proper conv3d equivalent yet, though. There are several implementations, but no common interface and automatic replacements. The best bet for now is to add a Conv3DDNNLayer and Pool3DDNNLayer.
Yeah, I guess you're right. It would be nice to have a single unified interface for convolutions. At least
Theano switched to using the placeholder convolutions by default now (Theano/Theano#3721 and Theano/Theano#3729), so we can close this.
Excellent! Maybe we should set up a new issue though, to discuss how we plan to revamp our current convolutional layer implementations in response to this. I imagine we can get rid of some of the padding logic, and maybe we can even eliminate
@benanne: ah, that would be great. Having this at the top of all my networks is cumbersome:

```python
import theano
import lasagne.layers

Conv2DLayer = lasagne.layers.Conv2DLayer
MaxPool2DLayer = lasagne.layers.MaxPool2DLayer
if theano.config.device.startswith("gpu"):
    import lasagne.layers.dnn
    # Force the cuDNN implementations if a GPU is available.
    if theano.sandbox.cuda.dnn.dnn_available():
        Conv2DLayer = lasagne.layers.dnn.Conv2DDNNLayer
        MaxPool2DLayer = lasagne.layers.dnn.MaxPool2DDNNLayer
```
Yes, I thought so as well, but didn't have the time. Now: #524. @jonhoo: you can get rid of this logic right now when you update to the latest Theano master. Using a
Theano/Theano#3618 was recently merged, bringing some substantial performance improvements to conv2d. However, the new implementation is only used if theano.tensor.nnet.abstract_conv2d.conv2d is used. According to this mailing list post, the default convolution may also change to abstract_conv2d soon. Any chance of Lasagne making the change?