Minor refactoring of specialized kernel size logic for convolution/correlation kernels #5
Conversation
I like this. So much cleaner.
Yeah, definitely an improvement :-) Is the code tested? Can I just go ahead and merge?
It compiles, but I'm not sure how to test it - any advice? My only CUDA-enabled hardware is my laptop's NVIDIA GT 650M.
You could run the tests here: https://github.com/torch/cunn/tree/master/test
I'm having issues getting the tests running - it appears to be an interaction between the NVIDIA CUDA libraries and OS X Mavericks. I'll try again tomorrow, but I'll also try spinning up a Linux GPU box. If @soumith or @clementfarabet have the time and inclination, it may be quicker for you to patch this and see if the tests pass on your setup. It's a mechanical change, which makes it hard to introduce bugs that don't manifest at compile time, but it's definitely worth checking.
Tests passed:
[test output omitted]
Awesome! Thanks.
Did you want any further testing, etc.?
@andresy @clementfarabet, are we waiting on this?
OK, just merged it. I'm going to start using it now.
Thanks!
There is a significant amount of code duplicating the logic of dispatching to a size-specialized implementation based on the kernel size. We can instead create a macro that encapsulates this logic. I think this refactoring slightly improves the readability of the code by removing the heavy duplication. It also reduces the file size by ~30%.
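For illustration, here is a minimal sketch of what such a macro-based dispatch could look like. This is not the actual torch/cunn code; all names (`conv2d_valid`, `DISPATCH_KERNEL_WIDTH`, `launch_conv2d`) and the particular set of specialized sizes are hypothetical:

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel templated on a compile-time kernel width KW; by
// convention in this sketch, KW == 0 stands in for the generic
// (runtime-width) fallback path.
template <int KW>
__global__ void conv2d_valid(const float* input, const float* weight,
                             float* output, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    // Placeholder body; a real kernel would convolve using KW (or a
    // runtime width when KW == 0).
    output[i] = input[i] * weight[0];
  }
}

// The duplicated switch over kernel sizes, folded into one macro that
// maps a runtime width onto the matching template instantiation.
#define DISPATCH_KERNEL_WIDTH(KERNEL, GRID, BLOCK, KW, ...)      \
  switch (KW) {                                                  \
    case 3:  KERNEL<3><<<(GRID), (BLOCK)>>>(__VA_ARGS__); break; \
    case 5:  KERNEL<5><<<(GRID), (BLOCK)>>>(__VA_ARGS__); break; \
    case 7:  KERNEL<7><<<(GRID), (BLOCK)>>>(__VA_ARGS__); break; \
    default: KERNEL<0><<<(GRID), (BLOCK)>>>(__VA_ARGS__); break; \
  }

void launch_conv2d(const float* input, const float* weight, float* output,
                   int n, int kw, dim3 grid, dim3 block) {
  // Each launch site shrinks to a single macro invocation instead of
  // repeating the whole switch statement.
  DISPATCH_KERNEL_WIDTH(conv2d_valid, grid, block, kw,
                        input, weight, output, n);
}
```

The payoff of this pattern is that adding or removing a specialized size means editing one switch inside the macro rather than every launch site, which is also why the change is largely mechanical and hard to get wrong without a compile error.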