Add supporting code for GPU-based ops #60
See also the NAR plugin for Maven, as well as the SciJava native library loader, for general solutions that integrate native libraries with Java.
@ctrueden thanks for reminding me... @bnorthan I actually wanted to introduce you to the NAR project a little, for the purpose of integrating native code into ImageJ plugins. Have a look at https://github.com/imagej/minimal-ij1-plugin/tree/native for an example. And feel free to bombard me with questions!
@bobpepin wrote https://github.com/bobpepin/YacuDecu and it is licensed under the LGPL. I ran some tests on it a while back and it seems to work pretty well. It would be a good starting point for a CUDA decon op. @bobpepin wrote wrappers for MATLAB and Imaris and said he'd be happy to see it in ImageJ eventually.
@StephanPreibisch As discussed at the hackathon, this issue may be of interest to you as well!
I have started to write a simple infrastructure for calling native and CUDA code here: https://github.com/fiji/SPIM_Registration/tree/master/src/main/java/spim/process/cuda

Two example CUDA implementations, for separable and non-separable convolution, are there; both are very useful for deconvolution. I think it would be great to have some common infrastructure for calling this kind of code.
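A plain CPU reference implementation is useful when validating CUDA convolution kernels like the ones mentioned above: the GPU result can be compared against it on small inputs. The sketch below is illustrative only; the class and method names are hypothetical and not part of any existing API.

```java
// Minimal CPU reference for separable convolution: a 1D kernel applied along
// the rows of a 2D image (row-major layout), with clamp-to-edge boundary
// handling. A CUDA implementation can be checked against this baseline.
public class SeparableConvolutionReference {

    // Convolve each row of `image` (height x width) with `kernel`,
    // clamping coordinates at the image borders.
    public static float[] convolveRows(float[] image, int width, int height, float[] kernel) {
        int half = kernel.length / 2;
        float[] out = new float[image.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                float sum = 0;
                for (int k = 0; k < kernel.length; k++) {
                    int xx = Math.min(Math.max(x + k - half, 0), width - 1); // clamp
                    sum += image[y * width + xx] * kernel[k];
                }
                out[y * width + x] = sum;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        float[] img = { 0, 0, 4, 0, 0 };          // one row, five columns
        float[] box = { 1f / 3, 1f / 3, 1f / 3 }; // 3-tap box kernel
        System.out.println(java.util.Arrays.toString(convolveRows(img, 5, 1, box)));
    }
}
```

Because convolution is separable, applying the same pass along columns afterwards gives the full 2D result, which is exactly the structure a separable CUDA kernel exploits.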
Hi, Cheers,
@bobpepin is it publicly visible? Remember: unpublished work never happened, for all practical purposes. |
https://github.com/bobpepin/YacuDecu
Could the license be changed to BSD? Otherwise no problem, but then it will be eternally just an add-on to ImageJ... |
The LGPL was chosen to encourage improvements to the library to be incorporated back into the main codebase, which is also used by the C/MATLAB/Imaris interfaces. What about shipping the DLL/.so or source in a separate subdirectory, having the interface code be part of ImageJ under a BSD license, and contributing eventual changes to the CUDA code back to the main YacuDecu repository?
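Shipping the LGPL native code in its own subdirectory, as suggested above, means the BSD-licensed Java glue has to locate the right platform-specific binary at runtime. The following is a minimal sketch of that resolution step; the directory layout and the base name "yacudecu" are assumptions for illustration, not an existing loader API.

```java
import java.io.File;

// Sketch: resolve a native library shipped in a separate subdirectory,
// keeping the Java interface code independent of the LGPL native code.
public class NativeLibraryLocator {

    // Map a base library name to the platform's file-name convention.
    public static String platformFileName(String baseName, String osName) {
        String os = osName.toLowerCase();
        if (os.contains("win")) return baseName + ".dll";
        if (os.contains("mac")) return "lib" + baseName + ".dylib";
        return "lib" + baseName + ".so"; // Linux and other Unix
    }

    // Absolute path of the library inside the given subdirectory.
    // A real loader would then call System.load(path), falling back to
    // System.loadLibrary(baseName) if the file is absent.
    public static String resolve(String dir, String baseName) {
        String file = platformFileName(baseName, System.getProperty("os.name"));
        return new File(dir, file).getAbsolutePath();
    }

    public static void main(String[] args) {
        System.out.println(resolve("native", "yacudecu"));
    }
}
```

The JDK's `System.mapLibraryName` does a similar mapping for the current platform; the explicit version here just makes the convention visible and testable.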
Also, you might want to consider supporting OpenCL instead of, or in addition to, CUDA, since it supports NVIDIA, ATI, and Intel cards. The biggest problem there, last time I looked, was that the publicly available FFT implementation had some limits on the input size: 2048 pixels in each dimension, or something like that.
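Size limits like the one mentioned above are common in GPU FFT libraries: many only support lengths whose prime factors are small (clFFT, for instance, handles radices such as 2, 3, 5, and 7), and a standard workaround is to pad each dimension up to the next supported "smooth" size. This helper is an illustrative sketch, not part of any FFT library's API.

```java
// Compute the smallest FFT-friendly size >= n for a transform restricted to
// a given set of small prime radices, the usual padding strategy for
// GPU FFT libraries with size restrictions.
public class FftSizeHelper {

    // True if n factors completely into the given small primes.
    public static boolean isSmooth(int n, int[] primes) {
        for (int p : primes) while (n % p == 0) n /= p;
        return n == 1;
    }

    // Smallest size >= n whose factors are all in `primes`.
    public static int nextSmoothSize(int n, int[] primes) {
        while (!isSmooth(n, primes)) n++;
        return n;
    }

    public static void main(String[] args) {
        int[] radices = { 2, 3, 5, 7 };
        // Pad a 2049-pixel axis to the next size such an FFT can handle.
        System.out.println(nextSmoothSize(2049, radices));
    }
}
```

For deconvolution, the padded region would additionally be filled with a boundary extension (mirror or mean) rather than zeros, to avoid ringing at the edges.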
Just to let you all know, there are now quite some OpenCL-based ops (proudly presented by @frauzufall; big thanks to Debo!).

Based on:
Documentation can be found here:
Code examples can be found here:

Give them a try and let us know what you think! Cheers,
Awesome! Thanks @frauzufall for working on this. If we have time, I'd like to show you the next iteration of the SciJava Ops framework while I am visiting. Would it be feasible to name the ops so that they overload existing ops, rather than giving them new names? The idea would be to help people benefit from automatic performance improvements without needing to edit their scripts. |
Hey @ctrueden, that sounds like a great idea. However, before automatically overloading Ops with different implementations, we should dig a bit deeper and find out why some implementations deliver different results. I would also strongly vote for automatic tests ensuring that different implementations deliver similar results up to a given tolerance. This program suggests differences between Ops, CLIJ, and ImageJ-legacy of different orders of magnitude: Let's have a chat about it in Dresden :-) Cheers,
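The tolerance-based check proposed above is straightforward to automate: run both implementations on the same input, then compare element-wise. The sketch below shows the comparison step; the class and method names are made up for illustration and are not existing imagej-ops or CLIJ API.

```java
// Sketch of a tolerance-based regression check between two implementations
// (e.g. an imagej-ops result and a CLIJ result): report the maximum absolute
// element-wise difference and decide whether they agree.
public class ImplementationComparator {

    public static double maxAbsDifference(float[] a, float[] b) {
        if (a.length != b.length) throw new IllegalArgumentException("size mismatch");
        double max = 0;
        for (int i = 0; i < a.length; i++) {
            max = Math.max(max, Math.abs((double) a[i] - b[i]));
        }
        return max;
    }

    // True if the two results agree up to the given tolerance.
    public static boolean agreeWithin(float[] a, float[] b, double tolerance) {
        return maxAbsDifference(a, b) <= tolerance;
    }

    public static void main(String[] args) {
        float[] cpu = { 1.0f, 2.0f, 3.0f };
        float[] gpu = { 1.0f, 2.0000002f, 3.0f }; // tiny floating-point drift
        System.out.println(agreeWithin(cpu, gpu, 1e-5));
    }
}
```

A relative-difference variant (dividing by the value's magnitude) is often more appropriate for images with a large dynamic range; the absolute form is shown here only because it is the simplest to state.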
Hi Curtis,

It would be great if you could show us the next iteration of ops. Correct me if I am wrong, but it looks like these new Ops are typed on ClearCLBuffer and ClearCLImage.

As an aside, what is the difference between ClearCLBuffer and ClearCLImage?

There are a few scenarios that I think we need to consider if overloading existing ops.
Hey @bnorthan,

1.-3. If possible, I would like to prevent automatic back-and-forth conversion, because conversion takes time. GPU acceleration is only beneficial if long workflows are run on the GPU. That's why we initially thought automatic conversion shouldn't be enabled at all... Looking forward to discussing the details! :-)
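The trade-off described above (transfer cost vs. per-op speedup) can be made concrete with a back-of-the-envelope model: one push, n fast GPU ops, one pull, versus n slower CPU ops. All numbers below are hypothetical; this is an illustrative sketch, not a benchmark of CLIJ or imagej-ops.

```java
// Illustrative cost model: GPU acceleration only pays off once the push/pull
// transfer cost is amortized over a long enough workflow.
public class TransferCostModel {

    // Total time for n ops on the GPU: one push, n fast ops, one pull.
    public static double gpuTime(double transferMs, double opMs, int n) {
        return 2 * transferMs + n * opMs;
    }

    // Total time for n ops on the CPU: no transfers, slower ops.
    public static double cpuTime(double opMs, int n) {
        return (double) n * opMs;
    }

    // Smallest workflow length at which the GPU path becomes faster.
    public static int breakEvenOps(double transferMs, double gpuOpMs, double cpuOpMs) {
        if (gpuOpMs >= cpuOpMs) throw new IllegalArgumentException("GPU op must be faster per op");
        int n = 1;
        while (gpuTime(transferMs, gpuOpMs, n) >= cpuTime(cpuOpMs, n)) n++;
        return n;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: 50 ms per transfer, 1 ms/op GPU, 10 ms/op CPU.
        System.out.println(breakEvenOps(50, 1, 10));
    }
}
```

With the hypothetical numbers in `main`, a workflow needs roughly a dozen ops before the GPU path wins, which illustrates why silently converting back and forth on every single op would erase the benefit.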
At the beginning I started to match the CLIJ Ops with existing imagej-ops (here is the code), but there were differences, and it is quite some work to find the counterparts (at least for me), so we decided as a first step to write clearly marked CLIJ ops returning the same results as CLIJ does in other scenarios. I also wrote converters. You can try removing the CLIJ_push and CLIJ_pull op calls in the examples (jython, Java). It works in many cases, but sometimes fails to match the ClearCLBuffer to a RAI if the Op has additional input parameters. I stopped going too much into detail / fixing things because I don't want to waste time debugging something that is being rewritten anyway. But the CLIJ Ops are perfect for testing some core concepts of imagej-ops. Excited to hear about the next iteration!
We want to make implementing GPU-based ops as easy as possible. The glue code to execute GPU-based processing from Java is usually the same. The two main flavors to consider supporting are OpenCL and CUDA.
We can start by implementing a couple of GPU-based ops, and then factor out common code into a shared type hierarchy. Due to the added dependencies for working with OpenCL and/or CUDA, we will likely need to create a new `imagej-ops-gpu` project (and/or `imagej-ops-cuda` and/or `imagej-ops-opencl` projects) which extend `imagej-ops`.
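The "glue code is usually the same" observation suggests what the shared type hierarchy could look like: a small contract capturing the push/execute/pull pattern that both an OpenCL and a CUDA backend would implement. Below is a minimal sketch with a CPU-backed mock standing in for a real GPU backend; every name here is hypothetical, not existing imagej-ops API.

```java
// Sketch of a shared type hierarchy for GPU-based ops: the same interface
// could be implemented by OpenCL (e.g. via JOCL) and CUDA (e.g. via JCuda)
// backends. A CPU mock keeps the sketch self-contained and testable.
public class GpuOpSketch {

    // Handle to data living on the device.
    interface DeviceBuffer {
        float[] pull(); // copy data back to the host
    }

    // Common contract a CUDA or OpenCL backend would implement.
    interface GpuBackend {
        DeviceBuffer push(float[] hostData);             // host -> device
        DeviceBuffer multiply(DeviceBuffer in, float s); // one example op
    }

    // CPU stand-in: "device memory" is just a defensively copied host array.
    static class CpuMockBackend implements GpuBackend {
        public DeviceBuffer push(float[] hostData) {
            float[] copy = hostData.clone();
            return () -> copy.clone();
        }
        public DeviceBuffer multiply(DeviceBuffer in, float s) {
            float[] data = in.pull();
            for (int i = 0; i < data.length; i++) data[i] *= s;
            return () -> data.clone();
        }
    }

    public static void main(String[] args) {
        GpuBackend backend = new CpuMockBackend();
        DeviceBuffer b = backend.push(new float[] { 1, 2, 3 });
        float[] result = backend.multiply(b, 2f).pull();
        System.out.println(java.util.Arrays.toString(result));
    }
}
```

Keeping ops typed against `DeviceBuffer`-style handles rather than host arrays is also what lets a long workflow stay on the GPU between an explicit push and pull, as discussed earlier in the thread.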