
extend with Numba? #43

Open
raghavchhetri opened this issue Sep 9, 2020 · 2 comments

Comments

@raghavchhetri

raghavchhetri commented Sep 9, 2020

First of all, thank you for providing this wonderful toolbox. I only recently started exploring it and I like it quite a bit!

I was wondering whether you have considered using Numba, which uses LLVM to JIT-compile Python code to machine code. Since Numba also supports JIT compilation for GPUs, significant speed-ups might be possible. For instance, I've noticed that the Steps function takes a really long time at the moment (Forward too, but not as much as Steps). I'd love to hear the future roadmap of this project, and I look forward to being a regular user. Thanks again for providing this user-friendly toolbox in Python.
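
For context, a minimal sketch of what a Numba-accelerated kernel looks like (the intensity_sum function below is illustrative only, not part of LightPipes, and assumes Numba is installed):

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True, cache=True)
def intensity_sum(field):
    # total power |E|^2 over the grid; the loop is compiled to
    # machine code by LLVM on the first call and parallelised over rows
    total = 0.0
    for i in prange(field.shape[0]):
        for j in range(field.shape[1]):
            total += abs(field[i, j]) ** 2
    return total

field = np.ones((512, 512), dtype=np.complex128)
print(intensity_sum(field))  # the first call includes JIT compilation time
```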

@FredvanGoor
Member

Sounds interesting, but when I tried it I encountered a lot of errors, so it is not easy.
Before version 2.0 we wrote the package in C++, which should be the fastest. The pure-Python version turned out to be as fast as the C++ version thanks to NumPy, so can we achieve better performance using Numba?
You, or anybody, could try to modify the code using Numba and report the results to me.
Thanks for the idea anyway!

Fred.
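
For anyone who wants to try this, a sketch of the kind of comparison that could be reported (the lens-phase kernel below is illustrative, not actual LightPipes code, and assumes Numba is installed):

```python
import time
import numpy as np
from numba import njit

def lens_numpy(field, k, f, X, Y):
    # vectorised NumPy version: thin-lens phase factor applied to the field
    return field * np.exp(-1j * k * (X**2 + Y**2) / (2 * f))

@njit(cache=True)
def lens_numba(field, k, f, X, Y, out):
    # explicit loops, JIT-compiled; the output array is allocated by the caller
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            out[i, j] = field[i, j] * np.exp(-1j * k * (X[i, j]**2 + Y[i, j]**2) / (2 * f))

N = 1024
x = np.linspace(-5e-3, 5e-3, N)
X, Y = np.meshgrid(x, x)
field = np.ones((N, N), dtype=np.complex128)
k, f = 2 * np.pi / 633e-9, 0.1
out = np.empty_like(field)

lens_numba(field, k, f, X, Y, out)          # warm-up call triggers compilation

t0 = time.perf_counter()
lens_numpy(field, k, f, X, Y)
print("numpy:", time.perf_counter() - t0)

t0 = time.perf_counter()
lens_numba(field, k, f, X, Y, out)
print("numba:", time.perf_counter() - t0)
```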

@ldoyle
Contributor

ldoyle commented Sep 11, 2020

Indeed, I made some quick tests with Numba while porting the code from C++ to Python. However, for the few examples I tested it was a little complicated (e.g. in my tests I could not allocate new arrays inside an @jit function, so some code had to be refactored) and it gave little or no speedup. I might have been using it wrong, or in the wrong places where there is not much to be gained.
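
To illustrate the kind of refactoring meant above, a small sketch (apply_phase is a hypothetical helper, not LightPipes code): the output array is allocated outside the jitted function and passed in.

```python
import numpy as np
from numba import njit

@njit(cache=True)
def apply_phase(field, phase, out):
    # the jitted kernel only fills in a pre-allocated output array
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            out[i, j] = field[i, j] * np.exp(1j * phase[i, j])

field = np.ones((256, 256), dtype=np.complex128)
phase = np.random.uniform(0.0, 2.0 * np.pi, (256, 256))
out = np.empty_like(field)   # allocation happens outside the @njit function
apply_phase(field, phase, out)
```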

If you can make it work as a proof of principle with e.g. Forward, Forvard or Steps, that would definitely be cool.

In the long run it would be interesting to see if it can be done; however, I would suggest keeping the Numba dependency optional, since the only strict dependencies so far are NumPy and SciPy, both of which should be widely available on any Python platform (e.g. a Raspberry Pi). pyFFTW is also faster than numpy.fft, but it is kept optional, since supporting it requires hardly any code change.
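
A sketch of the optional-dependency approach suggested here (module and function names are illustrative): fall back to numpy.fft when pyFFTW is missing, and to a no-op decorator when Numba is missing, so NumPy and SciPy remain the only hard requirements.

```python
import numpy as np

try:
    # optional: drop-in replacement for numpy.fft with the same call signatures
    import pyfftw.interfaces.numpy_fft as _fft
except ImportError:
    import numpy.fft as _fft

try:
    from numba import njit          # optional accelerator
except ImportError:
    def njit(*args, **kwargs):
        # fallback: a decorator that returns the function unchanged,
        # so the pure NumPy code path still works without Numba
        if len(args) == 1 and callable(args[0]):
            return args[0]
        return lambda func: func

@njit(cache=True)
def total_power(field):
    return np.sum(np.abs(field) ** 2)

spectrum = _fft.fft2(np.ones((64, 64), dtype=np.complex128))
print(total_power(spectrum))
```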

Regarding Forward: as mentioned in the manual, it becomes impractically slow very quickly, but I'm pretty sure Steps could be optimized further. For most methods I compared the performance of the C++ version (up to LightPipes 1.2) and the Python/NumPy version; however, I may never have done so for Steps, and I do think it is a little slower than its C++ counterpart.
