generic_filter slow on large images #8916
Comments
Cross-posting from the scikit-image repo in case others stumble upon this issue. I actually wrote a blog post about exactly this issue a year ago: https://ilovesymposia.com/2017/03/12/scipys-new-lowlevelcallable-is-a-game-changer/ There is also a follow-up: https://ilovesymposia.com/2017/03/15/prettier-lowlevelcallables-with-numba-jit-and-decorators/ I hope you find these useful! The summary: calling a Python function per pixel is expensive, but by combining LowLevelCallable with Numba (or Cython) you can bypass that overhead and get very fast speeds with generic_filter.
Would you be interested in turning your example into, e.g., a SciPy tutorial page, @jni?
@ev-br I would, but I don't have the bandwidth until after SciPy 2018 at the earliest. If anyone else wants to pick it up, I would be grateful! All content on my blog is CC-BY.
Just dropping by to say you can use your own functions decorated with There is no need for |
I am using the generic_filter function to compute an image quality assessment metric, the image contrast, per the following:
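(The original snippet was not preserved in this copy of the thread. A plausible stand-in, assuming an RMS-style local contrast computed with a plain Python callback, which is the slow pattern the author describes; the function name and the choice of metric are assumptions:)

```python
import numpy as np
from scipy import ndimage as ndi

def local_rms_contrast(values):
    # Standard deviation of the window values, i.e. RMS contrast.
    # (Illustrative metric only; the original code was not captured.)
    return values.std()

image = np.random.random((256, 256))
# A pure-Python callback: generic_filter invokes it once per pixel,
# which is exactly what makes this slow on large images.
contrast = ndi.generic_filter(image, local_rms_contrast, size=5)
```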
While this works beautifully, and SciPy is awesome for being the only package that lets you attach an arbitrary function to a filter, it is incredibly slow on images larger than 2k x 2k.
As I can't find any documentation about speeding this up, I'm filing this issue in the hope that there is some possible means of a performance increase in a future release.