improved single precision support in skimage.transform #5372
Conversation
make sure float16 is promoted to float32 when necessary
make sure floating-point integral images are computed in at least double precision
Thank you @grlee77.
@grlee77, the type promotion to double in `integral_image`...
It currently only preserves float dtypes. Apparently `np.cumsum` already uses int64 for, e.g., uint8 inputs; otherwise you would be pretty much guaranteed to get integer overflows! I am just arguing that, for accuracy's sake, we should be promoting float inputs to float64. We could still restore the final output to the input float type, though, as a compromise.
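Both behaviours described above are easy to check with plain NumPy (a small illustration, not code from the PR): `np.cumsum` widens small integer inputs to the platform integer, while sequential float32 accumulation loses accuracy that float64 retains:

```python
import numpy as np

# Integer inputs: cumsum accumulates in a wider integer type, so a
# uint8 input does not wrap around at 255.
img = np.full(1000, 255, dtype=np.uint8)
acc = np.cumsum(img)
assert acc.dtype.itemsize >= 4   # widened well beyond uint8
assert acc[-1] == 255 * 1000     # no integer overflow

# Float inputs: cumsum keeps the input precision.  A sequential
# float32 sum of ones stalls once it reaches 2**24, because adding
# 1.0 can no longer change the running value.
n = 2**24 + 10
ones = np.ones(n, dtype=np.float32)
print(np.cumsum(ones)[-1])                      # 16777216.0 (stalled, wrong)
print(np.cumsum(ones.astype(np.float64))[-1])   # 16777226.0 (exact)
```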
Another simple example: suppose the image is somewhat large but has float16 dtype. In that case it is also likely to encounter floating-point overflow, since float16 cannot represent values above 65504. With the current code, `integral_image(np.ones((2048, 2048), dtype=np.float16))` produces `inf` values.
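The float16 overflow is easy to reproduce with plain NumPy. In this sketch, the `integral` helper is a simplified stand-in for the pair of cumulative sums that `skimage.transform.integral_image` computes; it is not the library's actual code:

```python
import numpy as np

def integral(im):
    # Two cumulative sums, one per axis, as in an integral image.
    return np.cumsum(np.cumsum(im, axis=0), axis=1)

img = np.ones((2048, 2048), dtype=np.float16)

# Accumulating in float16: the running sums exceed the float16
# maximum (65504) long before covering all 2048 * 2048 pixels.
naive = integral(img)
assert np.isinf(naive[-1, -1])

# Promoting to float64 first, as this PR does for floating-point
# inputs, keeps the result finite and exact.
fixed = integral(img.astype(np.float64))
assert fixed[-1, -1] == 2048 * 2048
```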
I understand the overflow problem; we can then consider this a bug fix. Maybe we should document this behaviour...
I reverted the change to the default dtype in ...
Thanks, @grlee77! I wasn't sure why float16 wasn't systematically included among the floating-point types in the parametrized tests, so please let me know, for my personal understanding; but this PR is good to go!
Merging 🎉 Thank you @grlee77
Description
The warp, pyramid, and radon transform functions already supported float32; this PR expands the tests a bit and makes sure float16 is promoted to float32 rather than failing.
For `integral_image`, I changed it to promote any floating-point type to double precision for better accuracy with `numpy.cumsum`. (As far as I know, cumsum does not use the higher-accuracy pairwise summation that some NumPy sums use.) We had seen issues with the limited precision of integral images in the past with non-local means, and had changed the Cython code internal to that function to always use doubles for the integral image computation.

Checklist
- `./doc/examples` (new features only)
- `./benchmarks`, if your changes aren't covered by an existing benchmark
For reviewers
- `__init__.py`
- `doc/release/release_dev.rst`