SLIC segmentation results differ depending on input image datatype #5512
I think it is not so much the datatype, but the scaling that matters here. Internally, `img_as_float` rescales integer inputs but leaves floating-point ranges untouched. Using the following gives the same segmentation as the integer input:

```python
img_f32 = img.astype(np.float32)
img_f32 /= img_f32.max()
labels_1 = segmentation.slic(img_f32)
```

For this algorithm I think we should always just rescale internally to the [0.0, 1.0] range, so the default `compactness` is a reasonable starting point regardless of the input image scale.
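As a hedged illustration of the asymmetry described above, here is a plain-NumPy sketch (it deliberately avoids importing scikit-image; the exact conversion rules of `img_as_float` live in `skimage.util`):

```python
import numpy as np

# img_as_float maps uint8 input into [0, 1] by dividing by 255 ...
img_u8 = np.array([0, 128, 255], dtype=np.uint8)
scaled_u8 = img_u8 / 255.0  # range of the img_as_float(uint8) result

# ... but a float input with the same values passes through unnormalized,
# so SLIC sees a [0, 255] range and a fixed compactness behaves differently.
img_f32 = np.array([0.0, 128.0, 255.0], dtype=np.float32)
```

Dividing the float input by its maximum, as in the workaround above, removes this discrepancy.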
Closes gh-5512. Currently `img_as_float` rescales the range of integer inputs, but leaves floating-point ranges unnormalized. The `compactness` parameter needs to be tuned based on image scale, so it is best to just always rescale to the [0, 1] range internally, so that the default is a reasonable starting point regardless of the input image scale.
* Make SLIC superpixel output robust to image scale (closes gh-5512): always rescale to the [0, 1] range internally, so the default `compactness` works regardless of input scale
* TST: add SLIC test case verifying the same segmentation across input amplitudes/dtypes
* Use `copy=True` so internal rescaling cannot modify the input
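A minimal sketch of the internal normalization the PR describes (this is a hypothetical helper, not the actual scikit-image code):

```python
import numpy as np

def _rescale_to_unit_range(image):
    """Rescale an image to [0, 1] so a fixed default `compactness`
    behaves consistently regardless of input amplitude (sketch only)."""
    image = np.asarray(image, dtype=np.float64)
    imin, imax = image.min(), image.max()
    if imax > imin:
        # Arithmetic below allocates a new array, mirroring the PR's
        # copy=True intent: the caller's input is never modified.
        image = (image - imin) / (imax - imin)
    return image

scaled = _rescale_to_unit_range(np.array([0.0, 50.0, 100.0]))
```

With this in place, a float image in [0, 255] and the same image in [0, 1] would produce identical distance computations inside the algorithm.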
I just realized that segmentation output can vary a lot depending on the datatype of the input array.
See the reproducible example below.
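The original reproducer is not included in this excerpt. As a hedged illustration of *why* the scale (and hence the dtype conversion) changes the result: SLIC combines a color-distance term and a spatial-distance term weighted by `compactness`, and the color term grows linearly with image amplitude while the spatial term does not. A pure-NumPy sketch (not the actual SLIC implementation):

```python
import numpy as np

# Color distance between two pixels, once in [0, 1] and once in [0, 255].
color_a = np.array([0.2, 0.4, 0.6])
color_b = np.array([0.3, 0.5, 0.7])
dist_unit = np.linalg.norm(color_a - color_b)
dist_255 = np.linalg.norm(color_a * 255 - color_b * 255)

# The color term is ~255x larger for the unnormalized image, so the same
# compactness value weights spatial proximity far less, changing the labels.
ratio = dist_255 / dist_unit
```

This is why rescaling the float input to [0, 1] (as in the comment above) restores the uint8 behavior.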
Version information