
Converting this 16-bit grayscale image to 'L' mode destroys it #3011

Closed
ExplodingCabbage opened this issue Feb 20, 2018 · 9 comments
ExplodingCabbage commented Feb 20, 2018

Here is a 16-bit grayscale image:

[image: test.png, the 16-bit grayscale image]

It clearly contains a gradient of grays.

If I open it with Pillow, convert it to 8-bit grayscale, and save it, like so...

>>> from PIL import Image
>>> test_img = Image.open('test.png')
>>> test_img.mode
'I'
>>> test_img.convert('L').save('out.png')

... then I get this, which is mostly completely white:

[image: out.png, the result, almost entirely white]

This conversion should work; according to http://pillow.readthedocs.io/en/4.2.x/handbook/tutorial.html#converting-between-modes

The library supports transformations between each supported mode and the “L” and “RGB” modes.

and according to http://pillow.readthedocs.io/en/latest/handbook/concepts.html#modes, I is a supported mode.

Notably, I don't see this same problem if I start by loading an 8-bit RGB image from a JPG and then do .convert('I').convert('L'), so it's not simply the case that I->L conversion is broken in general. I'm not sure what specifically leads to the breakage in this case.
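A quick way to see this asymmetry (a minimal sketch, assuming Pillow is installed): promoting an 8-bit image to 'I' leaves its values in 0-255, so the trip back down to 'L' is lossless.

```python
from PIL import Image

# An 8-bit RGB pixel, promoted to 32-bit 'I' mode and back down to 'L'.
im8 = Image.new('RGB', (1, 1), (100, 150, 200))
round_trip = im8.convert('I').convert('L')

# The detour through 'I' changes nothing, because the values never
# left the 0-255 range on the way up.
print(round_trip.getpixel((0, 0)) == im8.convert('L').getpixel((0, 0)))  # True
```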

@ExplodingCabbage ExplodingCabbage changed the title Converting this 16-bit image to grayscale destroys it Converting this 16-bit grayscale image to 'L' mode destroys it Feb 20, 2018
wiredfool (Member) commented:

There's a longstanding behavioral issue with Pillow where conversions don't intelligently use the range of the target mode.

So, in this case, you're taking an image with values from 0-65k and converting it to 0-255, with clipping. Most of the values are > 255, so they're all white. When you start with an 8 bit image, you convert 0-255 to 0-65k, but since there's no promotion, it's still just a 0-255 image. Converting back is then not a problem.
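The clipping is easy to demonstrate in isolation (assuming Pillow is installed): any 'I'-mode value above 255 comes out as white after converting to 'L', while values already inside the 8-bit range pass through untouched.

```python
from PIL import Image

# A single 'I'-mode (32-bit integer) pixel well above the 8-bit range
# is clipped to 255 (white) by the 'L' conversion.
high = Image.new('I', (1, 1), 1000)
print(high.convert('L').getpixel((0, 0)))  # 255

# A value already within 0-255 survives unchanged.
low = Image.new('I', (1, 1), 200)
print(low.convert('L').getpixel((0, 0)))  # 200
```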

@aclark4life aclark4life added the Bug Any unexpected behavior, until confirmed feature. label Apr 1, 2018
@aclark4life aclark4life added this to the Future milestone Apr 1, 2018
@wiredfool wiredfool added Enhancement and removed Bug Any unexpected behavior, until confirmed feature. labels Apr 2, 2018
GuillaumeErhard added a commit to GuillaumeErhard/keras-preprocessing that referenced this issue Jun 28, 2018
Avoid needless downsampling of 16-bit gray images to 8 bits; the current clipping from Pillow is wrong, see:
python-pillow/Pillow#3011
radarhere (Member) commented:

This looks related to #3159

radarhere (Member) commented:

I've created PR #3838 to resolve this.

@aclark4life aclark4life added this to Backlog in Pillow May 11, 2019
@aclark4life aclark4life moved this from Backlog to In progress in Pillow May 11, 2019
@radarhere radarhere moved this from In progress to Review/QA in Pillow May 11, 2019
radarhere (Member) commented:

Resolved by #3838

Pillow automation moved this from Review/QA to Closed Jun 5, 2019
radarhere (Member) commented Jun 11, 2019

It turns out that this situation is more complicated. See #3838 (comment)

@radarhere radarhere reopened this Jun 11, 2019
Pillow automation moved this from Closed to New Issues Jun 11, 2019
@radarhere radarhere moved this from New Issues to In progress in Pillow Jun 11, 2019
jamesjjcondon commented Jul 17, 2019

FYI, my workaround for my use case (large grayscale images):

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

x = np.linspace(0, 65535, 1000, dtype=np.uint16)
image = np.tile(x, (1000, 1)).T
plt.imshow(image)
plt.show()

im32 = image.astype(np.int32)
pil = Image.fromarray(im32, mode='I')

Shouldn't lose precision.
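A small check of that claim (a sketch, assuming Pillow and NumPy are installed): going uint16 -> int32 -> 'I' mode and back keeps every 16-bit value intact, unlike an immediate .convert('L').

```python
import numpy as np
from PIL import Image

# Edge values of the 16-bit range, including ones an 'L' conversion
# would clip.
x = np.array([0, 255, 256, 65535], dtype=np.uint16).reshape(2, 2)

# Widen to int32 so the data fits Pillow's 'I' mode exactly.
pil = Image.fromarray(x.astype(np.int32), mode='I')

# Reading the pixels back recovers the original values bit-for-bit.
print((np.array(pil) == x).all())  # True
```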

machin3io commented Nov 9, 2019

For I to L conversion, this works for me:

import numpy as np
from PIL import Image

def convert_I_to_L(img):
    array = np.uint8(np.array(img) / 256)
    return Image.fromarray(array)

smason (Contributor) commented Apr 6, 2021

I wanted a non-numpy based solution and came up with:

from PIL import Image, ImageMath

def convert_I_to_L(im: Image.Image) -> Image.Image:
    return ImageMath.eval('im >> 8', im=im.convert('I')).convert('L')

Comparing this to machin3io's numpy code, ImageMath is faster for smaller images while numpy wins for larger ones:

           conversion time in ms
dimensions   ImageMath  Numpy
 640x 480          1.0    1.3
1280x 960          4.1    3.5
1920x1440          9.3    8.4
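The two workarounds above compute the same 8-bit value for every 16-bit input, since np.uint8(v / 256) truncates to the same result as v >> 8. A quick cross-check (assuming Pillow and NumPy are installed; the shift is written with getdata()/putdata() here so it does not depend on any particular ImageMath API):

```python
import numpy as np
from PIL import Image

# A 16-bit gradient wrapped in an 'I'-mode image.
data = np.linspace(0, 65535, 256, dtype=np.uint16).reshape(16, 16)
im = Image.fromarray(data.astype(np.int32), mode='I')

# machin3io's route: divide by 256 and truncate to uint8.
via_numpy = Image.fromarray(np.uint8(np.array(im) / 256))

# smason's route, spelled out as a plain right shift per pixel.
via_shift = Image.new('L', im.size)
via_shift.putdata([v >> 8 for v in im.getdata()])

# Both mappings agree on every pixel.
print((np.array(via_numpy) == np.array(via_shift)).all())  # True
```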

radarhere (Member) commented:

Closing as part of #3159

Pillow automation moved this from In progress to Closed Dec 30, 2022
nmanovic pushed a commit to cvat-ai/cvat that referenced this issue Jan 11, 2023
Pull Request regarding Issue #2987 

PIL.Image conversion from I;16 to L or RGB is currently unsuccessful.
See the corresponding issue in the Pillow GitHub (opened in 2018, so no
changes are to be expected):
python-pillow/Pillow#3011

The proposed changes fix this issue at least for the 'I;16' mode and
offer a possible solution for other modes (e.g. I;16B/L/N).

This results in a correct calculation of the preview thumbnail and of
the actual image that the annotation will be performed on.

We have used this solution on our own dataset and created annotations
accordingly.
mikhail-treskin pushed a commit to retailnext/cvat that referenced this issue Jul 1, 2023