Odd colour display of images when scale>1.0 #38

Open
jlstevens opened this issue Sep 17, 2014 · 1 comment

@jlstevens (Contributor)

One issue with the recent addition of colour display for Imagen pattern generators is that odd colour artifacts sometimes appear. I have traced this to scale values greater than 1.0. For instance:

FileImage(aspect_ratio=1.0, scale=2.0,
          filename='./images/mcgill/foliage_a_combined/01.png')[:]

The issue is that the RGBA SheetViews expect values between 0.0 and 1.0. When the scale is increased, some channel values exceed 1.0 and end up being clipped back into this range.
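
For concreteness, here is a minimal numpy sketch (hypothetical pixel values, not Imagen's actual API) of how a scale above 1.0 pushes channels out of the expected range:

import numpy as np

# A single orange-ish pixel with channels in [0, 1], as the RGBA
# SheetViews expect (values chosen purely for illustration).
rgb = np.array([0.8, 0.4, 0.1])

scaled = 2.0 * rgb            # scale=2.0 applied to every channel
print(scaled)                 # [1.6 0.8 0.2] -- the red channel is now > 1.0
print((scaled > 1.0).any())   # True: something downstream must clip it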

Although the fix is easy for GenericImage it isn't so clear what to do with other classes that use multiple channels. What is the best thing to do for ComposeChannels and composite types where the maximum scale value is a function of multiple pattern generators/inputs?

@jbednar (Contributor) commented Sep 19, 2014

This is a tricky issue. In terms of our simulations, I don't think that having RGB or LMS values above 1.0 is actually a problem; I don't think there's any clipping or overflow that would happen inside our actual models (I hope!).

Right now, the values must eventually be clipped by matplotlib for display, and I'm guessing that it clips in RGB, not HSV, leading to the observed artifacts. I.e. if we have an orange color like (R=255,G=128,B=0) and scale it up by 2.0, we get (R=510,G=256,B=0), which turns into bright yellow when clipped per channel to one byte (R=255,G=255,B=0).
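
A minimal sketch of that per-channel clipping in numpy, working in the [0, 1] float range rather than bytes:

import numpy as np

orange = np.array([1.0, 0.5, 0.0])    # (R=255, G=128, B=0) in unit range
scaled = 2.0 * orange                  # (2.0, 1.0, 0.0), i.e. (510, 256, 0)
clipped = np.clip(scaled, 0.0, 1.0)    # (1.0, 1.0, 0.0) -- bright yellow
# The R:G ratio shifts from 2:1 to 1:1, which is the observed hue change.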

If it's just about the display, then I think we have several options:

  1. We could just leave it, because it tells us very clearly when clipping is happening, which we might want to know.
  2. We could clip any out-of-range pixels in HSV space, which just means normalizing each such pixel by its highest channel value: (R=510,G=256,B=0)*(255/510) = (R=255,G=128,B=0), i.e. the original orange. Areas with clipping should then look washed out (like an overexposed photo), but otherwise normal (see the sketch after this list).
  3. We could normalize the entire image for display purposes -- if any pixel is out of range, renormalize so that the brightest pixel becomes 255 (1.0 in our native format). Pictures will then look normal, but the scale will be misleading, because the image will appear dimmer than it really is (compared to other images that don't exceed the threshold).
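
A hypothetical sketch (not part of Imagen) of what options 2 and 3 might look like for a float RGB array whose channels are nominally in [0, 1]:

import numpy as np

def normalize_per_pixel(img):
    # Option 2: hue-preserving clip. Each out-of-range pixel is divided
    # by its own largest channel, so its R:G:B ratios (hue) are kept and
    # only its brightness is pulled back to the maximum.
    peak = np.maximum(img.max(axis=-1, keepdims=True), 1.0)
    return img / peak

def normalize_whole_image(img):
    # Option 3: global renormalization. If anything exceeds 1.0, rescale
    # the whole image so the brightest value becomes exactly 1.0.
    peak = img.max()
    return img / peak if peak > 1.0 else img

pixel = np.array([[[2.0, 1.0, 0.0]]])   # the scaled orange from above
print(normalize_per_pixel(pixel))        # [[[1.  0.5 0. ]]] -- orange, hue kept
print(normalize_whole_image(pixel))      # same here, but in a real image this
                                         # would also dim every in-range pixel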

If all this is correct, then I probably favor either option 1 or 2, with 2 probably better (in the sense of not alarming people when there isn't really a problem) but 1 being much less work.

@jbednar added this to the Wishlist milestone on Oct 12, 2020