Choose which colorspace to use for proximity #14
This is a difficult issue, one that goes right to the limits of my knowledge. This is what I know about colour finding when dithering -- we're not supposed to think about human perception:
However, as we can see in this case, this method can fail at the human level and look very different. In past experiments for the dither library I've tried other colour spaces, but it didn't really work (the end result wasn't very good), and I was also told it was the Wrong Way. For my library, my focus was on what is mathematically correct, rather than playing with math I didn't understand until it looked good. But the result is not always perfect, as we see here. I still don't know whether this is because my implementation is incorrect, or because the industry-standard way of doing things wasn't designed with colour in mind (only grayscale). I've been meaning to ask a StackExchange question about it.

This issue reminds me of makew0rld/dither#9, but changing the palette order doesn't seem to change anything in this case. Let me know if you find that it does; these issues may still be related.

So unfortunately I don't have a satisfying answer right now. This kind of issue has been on my mind for a while though, and I plan to figure it out. I will update this thread once I do.
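To make the discussion concrete, here is a minimal sketch of the "mathematically correct" approach described above: linearize the sRGB values, then pick the palette colour with the smallest Euclidean distance in linear RGB. The function names are illustrative, not didder's actual API, and this omits the error-diffusion step entirely.

```python
# Sketch only: nearest-colour search in linear RGB, as the comment describes.
# Not didder's actual code; names are hypothetical.

def srgb_to_linear(c):
    """Convert one sRGB channel (0-1) to linear light (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def nearest_palette_color(pixel, palette):
    """pixel and palette entries are (r, g, b) tuples in 0-255 sRGB."""
    def dist(a, b):
        return sum((srgb_to_linear(x / 255) - srgb_to_linear(y / 255)) ** 2
                   for x, y in zip(a, b))
    return min(palette, key=lambda p: dist(pixel, p))

# The palette from the issue's didder command.
palette = [(0x24, 0x20, 0x22), (0xf5, 0xed, 0xe6),
           (0xeb, 0x4e, 0x48), (0xb1, 0x86, 0x61)]

# A dark shadow red snaps to the near-black palette entry, not the red one,
# illustrating the "lost red tones" complaint.
print(nearest_palette_color((120, 40, 40), palette))  # → (36, 32, 34)
```

Because linearization compresses dark values, a mid-dark red ends up much closer to near-black than to the bright palette red, which matches the behaviour reported in this issue.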
i had a poke at a different dithering tool to see if it produced different results: ditherit.com, with the same source image. To save people time reproducing it, you can import the color palette below; it's the same as the one used in the example above.
The result was this:

That said, it seems to have created a new issue at the same time: to my eyes, the coloring on the faces now contains too much brown. This presents a strange problem, since it's not clear which of these results, if either, is "correct". But these are my findings.
Thanks for this. ditherit.com uses luminance-weighted sRGB, using this code. I was actually already discussing this technique elsewhere today; maybe switching to it is the way to go. I will have to research this and consult others to see if it makes sense from a theory standpoint, not just "looks good on this image". But it seems promising! It's possible that with this change my tool will get the best of both worlds -- the faces would be red and so would the region above the demon's head -- because ditherit.com uses a crude linearization (squaring) while my code uses an accurate one. I would like to try this experiment (my library with luminance-weighted RGB) soon, likely within two weeks.
See this issue for why this was done: makew0rld/didder#14
Okay, I tried this out and the result is promising! Let me know what you think.

I also tried this out with some test images and got improved results. For example, with the peppers image, comparing the old output to the new output, the size of the highlights and shadows is much more accurate. This example may be contrived since the palette is pure red, green, and black, but I still think this is another point in the weighted method's favour.

I will wait to merge this into the library. I have submitted a Computer Graphics SE question that I hope will allow a "higher power" such as an academic to confirm that this is the right way to go. If no one responds for a while, I may just end up merging this in and making new releases.

If anyone has other test images they'd like to try, please let me know if the results are good or bad! You can compile this altered version of the CLI by building off the new branch.
I'm pleased to report that this issue is now fixed! The code has been merged and a new release of didder has been made. I hope this fix addresses your concerns; let me know if you have anything further to discuss.
i think it looks better for sure!! great work :) out of interest, do you weight all channels equally or use channel-specific weightings?
Thanks! Channels are weighted according to how humans perceive luminance. So green has the most weight, as humans are most sensitive to green, and blue has the least. The specific values are taken from Rec. 709 / sRGB: 0.2126 red, 0.7152 green, and 0.0722 blue.
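The weighting described above can be sketched as a channel-weighted squared distance. This is only an illustration of the idea with the stated Rec. 709 coefficients, not didder's actual implementation, and it glosses over whether the weights are applied to linear or gamma-encoded values.

```python
# Rec. 709 / sRGB luma coefficients for red, green, blue.
WEIGHTS = (0.2126, 0.7152, 0.0722)

def weighted_dist(a, b):
    """Squared distance between two (r, g, b) tuples, channel-weighted."""
    return sum(w * (x - y) ** 2 for w, x, y in zip(WEIGHTS, a, b))

# The same 100-unit difference counts ~10x more on green than on blue:
print(weighted_dist((0, 100, 0), (0, 0, 0)))  # 0.7152 * 100**2 = 7152.0
print(weighted_dist((0, 0, 100), (0, 0, 0)))  # 0.0722 * 100**2 = 722.0
```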
neat :)
Sometimes the dithering seems to not get the gist of a color region. For example, in the following images, look at the shadowy region around the demon's head.
Here's the original image:
And here's the output after this command was run:
didder --palette "#242022 #f5ede6 #eb4e48 #b18661" -i krampus.jpg -o test.png -c size edm --serpentine FloydSteinberg
To my human eyes, the shadow region behind and above the demon's head seems like it would be described with a mix of red and black. However, probably due to it being a mid-brightness color, it is dithered as a mixture of brown and black, meaning it loses the red tones.
My hunch is that this effect is due to the colorspace used. Perhaps if the colorspace were treated as HSV rather than RGB, the colors would be considered closer to red than brown.
Additionally, it might help to use a form of gamma-correction to find the difference between colors, to better represent the way that they are perceived to human eyes.
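The HSV suggestion above can be illustrated with Python's standard `colorsys` module: in HSV, a dark, somewhat desaturated red keeps a hue near 0°, so a hue-aware comparison could still consider it closer to the palette red than to the brown. The shadow colour below is a hypothetical sample, not an exact pixel from the image.

```python
import colorsys

def hue_degrees(r, g, b):
    """Hue of a 0-255 RGB colour, in degrees (0-360)."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360

# Hypothetical dark shadow red vs. the two relevant palette entries:
print(round(hue_degrees(0x6b, 0x2b, 0x28)))  # shadow sample: hue ~3
print(round(hue_degrees(0xeb, 0x4e, 0x48)))  # palette red #eb4e48: hue ~2
print(round(hue_degrees(0xb1, 0x86, 0x61)))  # palette brown #b18661: hue ~28
```

By hue alone the shadow sits almost on top of the palette red and well away from the brown, which is the intuition behind the suggestion; whether a full HSV distance behaves well overall is a separate question, since hue becomes unstable at low saturation.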