
Scaling should not be ignored #7

Open
xi opened this issue Jul 29, 2022 · 14 comments

@xi (Owner) commented Jul 29, 2022

@Myndex raised some points in other issues.

From #3 (comment):

you are not calculating APCA the way APCA is intended to be used. You wave off the scaling which you consider unimportant, but which is key for perceptual uniformity. You can not just disregard aspects of APCA design because you don't like them.

From w3c/silver#651 (comment):

First, he makes claims that WCAG 2 and APCA are "not that different", but then, instead of showing that in a way that would be clear (he can't, as it is not true), he makes a gross modification to crudely and incompletely reverse engineer the APCA contrast curves.

@xi (Owner, Author) commented Jul 29, 2022

Short answer: I do think that your criticism is unfounded and that both scaling and perceptual uniformity are not relevant for efficacy. I am not completely sure that scaling does not lead to artifacts in some of my other results, though.

I am happy to discuss this further because I think this could lead to a more general framework for comparing contrast formulas. So here is the long answer:

Goals

A good contrast formula should have the following properties:

  • efficacy - It allows one to decide whether a given color combination has sufficient contrast. This is usually achieved by pairing the formula with a threshold value so that any contrast above that threshold is deemed sufficient. There may also be multiple thresholds for different levels of conformance.
  • ergonomics - The formula is easy to use. For example, the threshold values should be easy to remember.
  • perceptual uniformity - If the result of the formula is twice as high, the perceived contrast is also twice as high.

These properties are sorted by priority: Efficacy is more important than everything else. Ergonomics are also relevant. Perceptual uniformity is a nice feature to have in general, but it is not really relevant in the context of WCAG (as long as we have efficacy).

Types of analysis

In order to compare two different contrast algorithms, I did two different kinds of analysis:

Equivalence

Two contrast formulas f() and g() are equivalent if they lead to the same results.

From the statistical perspective, they are equivalent if they put each color pair into the same threshold category. The proportion of color pairs that land in the same threshold category can serve as a measure of similarity.

From the analytical perspective, they are equivalent if, for every two color pairs (a, b) and (c, d):

f(a, b) < f(c, d) => g(a, b) < g(c, d)

In other words, they are equivalent if there is a strictly monotonic map between them. We call such a map a "scaling".

For example, Weber contrast and WCAG 2.x contrast are equivalent because (fg - bg) / (bg + 0.05) = (fg + 0.05) / (bg + 0.05) - 1 and f(x) = x - 1 is monotonic.
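
To make this concrete, here is a minimal sketch (mine, not part of this repository) that checks the identity numerically:

    // The WCAG 2.x ratio is the Weber contrast plus 1, i.e. the two
    // formulas are related by the strictly monotonic map f(x) = x + 1.
    const weber = (fg, bg) => (fg - bg) / (bg + 0.05);
    const wcag2 = (fg, bg) => (fg + 0.05) / (bg + 0.05);

    for (let i = 0; i < 100000; i++) {
      const fg = Math.random(), bg = Math.random(); // relative luminances in [0, 1]
      if (Math.abs(wcag2(fg, bg) - (weber(fg, bg) + 1)) > 1e-9) {
        throw new Error("not related by f(x) = x + 1");
      }
    }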

Also note the following corollary:

f(a, b) = f(c, d) => g(a, b) = g(c, d)

Scaling preserves efficacy, but has an impact on ergonomics and perceptual uniformity. So it can be used to make an existing formula more ergonomic and more perceptually uniform.

Normalization

So if scaling does not impact efficacy, we can scale all formulas to a normalized form so we can better compare them. The trouble is: there are a lot of monotonic functions (they do not have to be smooth!) so it is not obvious how to define a single normalized form.

The steps I used were:

  • Convert the contrast formula to a ratio (e.g. by using exp() for differences)
  • Use a power to normalize the maximum value to 21

As you can see, I used WCAG 2.x as a template and scaled APCA to its range. It would also have been possible to do it the other way around.
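
A minimal sketch of these two steps (function and variable names are mine, for illustration): a difference-style score is mapped to a ratio with exp(), then raised to the power that sends the formula's maximum to 21, the maximum of the WCAG 2.x ratio.

    // Assumes maxScore is the largest value the difference-style
    // formula can produce (e.g. for black on white).
    function normalizeTo21(score, maxScore) {
      const p = Math.log(21) / maxScore; // power chosen so the maximum maps to 21
      return Math.exp(score * p);        // === Math.exp(score) ** p
    }

Both steps are strictly monotonic, so by the argument above the normalized formula is equivalent to the original.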

These scaling steps have the benefit that they can be pushed into sRGBtoY(), so you can do further comparison by looking at single colors instead of color pairs.

This normalization allowed me to compare APCA to WCAG 2.x analytically. This way I found the modified WCAG formula with a flare of 0.4. Statistical analysis confirmed that it is much more similar (but still far from equivalent) to APCA.
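
(For reference, the modified formula in question, as quoted later in this thread, is Math.log((Ybg + 0.4) / (Yfg + 0.4)) * 80.)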

But of course there are many other monotonic maps that would map the formula to the same range. I am not yet sure what impact a different scaling would have.

Summary

I am convinced that my reasoning about equivalence is correct. Therefore I do not think that I made a "gross modification" to APCA. (For completeness: the scaling I used was not strictly monotonic for extremely low contrasts.) Perceptual uniformity may have been lost, but it is not relevant for efficacy.

I am less sure about the normalization. If anyone knows any research on this, I would be very interested!

@Myndex commented Jul 29, 2022

Short answer: I do think that your criticism is unfounded and that both scaling and perceptual uniformity are not relevant for efficacy. I am not completely sure that scaling does not lead to artifacts in some of my other results, though.

That is a completely false and ultimately naive point of view, and it demonstrates a lack of understanding of the key design concepts. Again, this is not your field of knowledge by your own admission, so your continued assertions of uninformed opinion are nothing short of infuriating.

Again: in order to facilitate automated contrast adjustment technologies, it is REQUIRED to have a perceptually uniform model. You don't have to be a math genius to see why, and this has been demonstrated ad nauseam.

These properties are sorted by priority: Efficacy is more important than everything else. Ergonomics are also relevant. Perceptual uniformity is a nice feature to have in general, but it is not really relevant in the context of WCAG (as long as we have efficacy).

These statements are false, or at best, uninformed opinion, and further demonstrate your lack of understanding of the subject matter here.

Perceptual uniformity is extremely relevant because without it, the tool DOES NOT have efficacy. Get it?

In order to compare two different contrast algorithms, I did two different kinds of analysis: a statistical analysis and an analytical analysis.

But you were not using the math as developed for these; you created your own derivations, you discarded HALF of the APCA method outright, and then you created a crudely reverse-engineered version for the majority of your faux analysis that served not to illuminate but only to obfuscate and mislead.

This is not an "analysis."

The steps I used were:
…I used WCAG 2.x as a template and scaled APCA to its range. It would also have been possible to do it the other way around…

This is not a valid approach. And you are comparing curves of completely different types that model completely different phenomena.

These scaling steps have the benefit that they can be pushed into sRGBtoY(), so you can do further comparison by looking at single colors instead of color pairs.

MASSIVE error in your approach here. MASSIVE. This demonstrates a complete lack of understanding of the relevant concepts of local adaptation effects.

In fact, the larger APCA models use three, four, and even five color inputs; the ONLY reason that the public-facing version is a mere pair was to simplify for the guidelines. The APCA model is much more detailed than that, but your analysis fails because of your attempts to isolate in inappropriate ways.

This normalization allowed me to compare APCA to WCAG 2.x analytically. This way I found the modified WCAG formula with a flare of 0.4. Statistical analysis confirmed that it is much more similar (but still far from equivalent) to APCA.

Well, I am glad you are now admitting it is "far from equivalent". Please go back and redact your claims in other posts where you say they are "very similar" as that is false even in light of your analysis.

PARTICULARLY, redact the post you made in the SILVER repo.

I am convinced that my reasoning about equivalence is correct.

It is incorrect as I have stated, because it attempts to abstract the math without regard for the underlying vision science. If you can't understand this, I don't know how to reach you. You have already stated that you are not interested in reading the documentation, where I have laid it all out.

@Myndex commented Jul 30, 2022

These properties are sorted by priority: Efficacy is more important than everything else. Ergonomics are also relevant. Perceptual uniformity is a nice feature to have in general, but it is not really relevant in the context of WCAG (as long as we have efficacy).

You can not have any efficacy without perceptual uniformity, and if you do not understand that then you are not understanding the subject matter. Efficacy IS perceptual uniformity. Until you grasp that most basic of concepts, this entire exercise is moot.

This is all well discussed in the mountain of peer-reviewed scientific consensus on the subject of color appearance modeling and visual perception. By your own admission you are not educated in this area, and your uninformed opinions herein underline that fact.

I do realize that there is a small group that wants to pretend that there is some "special" nonsense that WCAG 2 somehow provides for, but these false claims have been disproven time and time again. And if I may, it is rather bad form to use math to obfuscate the issue as you have done. It is beyond infuriating, now that I have more completely analyzed your approach. What are you really trying to present here?

You already made a post in Silver claiming that

Math.log((Ybg + 0.4) / (Yfg + 0.4)) * 80

is a suitable replacement for APCA, in an issue post with the extremely provocative title "APCA is very similar to WCAG 2.x contrast with a higher value for ambient light", which itself is untrue and at best misleading, but moreover completely infuriating. And I know you know this.

Why not ask first?

HAD YOU FIRST used the discussion area at APCA to ask questions and help fill in your misunderstandings, that would be one thing. But instead you created this repo which, after lengthy analysis, is missing so many of the key facts and twists the math around in such a completely convoluted and inappropriate manner that it comes across as an attack.

If otherwise, I am happy to entertain your comments as to your motivations here, but that is how it appears.

I am sure you are well aware that the BEST WAY to anger an inventor is to reverse engineer their work and do so crudely and incompletely. Or to analyze the work out of context, which the bulk of your analysis does.

I forked this repo so that I can provide a more complete blow-by-blow rebuttal of this.

@xi (Owner, Author) commented Jul 30, 2022

You can not have any efficacy without perceptual uniformity, and if you do not understand that then you are not understanding the subject matter. Efficacy IS perceptual uniformity. Until you grasp that most basic of concepts, this entire exercise is moot.

It may be that I am not using the word "efficacy" correctly. Let me try to explain what I mean:

Say you have a contrast formula f and some thresholds. WCAG 3 could for example say "the contrast of body text to background as measured by f must be bigger than 20".

If f is perceptually uniform we can find thresholds so that this gives us useful results. So yes, I agree and understand that perceptual uniformity implies efficacy.

You are claiming that efficacy also implies perceptual uniformity. I can disprove that with a simple counterexample: Consider f + 1. This is not perceptually uniform. However, if I also shift all thresholds, I get the exact same results, so efficacy is not affected.
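
A minimal sketch of this counterexample (notation mine):

    // Shifting both the formula and the threshold by the same amount
    // leaves every pass/fail decision unchanged, even though the
    // shifted formula is no longer perceptually uniform.
    const passes = (f, t) => (fg, bg) => f(fg, bg) > t;
    const shift = (f) => (fg, bg) => f(fg, bg) + 1;

    // For any formula f, threshold t, and colors (fg, bg):
    // passes(f, t)(fg, bg) === passes(shift(f), t + 1)(fg, bg)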

The key here is that "perceptual uniformity" refers to f while "efficacy" refers to f in combination with a set of thresholds.

Please let me know if there is a better term for this than "efficacy". But nomenclature aside, I see it as mathematically proven that scaling does not affect efficacy in the sense I described. I am happy to discuss if there are issues with the math or nomenclature. Unless those are brought forward, I will close this issue.

@Myndex commented Aug 1, 2022

@xi

We have a significant disconnect in communication here, similar to what I encountered with a user known as JAWStest, who is also focused on contrast. I am really not sure how to cure this, though in another post I indicated several books that you would find very helpful.

Efficacy just means that it "does what is intended". What is intended is perceptually uniform contrast values over the visual range.

Perceptually uniform MEANS that a given contrast value of two colors will have the same perceptual value as that same contrast value defining two completely different colors.

For readability, we are ONLY concerned with achromatic luminance.

Therefore:

We want Lc 60 where one color is white to be of the same readability contrast as Lc 60 when one of the colors is black. APCA achieves this, and WCAG 2 does not.

In WCAG 2, 4.5:1 when one color is white is completely different from 4.5:1 when one of the colors is black. This is well known, and a recognized deficiency of WCAG 2.
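
Concretely (a quick arithmetic sketch; the luminance values are mine, derived from the WCAG 2 formula): solving the ratio for the unknown color shows which luminances sit at exactly 4.5:1 against white and against black.

    // (Ylight + 0.05) / (Ydark + 0.05) = 4.5, solved for the unknown Y
    const yVsWhite = 1.05 / 4.5 - 0.05; // ≈ 0.183: Y at exactly 4.5:1 against white
    const yVsBlack = 4.5 * 0.05 - 0.05; // = 0.175: Y at exactly 4.5:1 against black
    // Numerically similar luminances, yet the two pairings do not read
    // equally well, which is the non-uniformity described above.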

@xi (Owner, Author) commented Aug 3, 2022

Thanks for these clarifications and also for the recommendations in the other discussion.

Please understand that I am still learning, and the contents of this repository represent my current level of understanding. Therefore it is great when people with more experience point me to things that can be improved. But I cannot improve a section unless I fully understand the issue.

So let me try to repeat what you wrote in my own words to check whether I have understood correctly:

It seems to me that the issue was not with the term "efficacy", but with "perceptual uniformity". I wrote:

perceptual uniformity - If the result of the [contrast] formula is twice as high, the perceived contrast is also twice as high.

You wrote:

Perceptually uniform MEANS that a given contrast value of two colors will have the same perceptual value as that same contrast value defining two completely different colors.

So I thought that "perceptual uniformity" means that there is a proportional relation between the formula and perceived contrast, which according to your definitions is not strictly necessary. With that definition, I understand your statement that efficacy is perceptual uniformity.

The X-rite glossary also writes:

A color space in which equivalent numerical differences represent equivalent visual differences, regardless of location within the color space.
-- https://www.xrite.com/learning-color-education/other-resources/glossary#U

This is very similar to what you wrote, but there is a subtle difference that I think could be another source of misunderstanding: you write specifically about contrast, but X-rite writes about any "numerical difference". If I understand correctly, the term "perceptually uniform" can be applied to different aspects of vision. And "perceptually uniform contrast" is not necessarily the same as "contrast based on perceptually uniform lightness".

So bringing perceptual uniformity into this was my mistake because my understanding of it was somewhat off. Sorry for the detour!

However, if a contrast formula has efficacy, i.e. "does what is intended", I still don't see how applying a strictly monotonic map to the results could ever destroy that property. What am I missing?

@Myndex commented Aug 12, 2022

Hi @xi

Please understand that I am still learning, and the contents of this repository represent my current level of understanding. Therefore it is great when people with more experience point me to things that can be improved. But I cannot improve a section unless I fully understand the issue.

If you are learning, you should be asking questions, not asserting opinions and unsupported statements as if they are facts. Herein lies the infuriating disconnect.

So I thought that "perceptual uniformity" means that there is a proportional relation between the formula and perceived contrast

It is both the consistency of change in value relative to perception (∆Lc = ∆V) AND the consistency of a given value across the visual range, and this last part is the most important.

In the case of WCAG 2 contrast, 4.5:1 does NOT mean the same thing if one of the colors is light or white, vs 4.5:1 when one of the colors is dark or black. It is NOT perceptually uniform.

…If I understand correctly, the term "perceptually uniform" can be applied to different aspects of vision. And "perceptually uniform contrast" is not necessarily the same as "contrast based on perceptually uniform lightness".

Here you are conflating similar but different things.

First of all, the term "perceptually uniform" applies to all forms of perception: visual, aural, tactile. Out of context, "perceptually uniform" lacks specific meaning, other than the basic concept:

  • RELATIVE: A given numerical value calculated by a perceptually uniform model should relate to an equivalent perception across a given range for a given context.

    • For Lc contrast, the context is:
      • readability of small text on a self-illuminated display under standard conditions, meaning:
        • In an ambient illumination at 20% of the display peak white.
        • Where the text is 0.2° to 2.0° in visual angle
        • Where both colors are less than white or greater than black
          • (This is shorthand for stating that if one color is white or black, the other color shall be no closer to that extreme than one code value below white or one code value above black)
    • In this context a given Lc value when one color is black is perceptually uniform for readability for the same Lc value when one color is white.
  • DELTA: A given change in numerical value calculated by a perceptually uniform model should relate to a consistent degree of perceptual change across the range

    • For Lc contrast, this relates to the contrast matching experiments, which work by dividing the available contrast range in half, and also the above noted context.
    • Therefore, dividing an Lc value by 2 results in a perceived halving of the perceived contrast.
    • Except for stimuli above the contrast constancy level, or for stimuli that are outside the visual contrast range for the given spatial frequency.

@Myndex commented Aug 12, 2022

I still don't see how applying a strictly monotonic map to the results could ever destroy that property. What am I missing?

You are conflating linear and non-linear maths, and doing so wrongly; you are further comparing functions with different purposes while ignoring their basis, approximating exponents, and overall corrupting the math, which you then use to make assertions such as "oh, it's not that different".

My question to you is why? Did you decide to develop this on your own? Or who or what group encouraged you to "analyze" the math while fully ignoring all of the underlying vision science?

You are relying on "monotonic" thresholds, which is a spurious assumption on your part. While WCAG 2 may be based around two or three specific "levels", that does not mean that such a guideline is ideal or even useful. It is especially NOT useful in the context of automated color selection, use cases, and spatial frequency considerations.

SPATIAL FREQUENCY is a primary driver of contrast. Your "mathematical analysis" strips all of the spatial frequency sensitivity out of the formula; therefore your analysis is not valid.

Contrast is a perception; it is NOT only about the distance between two colors, yet all of your analysis seems to continue this wrong assumption, and this wrong assumption is a key deficit of the WCAG 2 contrast method.

I don't know how else to make this clear to you.

@xi (Owner, Author) commented Aug 13, 2022

I am not "conflating linear and non-linear maths". I have asked a very specific question and you failed to answer it in your following two comments. "You are doing so wrongly" is not an argument.

I will reopen this issue if anyone can provide an actual argument for how applying a strictly monotonic map to both the color contrast formula and the thresholds (which may or may not be defined based on spatial frequency) can influence efficacy. Until such an argument is brought forward, this issue will remain closed.

@Myndex commented Aug 16, 2022

I have been trying to explain to you why you are wrong in a way that you can understand. Since you are a math major, try this:

How do you determine the hypotenuse (c)? Euclid instructs us:

a² + b² = c²

But you are then claiming that a + b = c, and this is not true.

The square root of (a² + b²) IS NOT equal to (a + b), yet this is what you are claiming with your monotonic argument.

But in the "analysis" you are doing, you are essentially making that assertion.

You are also comparing "non-perceptual lightness" of WCAG 2's luminance to the perceptual lightness curves of APCA. This is a fully invalid comparison. It is an "apples and apricots" comparison at best.

Your assertion that 0.4 is an ambient flare component is fully unsupported by any science. A trivial measurement of the actual flare in anything resembling a standardized environment demonstrates that this is far from the case, and has absolutely no basis in fact.

An Alternate Example

Going back over the 2019 trials, I revisited an earlier model, and I just released it at DeltaPhiStar as a general-purpose perceptual contrast algorithm.

    deltaPhiStar = (Math.abs(bgLstar ** 1.618 - txLstar ** 1.618) ** 0.618) * 1.414 - 40;

    // ** is equivalent to Math.pow

You could remove the final scaling so that:

$$ \lvert bgLstar^{1.618} - txLstar^{1.618} \rvert^{0.618} $$

But this is irreducible. You can not:

$$ ( bgLstar - txLstar ) $$

And hope to be in any way similar in result.

In short, you are stating that you are only "examining the mathematical properties" while ignoring the "visual science aspects", which is a spurious and incongruent argument when the math is specifically modeling the visual quantities.

@xi (Owner, Author) commented Aug 17, 2022

Reopening because this really seems to be at the heart of our misunderstanding.

We might be getting somewhere. In your last example you agree that some steps can be skipped, e.g. the - 40. You also say that some other steps (e.g. the ** 1.618) cannot be skipped without losing important properties. I fully agree with that. The relevant question is: which steps can be skipped and which cannot be skipped?

My answer to this is: You can apply any strictly monotonic map to the final result. In your example, (deltaPhiStar + 40) / 1.414 is monotonic, so your first simplification fits that criterion:

(((Math.abs(bgLstar ** 1.618 - txLstar ** 1.618) ** 0.618) * 1.414 - 40) + 40) / 1.414
= ((Math.abs(bgLstar ** 1.618 - txLstar ** 1.618) ** 0.618) * 1.414) / 1.414
= Math.abs(bgLstar ** 1.618 - txLstar ** 1.618) ** 0.618

Your second simplification cannot be expressed as a strictly monotonic map, so we agree there, too. Counterexample:

(1 ** 1.618 - 0.5 ** 1.618) ** 0.618 > (0.6 ** 1.618 - 0 ** 1.618) ** 0.618
// ≈ 0.78378 > 0.60002

1 - 0.5 < 0.6 - 0
// 0.5 < 0.6

So far, we are in full agreement. I especially agree that you cannot skip the exponents on the individual lightness values. However, I would say that you can skip the ** 0.618 because x ** (1 / 0.618) is strictly monotonic. In other words:

if deltaPhiStar > threshold
then ((deltaPhiStar + 40) / 1.414) ** (1 / 0.618) > ((threshold + 40) / 1.414) ** (1 / 0.618)

That's why I would say that the irreducible form of deltaPhiStar is actually

Math.abs(bgLstar ** 1.618 - txLstar ** 1.618)
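
To illustrate with an arbitrary threshold (a sketch of my argument, not code from DeltaPhiStar): thresholding deltaPhiStar and thresholding the irreducible form classify every color pair identically, because the map between them is strictly monotonic.

    const dps = (bg, tx) =>
      Math.abs(bg ** 1.618 - tx ** 1.618) ** 0.618 * 1.414 - 40;
    const reduced = (bg, tx) => Math.abs(bg ** 1.618 - tx ** 1.618);

    const t = 30;                                       // arbitrary deltaPhiStar threshold
    const tReduced = ((t + 40) / 1.414) ** (1 / 0.618); // the same threshold, mapped

    for (let i = 0; i < 100000; i++) {
      const bg = Math.random() * 100, tx = Math.random() * 100; // L* values in [0, 100]
      if ((dps(bg, tx) > t) !== (reduced(bg, tx) > tReduced)) { // boundary ties aside
        throw new Error("classification differs");
      }
    }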

Does this make sense to you?

@Myndex commented Aug 17, 2022

However, I would say that you can skip the ** 0.618 because x ** (1 / 0.618) is strictly monotonic.

NO IT ISN'T. And it especially is not in terms of a "comparison" to some other method like WCAG 2, and especially not in reference to the "thresholds" that you are stuck on.

APCA is NOT about "thresholds"; it is about a sliding scale, as per perception.

The "plain thresholds" is a bronze level simplification to appease someone that was claiming that the system is too complicated. But that is not a valid comparison for what APCA is and can perform. The Silver level is a sliding scale, and with use cases. None of this is taken into account with how you are handling this "analysis".

@xi (Owner, Author) commented Aug 19, 2022

I was reading On the psychophysical law by Stevens and found this quote:

But let us pursue for a moment the problem of what we might do when we have equated a set of ratios: a/b = b/c = c/d …. We can assume that we have an operation for ordering these values and that a < b < c < d …. The problem then is, how may we assign numerical values to this series?
[…]
An obvious suggestion is that we might convert to logarithms and express the equated ratios as log a - log b = log b - log c = log c - log d, etc. If we then restrict the values of a, b, c, … to positive numbers, we can set up an interval scale in logarithms. There is no a priori reason why we could not put this scale to work in a manner analogous to the workings of our linear interval scales.

This may not be exactly what I did, but it indicates that this kind of scaling is not completely unheard of.

@Myndex commented Aug 19, 2022

Irrelevant whataboutism; this is totally not the point.
