
Contrast Ratio Math and Related Visual Issues #695

Open · Myndex opened this issue Apr 14, 2019 · 67 comments · 9 participants

@Myndex commented Apr 14, 2019

The W3C's specification for determining sRGB contrast, as discussed in "Understanding WCAG 2.0 and 2.1, Minimum Contrast 1.4.3," is not perceptually uniform and as a result produces "contrast ratios" that are not meaningful. The end result is incorrect contrast choices for some web colors. Compounding the problem is the number of "contrast tools" based on this math all over the web, all of which return invalid data.

The end result is websites that may comply with the W3C's math for contrast but are otherwise difficult to read. The bad math, coupled with these contrast tools, has provided designers with color schemes of poor accessibility. This needs to be addressed!

PROBLEM SUMMARY

(L1 + 0.05)/(L2 + 0.05) (aka "simple contrast") is only really useful for determining a monitor's maximum on/off range (#FFF / #000). It fails badly for midrange colors because it does not adjust for nonlinear human perception. As such it cannot be used to programmatically determine the legibility of colors, especially in the middle and darker ranges.
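For reference, the computation being critiqued can be sketched as follows. The constants come from the WCAG 2.x relative-luminance and contrast-ratio definitions; the function names are my own:

```python
def srgb_to_linear(c8):
    """Linearize one 8-bit sRGB channel per the WCAG 2.x definition."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    """WCAG relative luminance Y (0.0-1.0) of a color like '#808080'."""
    h = hex_color.lstrip('#')
    r, g, b = (srgb_to_linear(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def wcag_contrast(color_a, color_b):
    """The 'simple contrast' being critiqued: (L1 + 0.05) / (L2 + 0.05)."""
    l1, l2 = sorted((relative_luminance(color_a),
                     relative_luminance(color_b)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(wcag_contrast('#FFFFFF', '#767676'), 2))  # 4.54, a borderline AA pass
```

Note that the ratio is computed on linear luminance; nothing in it models the perceptual nonlinearity the rest of this issue is about.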

Weber contrast, Michelson contrast, Bartleson-Breneman Perceptual Contrast Length (PCL), or other possible candidates are better choices for programmatic legibility assessment. I am currently conducting studies on a "best" programmatic contrast assessment algorithm for UI/web design and will update this issue as I do. I am presently leaning toward a variation of PCL as it prevents the near-black contrast expansion. Using this and applying an exponent to the luminance data looks promising.

Web Links to some of the pages of the document:
https://www.w3.org/TR/WCAG21/#dfn-contrast-ratio
https://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-contrast
https://www.w3.org/TR/2008/REC-WCAG20-20081211/#relativeluminancedef

PROBLEM EXAMPLE:

I prepared a webpage that demonstrates the problem here:
http://myndex.com/WEB/W3Contrastissue

Here is a reduced resolution screenshot of part of that test page.

[screenshot]

In the above experiment, we set a number of panels to color pairs with a contrast ratio of 4.5:1, which counts as a "PASS" under the W3C spec of minimum contrast for small text. Interspersed among these panels are color pairs, at a contrast of 2.9:1, that the W3C criteria count as a "FAIL" even for large text.

As you can see, many of the "PASS" color pairs are actually hard to read and of low contrast, while all of the "FAIL" pairs are substantially easier to read and of higher perceptual contrast.

The point here is that the "contrast ratios" created by the equations listed in the WCAG documents are not useful or meaningful for determining perceptual luminance contrast.

Part of the reason this is happening is the use of simple contrast (L1/L2), which fails to account for nonlinear human perception of values between #000 and #FFF. Also troubling is the use of outdated standards documents or drafts. I list these issues on the webpage:
http://myndex.com/WEB/W3Contrastissue

The upshot of all this is that if "contrast ratios" are going to be promoted as a means to define color for accessible design, then there needs to be a clear path to assess contrast based on human perception.

Looking for a Solution (EDIT 4/25/19: Better solutions in later posts)

One idea is to process the luminance with an exponent (^1.6) then take 1/3rd the contrast result, using either weber contrast or perceptual contrast length.

The purpose of the exponent is to shift contrast for black/dark text vs. white/light text; this adjusts for our perception that light text on a medium or darker background has higher perceived contrast than dark text on a medium background. The purpose of taking 1/3rd of the result is to bring the output numbers into line with the W3C standard indicating a 4.5:1 contrast.
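As a rough sketch only: the idea above is deliberately loose, so the following simply applies the 1.6 exponent to the luminances, takes Weber contrast, and scales by 1/3. The normalization choice and all names are my own assumptions, not the author's finished formula:

```python
def sketch_contrast(l_a, l_b, exponent=1.6):
    """Hypothetical sketch: Weber contrast computed on exponent-shifted
    linear luminances (0.0-1.0), scaled by 1/3. The exponent and scaling
    are the tentative values from the text, not a finished formula."""
    la, lb = l_a ** exponent, l_b ** exponent
    lmax, lmin = max(la, lb), min(la, lb)
    weber = (lmax - lmin) / lmax  # Weber contrast, normalized to the lighter value
    return weber / 3.0
```

The exponent makes a dark color "darker" before the ratio is taken, which raises the computed contrast of light-on-dark pairs relative to dark-on-light pairs of the same luminance ratio.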

A more ideal solution would be to commission a study with human subjects of various visual impairments to fine-tune a model for programmatic contrast assessment.

-Andrew Somers
Title Supervisor
General Titles & Visual Effects
Hollywood, Ca.

@patrickhlauke (Member) commented Apr 14, 2019

@Myndex (Author) commented Apr 14, 2019

xref #360 (comment)

Thank you Patrick but as you can see I already posted in that thread. That is a separate and somewhat minor issue. The issue I discuss in THIS thread is specifically about the minimum contrast 1.4.3, and is not minor as it has far-reaching consequences such as a ton of apps that now incorrectly present colors as "accessible" when in fact they are not. And this issue has led to a great deal of misunderstanding regarding color choices and contrast.

The thread you linked to deals with a minor error in the relative luminance equation, and while the W3C picked the wrong equation that is not what is causing the much more serious problem that I outline in this separate issue.

The issue HERE is using a simple contrast (L1/L2) on linear luminance to define color & luminance contrast. But this does not provide any meaningful value for perceived contrast.

I posted this as an issue for discussion while I continue my research (the search for an accurate programmatic contrast assessment); a pull request will follow separately.

@Myndex (Author) commented Apr 15, 2019

Some additional thoughts regarding 1.4.3

FWIW MY BACKGROUND: I work in the film and television industry in Hollywood as an editor/colorist/VFX & Title Supervisor. I work with color and visual perception issues every day.

I am going to continue to post in this thread while I delve further into this before generating a pull request. It is a concern for me because this W3C document is considered authoritative, and has made its way into government regulations. It is important that it be correct, and it is not at present. PLEASE COMMENT if you have thoughts or insights as to why some of these choices were made. Thank you.

On Terms:

Simple contrast "is not useful for real-world luminances, because of their much higher dynamic range and the logarithmic response characteristics of the human eye."[1]

and using simple contrast seems to have led to the higher (4.5:1) contrast specifications:

What is the citation and specific justification for the claimed need for a 4.5:1 contrast ratio? Studies by Legge, Rubin, Bangor, etc. found that "Contrast by itself had no significance for either vision group" [unimpaired or impaired]. [2]

However, font size and polarity are very important, and contrast does interact with very small font sizes to a degree, especially in negative polarity. The Bangor study indicated that font sizes below 18 px resulted in a need for increased contrast, but those study participants were either legally blind (20/200) or very impaired (20/100).

It is NOT about contrast as much as size and possibly polarity. While it is true that increasing contrast can help legibility of small fonts for the visually impaired, increasing the font size offers a better improvement.

As I mentioned in my first post, Weber or Michelson contrast is possibly better here, and one of those is what is used in nearly all the research & standards. However, I am working with Bartleson-Breneman perceptual contrast at the moment.

To make this point more clear: the "simple" ratio of #FFF to #808080 is 4.6:1 (3.95:1 if you add in the W3C's 5% bonus luminance). But #808080 to #040404 is a ratio of 178.88:1 (5.19:1 using the 5% extra).

#FFF is luminance 100, mid-grey #808080 is 21.59, and #040404 is 0.12.

So, ignoring the oddly-applied/misapplied "flare" value, white to mid grey is a ratio of 100:21.6 and mid grey to black is a ratio of 21.6:0.12.
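Using the luminance figures just quoted (scaled to 0-1), these ratios can be checked directly; the helper below is a sketch covering both variants, with and without the 0.05 "flare" term:

```python
def ratio(y_light, y_dark, flare=0.0):
    """Simple contrast ratio of two relative luminances (0.0-1.0 scale)."""
    return (y_light + flare) / (y_dark + flare)

y_white, y_mid, y_black = 1.0, 0.2159, 0.0012  # #FFF, #808080, #040404

print(round(ratio(y_white, y_mid), 1))        # 4.6   (white : mid-grey)
print(round(ratio(y_white, y_mid, 0.05), 2))  # 3.95  (with the 5% flare)
print(round(ratio(y_mid, y_black), 1))        # 179.9 (mid-grey : near-black)
print(round(ratio(y_mid, y_black, 0.05), 2))  # 5.19  (with the 5% flare)
```

The wildly different numbers for pairs of similar perceptual contrast are exactly the problem being described.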

BUT because the first much smaller ratio is also associated with high luminance it is much easier to read and has much better PERCEPTUAL contrast:

4.6:1 (3.95:1)
[screenshot]

And the black one with 179:1 contrast (LOL, 5.19:1 with the cheat)
[screenshot]

I find no justification for the 4.5:1 contrast ratio for 20/40 vision as indicated in the W3C's standard. Is it set that way (along with the excessive flare luminance add) to attempt to make up for the other deficiencies?

See also reference [3] below, a EU paper on this subject.

Contrast sensitivity is a separate measurement from visual acuity. From the referenced Arditi paper: "visual acuity measurements alone are insufficient to characterize basic spatial visual function..." But I don't see where the multiplication of the well-established ISO 3:1 standard by 1.5 comes from. Looking at acuity vs. contrast graphs I see a difference in CS as low as 5% for a 20/40 person. And as I recall, the common 3:1 luminance contrast ratio included near-normal vision (20/40 is near normal).

Here's a graph, (for reference logMAR 0.3 is approximately 20/40.)

[graph: contrast sensitivity vs. visual acuity]

In short, it appears to me the 4.5:1 contrast standard is somewhat arbitrary, and there are other more important means to improve accessibility, namely font size, appropriate polarity, and total luminance.

LUMINANCE:
[screenshot]

NITS TO THE RESCUE! (by nits I mean cd/m^2, 1 cd/m^2 is 1 nit ... but maybe I also mean ME, nit-picking on this issue, LOL).

The sRGB spec states an 80 nit monitor; however, people commonly adjust them to 120 to 160 nits or even more (300+ is common; some phones do 1200). If the monitor is brighter and the material is black text on white, the light from the monitor results in pupil contraction, which improves perceived sharpness.

I'll opine that it is more important to have a monitor that is adjusted bright enough for its environment. In fact it would be a good idea to lobby the ISO for an amendment to the sRGB spec to adjust away from 80 cd/m2 to a specific luminance based on the environment. 1996 was a long time ago, and display technology has changed substantially — we shouldn't have to adjust the ROOM lighting to match the monitor, it's easier to adjust the monitor. A standard stating the max display luminance for a given ambient light would go a long way toward real accessibility/accommodation.

SUMMARY:

The main thing I am lobbying for here is a revised programatic contrast assessment that is perceptually correct. But as I research this, I see there are other concerns that should be considered.

  • New contrast assessment method, with a revision of the standard to accurately model perceptual needs. I.e. no simple ratios, etc.
  • Among the contrast revisions, provide guidance on excessive contrast — too high of a contrast causes reading problems and eye strain as well.
  • Add guidance on display polarization (light on dark vs dark on light) to the standard.
  • Along with polarization, guidance for local adaptation which is part of the issue with current contrast assessments. (Local adaptation is one reason that white on a darker background looks more contrasty than grey on a dark background even if the luminance ratio is the same).
  • Emphasize font size for accessibility, with guidance to avoid ever overriding the user agent's root font size.
  • Lobby ISO to amend sRGB to provide a standard for adjusting a display to the room. 80 nits is really a bit dim in a lot of environments.

Thank you for reading. I hope to have a solid contrast assessment model this week.

Andy

Refs:
[1] https://www.schorsch.com/en/kbase/glossary/contrast.html
[2] https://pdfs.semanticscholar.org/4f9f/f4bcbc0eb8d1040228e5d84cd0c0e75962c7.pdf
[3] https://www.anec.eu/images/Publications/technical-studies/ANEC-final-report-1503-1700-Lenoir-et-al.pdf

Edited May 22 for typos and some clarity.

@mraccess77 (Contributor) commented Apr 15, 2019

Hi @Myndex

  • I agree that there are some combinations that pass that should fail, and some that fail that should pass. However, in general the algorithm works well and has made a big difference in ensuring that text has better contrast.
  • While I agree that lower acuity doesn't necessarily mean less contrast sensitivity, once you get into the 20/70 - 20/200 levels folks tend to have eye conditions with multiple factors, and there is a larger correlation between the two.
  • 20/40 may not be visually impaired, at least by most US standards. 20/70 is generally considered visually impaired by many, and I believe the criteria were written to help many of us who use vision in the range of 20/70 - 20/400. For someone with 20/200, contrast very much tends to be an important issue, and these guidelines were written for folks in this range. 4.5 is very much a tipping point for many of us, and I do not agree that 4.5 is too high for someone who is visually impaired or legally blind. Many legally blind people use their vision and require this type of minimum contrast. So if the understanding text is not clear around this area, then we need to update it.

@Myndex (Author) commented Apr 16, 2019

Hi @mraccess77

Thank you for commenting, it helps me to see when I am not explaining or describing completely. But perhaps more important it leads to some new discoveries. In answering your post I did some experiments that add insight. More below.

First, just to provide a little more background, I want to mention that I have personal experience with 20/200 vision. Several years ago (in my late 40s) I developed early onset cataracts which brought my vision to worse than 20/200 in one eye, the other diminished a bit less. I now have IOL implants, but those surgeries caused vitreal detachments and retinal detachments, requiring a vitrectomy in one eye, and in the other, continuing issues due to large vitreal floaters that still can interfere with reading. Also, I still need glasses (i.e. it is trivial for me to remove them to introduce poor vision).

As such WCAG is a topic I have a close personal interest in.

As I mentioned in an above post, I am an imaging professional in Hollywood with a career that spans decades and a background that includes broadcast engineering, colorist, VFX Supervisor, and perhaps most relevant, title designer.

I came across this WCAG issue while developing a CSS framework. For the color module I am trying to create a simple color subset that is both easy to read and aesthetically pleasing. This led me down the color and vision theory rabbit hole, where I stumbled on a contrast calculator and saw an odd comment by the coder who mentioned that his "calculate" button did not pass the WCAG standard. The button is completely readable with more than adequate contrast, thus started my present research journey.

I have since been doing nothing but research this issue in depth. My posts here are based on that research and my extensive experience in digital imaging.

  • mraccess77 said: I agree that there are some combinations that pass that should fail and that there are some situations that fail that should pass. However, in general the algorithm works well and has made a big difference to ensure that text has better contrast.

I cannot agree here. Estimating roughly, 40% of the color pairs the WCAG math calls "PASS" are poor in quality and should fail. And somewhere in the area of 51% of the colors it fails could conceivably pass.

Wrong nearly half the time is one huge fail. I believe it is the result of incorrect assumptions and cherry picking various bits of standards and cobbling them together into something that is truly inaccurate and unsuitable for the purpose.

I'm going to quote Whittle from his paper [1] (emphasis added)

With regard to the mathematical description, an important distinction is between ratios and contrast expressions that also involve differences. This concerns what is meant by 'contrast'. Weber contrast and Michelson contrast are the commonest expressions for it, not the simple ratio L/Lb. Weber contrast = (L − Lb)/Lb, usually written ΔL/Lb. When the backgrounds are weak a 'dark light' constant must be included: ΔL/(Lb + L0). Michelson contrast = |ΔL|/(L + Lb). When people talk of contrast coding in the early stages of the visual pathway, they usually mean that the firing rate of sensory neurons is a function of, perhaps proportional to, Weber contrast or Michelson contrast. Such a code combines differencing (calculating L − Lb) with normalisation (attenuating by some function of the absolute stimulus level).

And then from Pelli's paper [2] (emphasis added)

The contrast of the target quantifies its relative difference in luminance from the background, and may be specified as Weber contrast (Lmax − Lmin)/Lbackground, Michelson contrast (Lmax − Lmin)/(Lmax + Lmin), or RMS contrast Lσ/Lμ, where Lmax, Lmin, Lbackground, Lμ, and Lσ are luminance maximum, minimum, background, mean, and standard deviation, respectively. Weber contrast is preferred for letter stimuli, Michelson contrast is preferred for gratings, and RMS contrast is preferred for natural stimuli and efficiency calculations (Bex & Makous, 2002; Pelli & Farell, 1999). Threshold contrast is the contrast required to see the target reliably. The reciprocal of threshold is called sensitivity.

The WCAG ignores this wholesale, despite it being prominent in most research and even noted in the ISO and ANSI standards. WCAG is using simple contrast (Lw/Lk) and that is one of the errors.
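For comparison with the WCAG simple ratio, the three definitions quoted above translate directly into code. This is a sketch: inputs are linear luminances, and Whittle's dark-light constant is omitted:

```python
def weber_contrast(l_target, l_background):
    """Weber contrast, ΔL/Lb: the usual choice for letter stimuli."""
    return (l_target - l_background) / l_background

def michelson_contrast(l_a, l_b):
    """Michelson contrast, |ΔL|/(Lmax + Lmin): the usual choice for gratings."""
    lmax, lmin = max(l_a, l_b), min(l_a, l_b)
    return (lmax - lmin) / (lmax + lmin)

def rms_contrast(luminances):
    """RMS contrast: standard deviation over mean, for natural stimuli."""
    n = len(luminances)
    mean = sum(luminances) / n
    variance = sum((l - mean) ** 2 for l in luminances) / n
    return variance ** 0.5 / mean

# Dark text (Y = 0.05) on a light background (Y = 0.90):
print(round(weber_contrast(0.05, 0.90), 2))      # -0.94 (negative: darker than bg)
print(round(michelson_contrast(0.05, 0.90), 2))  # 0.89
```

Unlike the simple ratio, Weber contrast is signed (it distinguishes polarity) and normalizes the luminance difference to the background, which is why it dominates the legibility literature.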

Among my findings, some of the sites I have the most difficulty reading are "compliant" with the WCAG. There are countless contrast checkers and other automated or semi-automated accessibility checkers that use the algorithm as written, and they all fail to detect poor perceptual contrast.

And there are a LOT of combinations that pass but are hard to read. An accessibility site has this on one of their pages indicating the problem:

[screenshot]

The contrast checkers are all using this flawed model which ignores many important aspects of perception. The results are all over the place and inconsistent. I can see why designers are ignoring the contrast recommendations: they are ambiguous and inconsistent at best. At the moment, visual judgement does a better job than the contrast calculators.

I have a page where I am conducting live experiments in this regard, along with some commentary on my findings as I go. The link is:
https://www.myndex.com/WEB/W3Contrastissue

Here are some examples from today's experiments:

EDIT: The image below was changed (4/16) as the previous version was scaled wrong compared to the following image in this post.
[screenshot]

Today, I've been looking into luminance — it is well known and researched that increasing luminance improves readability. One thing the WCAG lacks is a specification on minimum lightness for the lightest element in a color pair.

[screenshot]

I have more on this on the page, but as you can see, setting a minimum lightness of the lightest element to #AAA results in a consistent, readable, block of text. On the page you'll see examples where pages with 3:1 contrast and minimum #AAA on the brightest of the pair is more readable than 4.5:1 contrast on darker pairs.

  • mraccess77 said: While I agree that lower acuity doesn't necessarily mean less contrast sensitivity -- once you get into the 20/70 - 20/200 levels folks tend to have eye conditions with multiple factors and there is a larger correlation between the two.

Yes, I am aware, there is definitely correlation to contrast sensitivity due to a number of vision impairments. Indeed, one can have good visual acuity and bad contrast sensitivity. The WCAG does not discuss CS though, and only lists some Snellen numbers like 20/40.

I was talking mainly of 20/40, which is what the WCAG talks about regarding AA. For the portion of the standard that relates to the profoundly impaired, the math provided still does not create useful numbers for guidance.

And that is the point I am getting at. The math is essentially wrong. Lw/Lk is not well suited for determining contrast in this context. The vast majority of research on contrast sensitivity uses WEBER or MICHELSON or both. Rarely simple contrast. But there are other math mistakes in WCAG related to sRGB and computer displays that also need to be addressed.

(EDIT by Andy: May 2019: My recent experiments and research indicates that a "classical, unmodified Weber contrast" is really not "substantially" better than the WCAG math, though there is a modified Weber from Hwang/Peli that is much better than the WCAG math, and other more modern contrast equations such as PCL).

  • mraccess77 said: 20/40 may not be visually impaired at least by most US standards. 20/70 is generally considered visually impaired by many and I believe the criteria were written to help many of us who use vision in the range of 20/70 - 20/400. For someone with 20/200 contrast very much tends to be an important issue and these guidelines were written for folks in this range. 4.5 is very much a tipping point for many of us and I do not agree that 4.5 is too high for someone who is visually impaired or legally blind. Many legally blind people use their vision and require this type of minimum contrast. So if the understanding text is not clear around this area then we need to update it.

I never said that 4.5:1 is too high, particularly in regards to profound impairment, 4.5:1 is TOO LOW when using the WCAG math. WCAG indicates 7:1 for the more visually impaired (AAA). And yes, the "understanding" text is all over the place and not clear.

To be clear, I am not "condemning" the 4.5:1 ratio per se, but I am questioning where it was derived from and the basis of the equations when those equations are not supported by standards nor research. I am also pointing out that luminance is a much bigger factor than contrast yet it is not mentioned, nor is local adaptation unless I missed seeing that. My statements here are from published research as well as my own research.

The California Council of the Blind (Lozano, 2009) states, and Federal ADA guidelines also state, that contrast for signs should be 70% (note it is a percentage, not a ratio). The math used for the Federal standard is (B1 − B2)/B1 (B1 is the lighter LRV, and B2 is the darker).[3] Now, if I use the WCAG math, 70% equals a ratio somewhere around 2.3:1 to 3.2:1. WCAG math is all over the place and does not relate to Weber's law nor anything else useful.
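The drift described here can be illustrated numerically: holding the Federal formula at exactly 70% and sweeping the lighter value shows the corresponding WCAG ratio changing with absolute luminance. This sketch treats LRV/100 as WCAG relative luminance, which is an approximation:

```python
def federal_contrast(b1, b2):
    """Federal ADA signage contrast (B1 - B2)/B1, B1 being the lighter LRV."""
    return (b1 - b2) / b1

def wcag_ratio(y1, y2):
    """WCAG simple contrast on relative luminances (0.0-1.0), y1 >= y2."""
    return (y1 + 0.05) / (y2 + 0.05)

# Hold Federal contrast at exactly 70% while varying the lighter value B1:
for b1 in (100, 60, 30):
    b2 = b1 * (1 - 0.70)  # darker LRV that yields exactly 70% Federal contrast
    print(b1, round(wcag_ratio(b1 / 100, b2 / 100), 2))
```

At a fixed 70% Federal contrast, the WCAG ratio falls from 3.0 toward 2.5 as the pair darkens: one fixed percentage maps to a range of WCAG ratios rather than a single value.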

Nevertheless, California Council of the Blind (Lozano, 2009)[4] is on record stating that the Federal equation is flawed when B1 is less than 45. I just came across this a minute ago — I'm slightly amused as it is closely mirroring what I have been saying about the WCAG.

I describe this in more detail on the experiments page, and there are more examples.

In closing I just want to say that simply switching the equation to Weber is not the complete answer. I think we can do better, and that is the focus of my research.

Thank you again for the comments.

Andy

REFS:
[1] http://aardvark.ucsd.edu/color/whittle.pdf
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3744596/
[3] https://segd.org/sites/default/files/SEGD_2012_ADA_White_Paper_Update.pdf
[4] https://www.documents.dgs.ca.gov/dsa/access/Pt2_Final-SOR.pdf

Edited May 2019 for some minor clarity fixes.

@mraccess77 (Contributor) commented Apr 16, 2019

Hi @Myndex I appreciate your efforts in addressing the shortcomings of the current algorithm. In your examples, I personally found some of the first 4.5 items easier to read than the ones with values greater than #AAA -- thus I know there will always be differences in interpretation by different people, as we all see differently.

Along those lines, with the adaptation you were discussing, halos may technically be used to meet the requirement, but when you take into account the width of the stroke and surrounding colors, haloed text can actually be harder for me to read. I agree that we want more people to use contrasting colors that meet users' needs, and if we can change the algorithm to meet those needs without lessening it and get more adoption, that would be a good thing.

Personally, I see these changes as something that can't be changed with the current standard, as the method is too normative to change with an errata, but it would be a great opportunity to address for the next version of the accessibility guidelines (Silver).

It would be good to socialize this with some other folks such as Jared Smith from WebAIM, who also would like to change the future direction of the contrast calculations, and the Low Vision Accessibility Task Force, which is part of the Accessibility Guidelines Working Group. Adding @WayneEDick and @allanj-uaag.

@bruce-usab commented Apr 16, 2019

Thanks @Myndex for writing this up so thoroughly!

@Myndex (Author) commented Apr 16, 2019

Hi @mraccess77

Hi @Myndex I appreciate your efforts in addressing the shortcomings of the current algorithm. In your examples, I personally found some of the first 4.5 items easier to read than the ones with values greater than #AAA.

If that is based on the images in my post above, I should mention that the SIZE of the first set is 30% larger, and therefore easier to read (I just realized the scaling error, due to how this site handles images, as I looked at the post; the post is now edited to correct it). BUT ALSO, three of the first 5 have the brightest color well above #AAA. For an apples-to-apples comparison, please see the live experiment on the website: https://www.myndex.com/WEB/W3Contrastissue
Text scaling on the site is equal within each group (the first group is general contrast assessments, and the second is body text).

thus I know there will always be differences in interpretation by different people as we all see differently. Along those lines with the adaptation you were discussing, halos may technically be used to meet the requirement but when you take into account the width of the stroke and surrounding colors haloed text can actually be harder for me to read.

Indeed, for instance, research shows that most people do better with dark text on a light background (Positive Display) but with my vision, I much prefer light/colored text on a black background (Negative Display). Right now I am having difficulty with THIS site due to the bright background (L* 98 in the text area) yet for most people this is the ideal presentation.

I agree that we want more people to use contrasting colors that meet users' needs and if we can change the algorithm to meet those needs without lessening it and get more adoption that would be a good thing. Personally, I see these changes as something that can't be changed with the current standard as the method is too normative to change with an errata but would be a great opportunity to address for the next version of the accessibility guidelines (Silver).

Yes I see it was just tagged as WCAG 2.2 which I was somewhat expecting. Correcting the algorithm also means changing the standard, as far as I can tell the current standard(s) seem to be compensating for the math issues — I don't know the complete history, but that is how it appears based on the reverse engineering/analysis.

For an errata, it might be useful to place a note to the effect of: "Current contrast algorithms may overvalue contrast for pairs of darker colors. Designers should be cautioned not to rely on contrast numbers in these cases."

I should note that problems and controversy on this very subject are visibly present in the research and some standards. It is partly why I am being so proactive here. I hope to cut through the clutter to bring some clarity (puns more or less intended).

It would be good to socialize this with some other folks such as Jared Smith from WebAIM who also would like to change the future direction of the contrast calculations and the Low Vision Accessibility Task Force which is part of the Accessibility Guidelines Working group. Adding @WayneEDick and @allanj-uaag.

Excellent. What are the deadlines for 2.2? As I mentioned in one of my posts while there is much research on simple monitor displays (i.e. black on white, white on black), there is not much in the way of research on complex, graphically rich content (that I've found anyway). I'm thinking some empirical studies would be illustrative.

ON ALGORITHMS:
Weber has been the "gold standard" for text contrast. But I am presently leaning toward a modified version of Perceptual Contrast Length (PCL) which "essentially" uses a modified L* type curve (perceptual lightness) in calculating contrast. I believe I have a way to adapt it so the contrast results are similar to what people are used to using (i.e. 3:1 etc).

Another idea is using the difference between brighter and darker L* values (as in CIE L*a*b*).
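The L*-difference idea can be sketched with the standard CIE conversion from relative luminance Y to lightness L*; the difference function here is illustrative, not a published metric:

```python
def cie_lstar(y):
    """CIE 1976 lightness L* (0-100) from relative luminance Y (0.0-1.0)."""
    eps = (6 / 29) ** 3  # ~0.008856, the CIE threshold between the two branches
    f = y ** (1 / 3) if y > eps else y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

def lstar_difference(y1, y2):
    """Illustrative contrast candidate: the difference of the two lightnesses."""
    return abs(cie_lstar(y1) - cie_lstar(y2))

print(round(cie_lstar(0.2159), 1))  # 53.6: mid-grey #808080 sits near mid-lightness
```

Because L* is roughly perceptually uniform, equal L* differences correspond to roughly equal perceived lightness steps, which is exactly what the simple luminance ratio lacks.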

Those are all fairly simple models for contrast determination. A more advanced approach is a true color or image model like CIECAM02 or ICAM. ICAM is the work of Mark Fairchild at Rochester Inst. of Technology. A model like that could (I believe) analyze an overall page, as opposed to a pair of colors in isolation.

On the experiments page there is an example of local adaptation issues due to surrounding colors. But here's a quick example:

[image: identical blue-on-grey text samples, one surrounded by white and one by black]

The blue text on grey is WCAG 4.5:1 contrast, and both bits of text are identical. But the one centered on black is more readable because the black allows local adaptation to the darker colors. So, among other things, minimum padding for elements against a highly contrasting color is important.

(I'll mention in passing that pair of blue on grey is a fail in my modified PCL algorithm).

And this is where things are more complicated than a simple contrast — web content is graphically rich. Text on a background may be a pass, but if it is far different than the overall page, adaptation will affect perceived legibility.

Thank you again,

Andy

@Myndex (Author) commented Apr 16, 2019

Thanks @Myndex for writing this up so thoroughly!

Thank you @bruce-usab — I consider this a particularly important issue, partly because W3C standards are used not just for the web but for app design and other applications as well. Because it is a freely distributed standard, it has a very wide reach. Myself, I spend 12 hours a day in front of monitors; while that may be more than average, displays are certainly integral to the lives of so many — we are inseparable from our technology — and standards like this have a very real effect on people's lives. This standard in particular has become part of government regulations, for instance.

What is the timeline/deadlines for 2.2? I'm hoping to have some candidate contrast models soon, but also thinking there is one giant rabbit hole to crawl down considering how page complexity affects perception (adaptation) etc.

Thank you!

Andy

@WayneEDick commented Apr 17, 2019

@Myndex (Author) commented Apr 17, 2019

There is a mathematical problem with this discussion line. L1 and L2 are computed using weightings that take color receptivity into account. R, G, and B have distinct weights in the relative luminance formula.

Hi @WayneEDick , thank you for commenting. I’m away from the studio, on location, so I can’t comment in depth, but:

Yes, luminance is spectrally weighted. However luminance is a linear measure of light.

Light is linear (additive) but human vision is NOT linear (essentially a power curve). So while the sRGB coefficients adjust for spectral sensitivity, luminance is NOT relative to PERCEPTION of lightness. L* is perceptual (CIELAB), and gamma-encoded transfer curves are somewhat perceptual (such as luma, the Y′ of Y′IQ), but not luminance (Y).

But that’s not even the most relevant part. L1/L2 is called “simple contrast” and it is wrong in this context. The “standard” for contrast for TEXT is Weber, which is based on ∆L/L typically as
(LMax-Lmin)/Lbackground or (LMax-Lmin)/LMax
And is in fact still using linear luminance. But there are some better more modern implementations now such as Perceptual Contrast Length (PCL) which essentially converts to a modified L* (perceptual lightness).
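To make the difference concrete, here is a minimal sketch (my own illustration, not code from any spec; function names are mine) comparing "simple contrast" with the Weber fraction for two linear luminance values:

```javascript
// Compare "simple contrast" (Lmax/Lmin) with Weber contrast (dL/Lmax)
// for two linear luminance values normalized to 0-1.
function simpleContrast(lMax, lMin) {
  return lMax / lMin; // blows up as lMin approaches 0
}

function weberContrast(lMax, lMin) {
  return (lMax - lMin) / lMax; // bounded 0-1
}

// Midrange pair: the simple ratio looks modest...
console.log(simpleContrast(0.5, 0.2).toFixed(2)); // "2.50"
console.log(weberContrast(0.5, 0.2).toFixed(2));  // "0.60"

// ...but near black the simple ratio explodes while Weber stays bounded.
console.log(simpleContrast(0.05, 0.001).toFixed(1)); // "50.0"
console.log(weberContrast(0.05, 0.001).toFixed(2));  // "0.98"
```

The near-black behavior is the "contrast expansion" mentioned above: the plain ratio rewards very dark pairs far beyond their perceived contrast.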

This issue came to my attention when I saw the contrast equation was wrong (as is the sRGB threshold WCAG lists), as I have outlined in my posts above. But now that this is being discussed, we can do better. I am currently investigating PCL and other methods.

For further details, I suggest Charles Poynton’s GAMMA FAQ and COLOR FAQ. Here’s a link: https://poynton.ca/GammaFAQ.html
but please feel free to ask me any other questions.

-Andy

@bruce-usab commented Apr 17, 2019

@Myndex, you asked:

What is the timeline/deadlines for 2.2?

The formal/approved Project Plan has a goal of this time next year for the first public working draft.

I'm hoping to have some candidate contrast models soon, but also thinking there is one giant rabbit hole to crawl down considering how page complexity affects perception (adaptation) etc.

I am not optimistic about the chances for wholesale replacement formulas for 2.2. That is possible for 3.0.

This standard in particular has become part of government regulations, for instance.

Yes. I am one of the actors in helping that happen.

There was some user testing associated with the validation of the 2.0 formula. I could not quickly find a cite for that. My recollection is that the hard data pointed to a ratio of 4.65:1 as a defensible break point. The working group was close to rounding that up to 5:1, just to have round numbers. I successfully lobbied for 4.5:1 mostly because (1) the empirical data was not overwhelmingly compelling, and (2) 4.5:1 allowed the option for white and black (simultaneously) on a middle gray.
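The arithmetic behind that middle-gray option can be checked directly; a sketch (relative luminances normalized 0-1, function name is illustrative):

```javascript
// WCAG 2.x contrast ratio: (Llighter + 0.05) / (Ldarker + 0.05)
function wcagRatio(l1, l2) {
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// Gray luminance where white-vs-gray equals gray-vs-black:
const gray = Math.sqrt(1.05 * 0.05) - 0.05; // about 0.179

console.log(wcagRatio(1.0, gray).toFixed(2)); // "4.58" against white
console.log(wcagRatio(gray, 0.0).toFixed(2)); // "4.58" against black
```

Both ratios come out to roughly 4.58:1, so a 4.5:1 threshold does indeed leave room for black and white simultaneously on that gray, while 5:1 would not.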

I am sorry to say that I will be offline for the next ten days or so, but I will be circling back to this!

@bruce-usab commented Apr 17, 2019

@Myndex, this one assertion leapt at me:

The California Council of the Blind (Lozano, 2009) states, and Federal ADA guidelines also state, that contrast for signs be 70% (note it is a percent not a ratio). The math used for the Federal standard is (B1-B2)/B1 (B1 is the lighter LRV, and B2 is the darker).[3]

That formula was only ever intended for reflective light, not luminescence. It was promulgated in the 1991 ADAAG and was sufficiently problematic that it was dropped in the 2004/2010 ADAAG/ADAAS.

Your citation [3] clearly states (more than once) that 70% “is no longer a requirement”.

@alastc (Contributor) commented Apr 17, 2019

Hi @Myndex,

Just upfront - I strongly suggest we come to a resolution on this issue before you spend time creating a PR.

estimating roughly, 40% of the color pairs the WCAG math calls "PASS" are poor in quality and should fail. And somewhere in the area of 51% of colors they fail could conceivably pass.

This doesn't match my testing with people over the years. Not a large scientific study, but 100s of tests (since the early 2000s) with people with low-vision. Whenever there was a color combination that people struggled with it virtually always failed on the contrast level checks.

I've also found there are huge differences between people and the particular colors that were an issue for them. E.g. Some participants couldn't see a strong pink on white, which others couldn't ignore as it was so intense.

Broadly I think the context that you need to account for is what the guidelines are for, and how they are used. A method to measure contrast for the web content accessibility guidelines needs to:

  • Work across many types of visual impairment. Low vision is a general category, various forms impact color perception in different ways.
  • Work across many devices. As a web designer your work will be viewed on cheap & expensive phones, expensive Macs (or Chromebooks), cheap Chromebooks (or Windows laptops), TVs, projectors, etc. Therefore it cannot account for the brightness of the display, let alone the surroundings. It has to be a measure based on the way the web page is set up.
  • Be simple to test. I.e. a web designer needs to be able to test their own work easily.

A lot of the factors you added in the summary above cannot be accounted for in a web standard (e.g. display polarization, nits).

Also, at least some of the examples you created have the same 'background bias' effect I mentioned here, perhaps you know the name for that effect? I.e. having a different general background behind the area of interest affects the perceived readability. Reading on, I guess this is the 'local adaptation' issues?

In short, I don't think there is such a thing as a "revised programatic contrast assessment that is perceptually correct", but I'd love to be wrong. A change would need a lot of real-world testing to ensure it provides better results.

What is the timeline/deadlines for 2.2?

Given the scale of change this would require (including the research), I suspect it would be a 3.0/Silver type of thing to do.

@Myndex (Author) commented Apr 18, 2019

@Myndex, this one assertion leapt at me:

The California Council of the Blind (Lozano, 2009) states, and Federal ADA guidelines also state, that contrast for signs be 70% (note it is a percent not a ratio). The math used for the Federal standard is (B1-B2)/B1 (B1 is the lighter LRV, and B2 is the darker).[3]

That formula was only ever intended for reflective light, not luminescence. It was promulgated in the 1991 ADAAG and was sufficiently problematic that it was dropped in the 2004/2010 ADAAG/ADAAS.

Your citation [3] clearly states (more than once) that 70% “is no longer a requirement”.

This is the section in reference [3] I was referring to (I did not read the entire document, I was mainly pointing out the continued controversy and unsettled nature of the issue):

(screenshot of the cited passage from reference [3])

Historical and current studies of contrast sensitivity put the threshold at typically about 1% to 1.6% over a wide range, from 7 or 8 cd/m2 to over 500 cd/m2. NASA also found that below 8 cd/m2, contrast sensitivity increasingly fails.

But on the subject of reflected light vs emitted light: both can be measured in luminance. Luminance is proportional to both illuminance and reflectance.

And this is one of the HUGE ENORMOUS PROBLEMS facing us in the present conversation, I have seen two COMPLETELY DIFFERENT definitions of LRV. The correct definition of LRV is based on luminance (Y or L) which is linear light, yet some sources state it is based on lightness (L* as in CIELAB, L* a* b*) which is perceptual lightness NOT linear light.

YIKES. It appears this stems from the error in the 1991 ADAAG, which from what I have been reading was using Weber on L* and not luminance?

@Myndex (Author) commented Apr 19, 2019

Hi @Myndex,

Just upfront - I strongly suggest we come to a resolution on this issue before you spend time creating a PR.

Hi @alastc I agree and said as much in one of my posts, it is why I am posting an issue instead of a pull request first.

estimating roughly, 40% of the color pairs the WCAG math calls "PASS" are poor in quality and should fail. And somewhere in the area of 51% of colors they fail could conceivably pass.

This doesn't match my testing with people over the years. Not a large scientific study, but 100s of tests (since the early 2000s) with people with low-vision. Whenever there was a color combination that people struggled with it virtually always failed on the contrast level checks.

That's good to know, though I am concerned about the large number of sites I encounter that pass the test yet I find very hard to read. I am less concerned with false fails and more concerned about the false passes in other words.

I've also found there are huge differences between people and the particular colors that were an issue for them. E.g. Some participants couldn't see a strong pink on white, which others couldn't ignore as it was so intense.

A "strong" pink on white should have a fairly low luminance contrast and should fail with proper math, though the WCAG math might pass it in some cases. The problem with hue is that people with color-deficient vision rely on luminance contrast. But also, a light background changes the perception of text & contrast vs. a dark background. I'm wondering how those who had a hard time with pink on white would have seen white on pink.

Broadly I think the context that you need to account for is what the guidelines are for, and how they are used. A method to measure contrast for the web content accessibility guidelines needs to:

  • Work across many types of visual impairment. Low vision is a general category, various forms impact color perception in different ways.

Yes, the main issue is adequate luminance contrast. A useful tool for designers might be one that captured a website and converted it to greyscale based on luminance, so the designer could see the luminance contrast without being influenced by hue.
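As a sketch of what such a tool could do per pixel (assuming the standard sRGB-to-luminance conversion; the function names are my own):

```javascript
// Convert one sRGB pixel to its luminance-matched gray, so a design
// can be judged on luminance contrast alone, without hue.
function srgbChannelToLinear(c8) {
  const c = c8 / 255;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function toLuminanceGray(r, g, b) {
  const y = 0.2126 * srgbChannelToLinear(r)
          + 0.7152 * srgbChannelToLinear(g)
          + 0.0722 * srgbChannelToLinear(b);
  // Re-encode linear luminance back to gamma-encoded sRGB gray.
  const enc = y <= 0.0031308 ? 12.92 * y : 1.055 * Math.pow(y, 1 / 2.4) - 0.055;
  const v = Math.round(enc * 255);
  return [v, v, v];
}

console.log(toLuminanceGray(255, 0, 0)); // pure red maps to a mid gray (~127)
```

A real tool would apply this over a screenshot's pixel buffer (e.g. canvas ImageData), but the per-pixel math is the whole trick.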

  • Work across many devices. As a web designer your work will be viewed on cheap & expensive phones, expensive Macs (or Chomebooks), cheap Chromebooks (or Windows laptops), TVs, projectors etc. Therefore it cannot account for the brightness of the display, let alone the surroundings. It has to be a measure from the way the web page is setup.

Cheap or expensive, displays are built to sRGB standards, and often with better brightness. But that's not the point, as the eye adapts to various conditions of light. What is important is to consider how adaptation affects readability (more on that below).

  • Be simple to test. I.e. a web designer needs to be able to test their own work easily.

Most of what I am discussing is simple to implement. (It's not harder to use correct math, for instance.)

A lot of the factors you added in the summary above cannot be accounted for in a web standard (e.g. display polarization, nits).

It can — when I talk of polarization, I talk specifically of web DESIGN: light text on a dark background is "negative polarization" (or, confusingly, positive contrast) and vice versa.

As for nits (cd/m^2), I'm not saying the web standard should specify any particular "absolute" luminance output, but the standard IS already trying to take the environment into consideration.

Also, at least some of the examples you created have the same 'background bias' effect I mentioned here, perhaps you know the name for that effect? I.e. having a different general background behind the area of interest affects the perceived readability. Reading on, I guess this is the 'local adaptation' issues?

Local adaptation, and adaptation in general, need to be part of the design considerations. Dark text on a grey background may pass via the math, but if the grey background is a div with no padding on a white background, the eye adapts to the white, making the dark text on grey hard to read. There is a demonstration of this on the experiments site.

In short, I don't think there is such as a thing as a "revised programatic contrast assessment that is perceptually correct", but I'd love to be wrong. A change would need a lot of real-world testing to ensure it provides better results.

There are definitely better choices than using incorrect math, which is the current state. And while vision and perception are complicated, it is also mostly academic in terms of things like contrast. There is a wealth of research on vision perception and contrast over the last several hundred years that can and should be used to guide this standard. In other words, yes there is such a thing as "programatic contrast assessment that is perceptually correct." It's just a matter of implementing it.

There may be added challenges in modern webpages due to the graphically rich content AND the variety of environments due to mobile devices. But the W3C provides the standards and guidelines not just for web design but for browser software.

(edited for spelling and some clarity issues)

@alastc (Contributor) commented Apr 19, 2019

False fails/passes & ‘incorrect math’: Yes there is lots of research, but any mathematical model is a mapping from how light is measured to how it is perceived.

There are individual differences in perception, so there cannot be a perfect model (that’s what I meant by ‘no such thing’). Otherwise the pink example wouldn’t vary by person.

A different model may improve the fit across a range of visual impairments, but it is not an absolute right/wrong. One model will not fit everyone perfectly, and we should be optimising for people with visual impairments rather than the general population.

If there is a better industry standard model to use for measuring contrast, great, let’s test it across a range of people.

Secondary notes

There are already tools for greyscaling a screenshot, but we need to be able to assess text (and certain graphics) and show a pass/fail individually.

Web content can be defined to use color spaces other than sRGB, but we are planning to standardise testing of contrast to sRGB as a lowest common denominator.

@Myndex (Author) commented Apr 19, 2019

I am not optimistic about the chances for wholesale replacement formulas for 2.2. That is possible for 3.0.

That makes sense, there are probably some incremental changes that might be helpful as well as "leading a path" to a larger change.

This standard in particular has become part of government regulations, for instance.

Yes. I am one of the actors in helping that happen.

Ah excellent! However, that also means that the standard needs to be solid and unimpeachable. I'd like to help to get to that point.

There was some user testing associated with the validation of the 2.0 formula. I could not quickly find a cite for that. My recollection is that the hard data pointed to a ratio of 4.65:1 as a defensible break point.

Hmmm. I'd love to see this data. I believe you that the ratio from the data is higher than in other standards, as the equation being used overstates the contrast ratio in addition to being perceptually incorrect.

The working group was close to rounding that up to 5:1, just to have round numbers. I successfully lobbied for 4.5:1 mostly because (1) the empirical data was not overwhelmingly compelling, and (2) 4.5:1 allowed the option for white and black (simultaneously) on a middle gray.

I'm not sure it does; as written, the equation is overstating contrast for darker colors. It appears the equation does not take system gamma gain into account, nor the floor of 8 cd/m2 as the minimum luminance for contrast (NASA). More discussion to come on these issues.

QUESTION: it would be helpful to get online access to certain ISO standards, as well as papers used in the current specification — is that possible?

Thank you!

@patrickhlauke (Member) commented Apr 19, 2019

Ah excellent! However, that also means that the standard needs to be solid and unimpeachable. I'd like to help to get to that point.

Just for context though, the formula was already in WCAG 2.0, and that's been out since 2008 https://www.w3.org/TR/WCAG20/ ... so just a word of warning that it's something deeply enshrined and not something that can be changed quickly or easily. It would take a few years at least...

@Myndex (Author) commented Apr 19, 2019

Just for context though, the formula was already in WCAG 2.0, and that's been out since 2008 https://www.w3.org/TR/WCAG20/ ... so just a word of warning that it's something deeply enshrined and not something that can be changed quickly or easily. It would take a few years at least...

Hi @patrickhlauke, yes, I do realize this and recognize the issue. I'm certainly not expecting any overnight major change! As I mentioned in one of my posts, I am looking at potential incremental changes that can lead to a more solid solution.

At the same time there are a lot of other related standards that use different models and compliance parameters. Nearly all of them are using Weber, but there are newer more useful models emerging.

I mentioned some of the reasons I'm motivated for some positive changes here — and to be clear, my intent is to assist in finding easy and workable solutions to the issues I've outlined, and perhaps others.

As CSS, JavaScript, and HTML develop greater feature sets, I've noticed a disturbing trend toward sites that are "fancy but less usable." So much so that many browsers now have a reader view to turn off all the crap!!!

@WayneEDick commented Apr 19, 2019

@Myndex (Author) commented Apr 20, 2019

There are individual differences in perception, so there cannot be a perfect model (that’s what I meant by ‘no such thing’). Otherwise the pink example wouldn’t vary by person.

Pink/white relates to hue contrasts. The more widely accepted model is luminance contrast, as it is connected to contrast sensitivity. The CS threshold is 1% to 1.6% over the wide range of 8 cd/m2 to over 500 cd/m2, as shown in study after study, including for many visual impairments. There are of course some impairments that directly affect CS/CSF, but contrast sensitivity is separate from visual acuity.

Visual acuity is helped more by size than contrast. Perceived contrast is more complex than a ratio between two colors, as it is substantially affected by adaptation, local adaptation, chromatic aberration, and other issues.

CHROMATIC ABERRATION: This relates to an optical issue with any lens system, including human eyes. Light at different wavelengths is "bent" differently through a prism, which is why a prism creates a "rainbow". Lenses are a form of prism, and as a result blue light through a lens lands in a different spot than red light. (It is theorized that this is why our eyes evolved to have red cones in the center and blue cones on the periphery.) This is also a reason that red (#F00) and blue (#00F) look wacky together: the shared red/blue edge focuses to different places on the retina. The hot pink you mention is #FF00FF, so red and blue with no green, the red portion of the text edge focusing differently than the blue edge. So, for instance, working with colors that have a high blue content needs care, as NASA discusses:

https://colorusage.arc.nasa.gov/blue_2.php

That NASA site covers a lot of related material, and it is all about user interface design considering adverse viewing circumstances.

If there is a better industry standard model to use for measuring contrast, great, let’s test it across a range of people.

The "standard" has been Weber since the 1800s. There are better models now, and particularly as web pages are a "unique environment" in that they are displayed using certain standards, there are definitely better ways to assess perceptual contrast.

Secondary notes

There are already tools for greyscaling a screenshot, but we need to be able to assess text (and certain graphics) and show a pass/fail individually.

Okay, but as I have demonstrated and discussed, the ratio of two colors by themselves in isolation will not give you a complete answer.

Web content can be defined to use color spaces other than sRGB, but we are planning to standardise testing of contrast to sRGB as a lowest common denominator.

Hmmm, no you can't. The standard is still sRGB. The CSS Color 4 working draft does list additional working color spaces as something desired for future implementation, but that's a pretty horrible idea at today's level of technology. sRGB is ideal for 8 bit. With any larger colorspace you start needing at least 10 bit pretty quickly. I see talk of linear_sRGB or linear_Rec2020; then you need at least 16bit_HALF(FLOAT). Double the bit depth and you double the data size, and pages are ALREADY overbloated and slow. And you'll NEVER see the benefits under typical ambient conditions and on cheap devices.

To wit: bigger color spaces do absolutely ZERO to assist impaired vision. There is ZERO luminance contrast difference (and it's worse if you stay in 8 bit: a super-big space like ProPhoto is GARBAGE in 8 bit, and will provide WORSE contrast gradation, i.e. causes banding, due to the ginormous delta E errors). NOT TO MENTION the fact that ProPhoto uses IMAGINARY primaries, meaning that values like #00FFFF DO NOT EXIST in ProPhoto as something you can see.

Most mobile browsers and many desktop browsers still do not support any form of color management. sRGB is the standard, and is expected to remain that way for the foreseeable future. The CSS syntax for alternate colorspaces is not widely supported.

FOR ACCESSIBILITY: 8 bit and sRGB (and Rec709) is the ideal standard with present technology.

Yes, there are some emerging color spaces like Rec2020 that are bound to make a difference someday but all these alternate color spaces have different transfer curves and different primary coordinates. Converting between spaces is computationally expensive, which is why most mobile browsers are NOT color managed and instead are sRGB "compliant".

I have high end $$$ wide gamut monitors (which are probably what caused my early cataracts), but those are rare — sRGB/Rec709 define nearly all distributed content be it Web or Broadcast worldwide, and in a non-color managed way. If you use monitors OTHER THAN sRGB/Rec709, then you MUST have color management to transform colorspaces, and that is computationally expensive. I discuss some of this in some articles I've written over the years reprinted here: https://www.generaltitles.com/helpfiles/13-q-a-blog/colorspaces-and-file-types
but still, Poynton's Gamma and Color FAQ are a good and easy read first: https://poynton.ca/GammaFAQ.html

Thank you for the comments!

Andy

@Myndex (Author) commented Apr 20, 2019

Here are my questions. When you use the term spectral, is it in the sense of functional analysis?

Hi @WayneEDick !

This is part of the CIE 1931 standard on luminance, the Y in CIEXYZ. The standard is spectrally weighted relative to the LMS cones (red green blue cones) that make up human trichromatic vision.

The coefficients 0.2126, 0.7152, and 0.0722 are part of the Rec709 standard for HDTV, and sRGB is derived from that standard. Both Rec709 and sRGB use the same color primaries and white point — the only practical difference is the transfer curve (effective gamma) is a little different between sRGB and Rec709, the reason being that Rec709 gamma is relative to a dark living room and sRGB is relative to a brighter office type setting.

Charles Poynton's Gamma FAQ is really the best crash course on this.[1]

While gamma is not a linear function it is differentiable and can be represented piecewise by line segments without?

Gamma is one form of a "transfer curve" to transform a particular color value from one colorspace to another. It is often represented "piecewise by line segments" as what we call a LUT (look up table). LUTs are very common in the film/television industry because the color represented in negative film is sufficiently complicated that it can't be accurately represented with a simple equation or matrix. 3D LUTs are used to create accurate transforms through various color spaces in the post production process.

Some color spaces like Adobe98, use a pure exponential transfer curve.

BUT ALSO: the sRGB and Rec709 transfer curves in their "correct" implementation use an exponential curve attached to a linear region near black. The linear region has a number of purposes and motivations, including reducing camera noise near black and avoiding math issues with pure exponential curves near black.

Are you saying the W3C representation does not use enough line segments in its approximations?

It uses "none" because luminance is linear, as in a straight line. Luminance has no gamma (or technically, the gamma is 1.0). Luminance is proportional to light, and light in the real world is linear.

The human eye is NOT linear; photopic vision has a gamma of around 2.4 to 2.5 (though vision is more complicated due to adaptation, scotopic (rod/night) vision, etc.)

This is the CIE L* curve

L* is based on human perception. Luminance (not shown) would just be a straight diagonal line from 0,0 to 100,100.
(figure: the CIE L* curve)

Or are you saying that gamma does not come into the equation?

Right now human perceptual contrast is not represented in the WCAG "Understanding 1.4.3."

Luminance is derived by first applying the reverse transfer curve to each of the R´G´B´ components, then multiplying them by the coefficients, and then summing them for the total luminance (Y, sometimes shown as L, but NOT to be confused with L*).

THEN they use a simple contrast ratio, Yhi/Ylo, or as they print it, L1/L2. They also add 0.05 of flare to each term: (L1 + 0.05)/(L2 + 0.05).

So here is the point I was getting at: the use of L1/L2 is only useful for very absolute black & white values, because it ignores a lot of what happens with perception of in-between values.

The "standard" math for contrast of TEXT is WEBER CONTRAST which uses the Weber fraction, which is ΔL/L — Weber has been around for a very long time, and most contrast standards and research are based on Weber or Michelson, not simple contrast. Simple contrast is used for example for the contrast of a monitor from maximum black to maximum white, but not for the in between values.

EDIT: Weber contrast is often stated as (Ybg - Ystim)/Ybg, but this can produce odd results. For monitors/displays, try (Ylightest - Ydarkest) / Ylightest

I am NOT saying that Weber is the ultimate solution, but it is what jumped out at me when I was investigating why web contrast calculators were presenting "weird" numbers relative to legibility. This led me on this path of "how did we end up here" which has now morphed into "what is the most useful modern perceptual contrast calculation."

Other notes on WCAG math: The sRGB conversion to luminance is using some incorrect values. The problem is minor and likely has little effect on the contrast issue, but I will show the correct sRGB formula below.

Also, just FYI the coefficients must be applied only after the gamma is removed, but there is an interesting wrinkle here: even the "correct" luminance math does not account for system gamma gain. There is an additional 1.1 or 1.2 exponent applied to the signal by the monitor/display. This is common even in older systems like NTSC, which used a 1/2.2 exponent at the camera, but the CRT display was actually ~ 2.5 resulting in a system gamma gain at final display. Final display gamma can in fact be adjusted by the user with the monitor controls (that adds the uncertainty aspect to all of this). But I did notice that when I added a 1.2 exponent to the resultant luminance, it improved the perceptual uniformity of the resultant reported contrast (at least it seemed to, I have not run a real controlled study yet).

Finally what is your formula precisely including visual factors.

I suggest looking at Weber contrast and Michelson contrast (aka modulation), but also two modern methods that I am investigating and experimenting with: Bartleson-Breneman Perceptual Contrast Length[2], and one I just recently found that apparently is the basis for the Australian accessibility standards, the Bowman-Sapolinski Equation[3], though I'm not certain it can be used on CIE Y (luminance). And then there are methods using L* (perceptual lightness from CIE L* a* b*) instead of luminance; in that case it's not usually a ratio but a difference (L*1 background - L*2 foreground), as L* is perceptually uniform.

So for normalized values of 0 is min and 100 is max:

Y (luminance) 0 = L* 0 = sRGB 0 and Y 100 = L* 100 = sRGB 100
BUT
Y 18.4 = L* 50 = sRGB 46.7

This is because the perceptual halfway point between black and white is not a luminance of 50, but a luminance of 18.4, while on the perceptually uniform L* curve the halfway point is 50. On sRGB it's about 46.7 because, as I mentioned earlier, sRGB has additional system gamma gain. Adding an exponent of 1.102 to Y will put Y 18.4 at sRGB 50, for example (and I'm not saying that necessarily "should" be done, just that's how the math works out for comparison).
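Those correspondences can be checked with the standard CIE Y-to-L* conversion; a sketch (function name is mine, Yn = 100 white point assumed):

```javascript
// CIE 1976 L* from relative luminance Y on a 0-100 scale.
function YtoLstar(Y) {
  const yr = Y / 100;
  return yr > 216 / 24389
    ? 116 * Math.cbrt(yr) - 16      // above the low-light cutoff
    : (24389 / 27) * yr;            // linear segment near black
}

console.log(YtoLstar(18.4).toFixed(1)); // "50.0" -- the perceptual midpoint
console.log(YtoLstar(100).toFixed(1));  // "100.0"
```

So luminance 18.4 really does land at the middle of the perceptual lightness scale, which is exactly the nonlinearity the simple ratio ignores.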

I would like to analyze this. I am a mathematician. Sincerely, Wayne Dick PhD.

That is REALLY awesome to hear, I was hoping a mathematician would get involved. I have some planned experiments this weekend, I'll post more as I progress.

Note: the correct luminance calculation for sRGB -> D65 Y is:

From 8 bit R´G´B´, divide each channel R´G´B´/ 255 to get them 0-1, then
For each channel value C (0-1): if C <= 0.04045 then C_linear = C / 12.92, else C_linear = ((C + 0.055) / 1.055)^2.4
This reverses the gamma resulting in linear RGB, then

R * 0.2126 + G * 0.7152 + B * 0.0722 = Y (D65 luminance)

For your cut and paste convenience, here is the gamma-to-linear portion from my OO spreadsheet:

=IF( R1 <= 0.04045 ; R1/12.92 ; POWER(((R1 + 0.055)/1.055) ; 2.4) )
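For reference, the same per-channel math plus the coefficient sum, as a small JavaScript sketch (function names are mine):

```javascript
// 8-bit sRGB R'G'B' to D65 relative luminance Y (0-1).
function channelToLinear(c8) {
  const c = c8 / 255;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(r, g, b) {
  return 0.2126 * channelToLinear(r)
       + 0.7152 * channelToLinear(g)
       + 0.0722 * channelToLinear(b);
}

console.log(relativeLuminance(255, 255, 255).toFixed(3)); // "1.000"
console.log(relativeLuminance(119, 119, 119).toFixed(3)); // "0.184" -- the ~18% gray
```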

ALSO if you are looking at other color transforms, we are only concerned with D65. Some CIEXYZ and L* a* b* transforms use a D50 whitepoint, which should not be a part of anything we are doing with monitor contrast, it's D65 only.

[1]https://poynton.ca/Poynton-color.html
[2]https://icdm-sid.org/downloads/idms1.html
[3] https://bcdg.hoop.la/fileSendAction/fcType/0/fcOid/330620847500650652/filePointer/330902322495002398/fodoid/330902322495002396/Color_Contrast_Scientific_Report.pdf
[4]http://www.brucelindbloom.com/index.html?Math.html

@Myndex (Author) commented Apr 21, 2019

MORE USEFUL CONTRAST MATH

So today I came across this recent research at NIH (just a couple years ago) that directly states what I have been attempting to explain in the above posts. While they don't mention the WCAG, they do use the WCAG simple contrast ratio (CR) equation as a comparison to their modified Weber equation, including the WCAG's 5% ambient component.

The paper states specifically (emphasis added):

The contrast ratio (CR) based visibility predictions fail because it does not consider the function of the observer’s visual system. Human vision achieves high dynamic range through luminance (retinal) adaptation. As the overall scene luminance increases or decreases, the viewer’s adaptation level normalizes the target to background luminance difference, and perceives the same contrast (contrast constancy). For example, a large absolute luminance difference displayed under high overall luminance condition is perceived to have the same contrast as a lower absolute luminance difference displayed under low overall luminance condition. This characteristic is embedded in the Weber contrast definition.

NIH PAPER: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5489230/

They do modify Weber a little differently than I have, and their results are interesting and provide a further demonstration of the problems with "simple contrast" (CR). There are a couple small caveats I'll discuss after the summary. The paper is a short read, but here's a synopsis:

The WCAG contrast ratio (CR) is: (L_light + 0.05) / (L_dark + 0.05)

The modified Weber is: (L_light - L_dark) / (L_light + 0.05)
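Side by side in code, the two equations look like this (a minimal Python sketch; inputs are WCAG relative luminances in the range 0.0 to 1.0, and the 0.05 flare term is as given above):

```python
def wcag_contrast_ratio(l_light: float, l_dark: float) -> float:
    """WCAG 2.x 'simple contrast' ratio (ranges from 1.0 to 21.0)."""
    return (l_light + 0.05) / (l_dark + 0.05)


def modified_weber(l_light: float, l_dark: float) -> float:
    """Hwang-Peli modified Weber contrast (0.0 to about 0.95),
    per the NIH paper linked above."""
    return (l_light - l_dark) / (l_light + 0.05)


# For max on/off (#FFF vs #000): CR is about 21.0, modified Weber about 0.952
print(wcag_contrast_ratio(1.0, 0.0), modified_weber(1.0, 0.0))
```

Note how the modified Weber converges toward 1.0 for large luminance differences and collapses to 0.0 as the two luminances meet, instead of blowing up as a ratio does near black.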

[Image: Hwang-Peli modified Weber for realistic contrast on monitors]

Log plots of contrast versus ambient light:
[Image: WCAG contrast-ratio plots]

  • CR plots suggest that when a display viewed indoors is moved outdoors into bright sunlight, the visibility reduction of high-contrast letters is much larger than that of low-contrast letters (the opposite of what happens in the real world).

  • CR plots show that the patterns of contrast-ratio change under the same changes in ambient luminance are different for the two contrast polarity cases, again opposite to the real-world case.

Modified Weber log plots of contrast versus ambient light. Note that Weber is not a ratio, but a value of 0 to 1 (which can be multiplied by 100 and described as a percentage, like Michelson):
[Image: modified Weber log plots]

  • The Weber contrast of a target decreases rapidly to zero (0) as the target luminance approaches the background luminance level, while it converges slowly to a contrast of 1.0 as the target luminance departs from the background luminance level, at all ambient light conditions.

  • The impact of ambient light variation is much stronger for positive contrast polarity than for negative contrast polarity. (Note: this report uses contrast polarity, so positive contrast polarity is light text on a dark BG; this is opposite from "screen polarity", where positive means dark text on a light BG, which I used in another post above.)

  • The contrast threshold can be drawn on the plots as a black horizontal line. Any target with contrast (including the impact of ambient light) below the threshold line will not be visible to the viewer.

  • For an image composed of letters of both contrast polarities (Fig. C), it is expected that even for letters of the same contrast, the darkTX-on-lightBG letters disappear first (become lower than the contrast threshold) as ambient light increases. These results on mixed polarity are particularly important, as they suggest an effective way to compensate for ambient light.


Some thoughts:

  1. TERMS: I have seen in the research two different polarity types. Contrast polarity, where black text on a white BG is called NEGATIVE, and screen polarity where black text on a white BG is called POSITIVE. Gee can we be more confusing?

For the purposes OF THIS DISCUSSION THREAD, I want to offer these (hopefully more clear) terms based on acronyms:

WOB: for white on black, any light text on any darker background.
BOW: for black on white, any dark text on any lighter background.

And maybe:

WOG and BOG, where the background is near a middle grey value.

  2. The cited report describes what I have found regarding the perception difference of WOB vs BOW page designs. The implication, as I have stated earlier, is that there needs to be a separate accounting or specification depending on design polarity.

  3. The report also shows that BOW is the most stable under ALL ambient conditions, while WOB is more susceptible to brighter ambient conditions, and the middle-grey-background version (C) emphasizes this even more. This directly supports my earlier suggestion that the minimum value for the BRIGHTEST element be around #AAA (or somewhere between #999 and #AAA, TBD).

Next Post: Path Forward.

@Myndex (Author) commented Apr 21, 2019

PATH FORWARD

Based on all the research and discussion THUS FAR, I see the following general path forward as far as changes and pull requests for the WCAG:

WCAG 2.2 (and possible errata for 2.1/2.0)

  1. Clearly, correcting the algorithm is too great a change for an incremental version as an algorithm change will necessitate a change in the pass/fail specification.
  2. There are still incremental changes that can be added to 2.2 that will be helpful, such as a minimum luminance for the lightest element and a maximum for the darkest element. (Essentially, depending on the A/AA/AAA performance level, the brightest element should be no darker than 35% luminance, which is 63% sRGB, i.e. #A0A0A0, and the darkest element no lighter than 25% luminance, which is 53% sRGB, i.e. #888888, all while still maintaining the contrast as currently specified.)
  3. Another incremental change to address shortcomings of the current algorithm is to make WOB require a higher contrast than BOW, and similarly:
  4. Address local adaptation, which can be helped by a minimum-padding spec, related to 1.4.8 and 1.4.12 for visual presentation and line spacing. (Essentially saying that if a containing div background and the text therein are substantially darker than the overall BG of the larger container, then padding equal to or greater than the leading needs to be incorporated in the darker div element.)
  5. On visual presentation, emphasize user-adjustable size, which I have identified as being more important than contrast. Make it a FAIL if a webpage does not allow zooming in, and a FAIL if a webpage overrides the user agent's default font size or zoom level.
    5b. RELATED: Users should be able to set the default screen height of 1 REM. If page designers design around that base unit, it would allow users to set a larger base font (say 20 instead of 16) and have the rest of the page design maintain relative proportions. However, as a mandate there are issues due to responsive-design concerns. The more important issue is disallowing zoom lock on pages!! Personally I find nothing more irritating than a page on my phone that has text too small and won't allow me to zoom in. That's a HUGE problem.
  6. One final incremental change would be to use the correct values in the relative luminance equation so it is in line with the sRGB standard (which it currently is not).
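To make the bounds in item 2 concrete, here is a minimal Python sketch (the function names and default thresholds are illustrative only, not proposed normative text):

```python
def srgb_to_luminance(hex_color: str) -> float:
    """Relative luminance per the WCAG 2.x definition.
    (Note: WCAG uses the 0.03928 linearization threshold; the official
    sRGB standard uses 0.04045, the discrepancy mentioned in item 6.)"""
    r, g, b = (int(hex_color.lstrip('#')[i:i + 2], 16) / 255.0
               for i in (0, 2, 4))

    def lin(c: float) -> float:
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)


def passes_proposed_bounds(lightest_hex: str, darkest_hex: str,
                           min_light: float = 0.35,
                           max_dark: float = 0.25) -> bool:
    """Hypothetical check of the proposed bounds: brightest element no
    darker than ~35% luminance (#A0A0A0), darkest element no lighter
    than ~25% luminance (#888888)."""
    return (srgb_to_luminance(lightest_hex) >= min_light and
            srgb_to_luminance(darkest_hex) <= max_dark)
```

As a sanity check, `srgb_to_luminance("#A0A0A0")` evaluates to about 0.35 and `srgb_to_luminance("#888888")` to about 0.25, matching the percentages quoted above.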

WCAG 3.0

  1. At the very least, change the contrast algorithm to a modified Weber contrast, if not another perceptually based contrast assessment.
  2. Modify the standard for contrast minimum & enhanced to use the values created by the new algorithm.
  3. To help prevent confusion, the new algorithm should create contrast values as a percentage so the values are clearly different than those of the WCAG 2.0 standard which uses a ratio.
  4. As a new algorithm/method may reduce or eliminate the need for some of the incremental changes listed above for 2.2, make sure those are altered as needed. For instance, a correct algorithm/method may take positive/negative polarity in design into account.
  5. EVALUATION: If funding can be obtained, a controlled laboratory study would be ideal as part of the evaluation. But a wide, public study using a web-page-based perception test should also be included. While such a study would offer less in the way of "controlled" conditions, it would provide more real-world conditions, including how users have their browsers and devices set up. This wide study would be cheap and would also create a huge dataset. I'd hope to see 50,000 or more responses, which should cancel out the far outliers and provide a real-world basis for standards.

THREADS: Should we create a new thread for 3.0, and then set the discussion here just to the incremental changes I've proposed for 2.2?

Thank you all again for all comments and thoughts.

Andy

@Myndex (Author) commented Apr 29, 2019

EXPERIMENTAL CONTRAST EQUATION

SAC v0.0a ALPHA TEST

The goal is that the relative perceived contrast for each chunk of text should seem the same relative to its local background. With all of the examples below, the SAC experimental math reports the same contrast value. The current WCAG simple contrast value is also listed for each block for reference purposes. Both positive and negative modes, and larger backgrounds of #FFF, #777, and #000, are used:

Comments welcome. When commenting, please mention the device used for viewing and the ambient light conditions.

#FFF Surround

[Screenshots: SAC test patches on a #FFF surround]

#777 Surround

[Screenshots: SAC test patches on a #777 surround]

#000 Surround

[Screenshots: SAC test patches on a #000 surround]

This Live Experiment page: https://www.myndex.com/WEB/W3ContrastissueTMP

Render notes: Pairs by SAC, L* estimated minimum value, f - 0.05, scale 8

@mraccess77 (Contributor) commented Apr 29, 2019

Regarding surrounding background colors -- these can change for responsive and fluid sites, so this would need to be taken into account at each zoom level (100, 110, 120, 130, etc.) and at each breakpoint variation of a page, which would complicate testing somewhat. It's something to keep in mind when deciding whether to include it. WCAG 2.1 has a conformance requirement that each variation of a page must pass. Contrast issues can already vary in these situations today, but the size of the padding around the text and surrounding content is likely to change even more in responsive variations and in fluid sites with zoom.

@Myndex (Author) commented Apr 29, 2019

Regarding surrounding background colors -- ...snip.... Contrast issues can already vary in these situations as well today but the size of the padding around the text and surrounding content is likely to change even more in responsive variations and fluid sites with zoom.

Hi @mraccess77

Yes, responsive design is certainly a factor. It is partly why I mentioned at one point that separating accessibility specs for mobile vs desktop might be useful.

Nevertheless, in responsive design, things like padding can be relative and responsive. The test page right now is not a responsive design — and perhaps I need to make a responsive version — but the padding is set in em, so if the base font size is relative to the screen, then the design elements sized in em will all follow. While I have not conducted a validation study, I'd suggest that padding relative to font size is the best practice.

In the examples in my most recent post, vertical padding is probably the tightest it should ever be at 0.5em and even so I think that's too tight when against the #FFF background. This points to the main issue, and that is where the div containing text, and the contrast of text to that containing div, is substantially different than the overall page luminance. So with a dark div and darker text against a white page, the eye adapts to the brightest white, and padding needs to increase — this is something any skilled designer knows, but it has a direct effect on accessibility too.

@mraccess77 (Contributor) commented Apr 30, 2019

Hi @Myndex distinction between mobile and desktop is really not relevant for most folks today. Most low vision users zoom in and get the responsive versions of a site -- I use 800x600 on my desktop and often see the mobile version. People use laptops in the sun and outside, etc.

On a tangential note related to adaptation -- my eyes are drawn to white, so a white/bright bar at the top of a presentation/document will draw my eyes away from the text to the bright area by default. While this is vision related, I'd suspect that there are similar distraction attributes for some people with print disabilities that draw their attention away from the text as well. So it might be worth discussing factoring other print disabilities into the contrast calculations, if there is research there to guide us.

@Myndex (Author) commented May 1, 2019

Hi @Myndex distinction between mobile and desktop is really not relevant for most folks today. Most low vision users zoom in and get the responsive versions of a site -- I use 800x600 on my desktop and often see the mobile version. People use laptops in the sun and outside, etc.

Hi @mraccess77
Fair enough, though I don't mean just relating to contrast. For instance, some sites lock or block zooming in, which I find really unacceptable.

One of the things I've been talking about in this thread, and research supports, is that zoom level/font size is the real critical factor.

No website should be able to override my defined zoom level or root font size. In fact, as a designer, I'd submit that page design should be based around the root font size, so a user with an impairment can make a single change to their root font size that will affect all pages (and apps, for that matter).

And personally, for displays, I have a terrible time here on GitHub. The background is way too white and causes significant vision problems for me. Bright white backgrounds cause adaptation to a brighter light level and pupil contraction, which increases sharpness (good, right? NOT!). The problem with increased sharpness for me is that the vitreal floaters in my eye become more distinct and thus block vision MORE. Too high a contrast against a bright background is terrible for me, for instance.

On a tangential note related to adaptation -- my eyes are drawn to white, so a white/bright bar at the top of a presentation/document will draw my eyes away from the text to the bright area by default. While this is vision related, I'd suspect that there are similar distraction attributes for some people with print disabilities that draw their attention away from the text as well. So it might be worth discussing factoring other print disabilities into the contrast calculations, if there is research there to guide us.

You mean output to paper? Today LRV guides physical media, and that's actually where most things relating to perception (like Weber) began (Weber was circa 1860). As LRV and luminance are essentially synonymous, much of that work can be applied to displays, though to be sure, display perception has several unique qualities that more modern approaches can model.

Cheers

@alastc (Contributor) commented May 1, 2019

zoom level/font size is the real critical factor... page design should be based around root font size...

Let's keep focused on the contrast in this thread, we have other guidelines for sizing, and so long as contrast is a factor we can test for and improve we will keep it in scope.

As a sidenote about the sizing methods, I've written about that a lot, here's a summary of EMs vs Pixels & zooming.

Second sidenote on your personal experience: Have you tried browser plugins that adjust the colors for you? E.g. with Change colors you can create a standard color over-ride, or with Stylus (and some CSS know how) you can create custom over-rides per site.

@Myndex (Author) commented May 1, 2019

zoom level/font size is the real critical factor... page design should be based around root font size...

Let's keep focused on the contrast in this thread, we have other guidelines for sizing, and so long as contrast is a factor we can test for and improve we will keep it in scope.

Yes of course, I just want to emphasize that size and contrast are interdependent, not independent.

As a sidenote about the sizing methods, I've written about that a lot, here's a summary of EMs vs Pixels & zooming.

Thank you will read.

Second sidenote on your personal experience: Have you tried browser plugins that adjust the colors for you? E.g. with Stylus (and some CSS know how) you can create custom over-rides per site.

I'm using Safari, and the Safari extension that does that (Colorum) cannot change the Github pages.

This is one of those examples where the PAGE does not allow CHANGE to help with accessibility.

I have not tried Github on Chrome with the plugin you mention, will try that shortly. But Github blocks that with Safari and the safari extension. Grrr! And this is partly what I am talking about - no site should be able to lock me out of making an appearance change that I need for accessibility reasons, yet they do, ALL THE TIME.

EDIT: It works in Chrome!!! Wow, thank you, my eyes thank you. I can actually read GitHub now! Guess I'll need to use Chrome for GitHub; not sure why Safari has been unable to do it.

@patrickhlauke (Member) commented May 1, 2019

no site should be able to lock me out of making an appearance change that I need for accessibility reasons, yet they do, ALL THE TIME.

for what it's worth, that's not a failing of the sites or their authors. it's a failing of how some of the extensions/plugins inject their custom styles and the fact that browsers themselves have removed the ability to properly define user styles, which would take precedence over page styles.

@alastc (Contributor) commented May 2, 2019

If it works on some sites but not others (like Github) it is probably prevented by the Content Security Policy, but a good browser/extension combo is not affected by that for the reasons Patrick mentioned.

@Myndex (Author) commented May 2, 2019

Thank you @patrickhlauke and @alastc — that makes sense... I am actually concerned about how these third party extensions are accessing page data including secure information.

It sounds like browser developers need some "standards" then — way way off topic from here — I guess that'll be the next crusade.

OS X does have invert and contrast settings in system prefs, but those affect the entire system and are really cumbersome to use.

RELATED:
One of the perception problems I've discovered using multiple systems and browsers over the last week of testing is that the anti-aliasing of small text varies quite a bit per system (and screen pixel density), so small text often ends up much, much lighter (and lower contrast) than it was specified in the CSS. This is more a browser-rendering/system-display problem than a design problem, though designers should be made aware in terms of a "guideline." Nevertheless, another point in how complex the issue really is.

A

@patrickhlauke (Member) commented May 2, 2019

designers should be made aware in terms of a "guideline."

but designers have no influence over this, as it comes down to individual users and how these users have set up their system (similar to how designers have no influence over other factors, like whether or not a user's monitor is properly calibrated, uses an appropriate color profile, is set too bright or too dark, etc).

the only appropriate action here seems to me an informative note somewhere (perhaps in the understanding documents) that explains that regardless of any contrast calculation (even with the updated algorithm) things may not work perfectly for every user due to variations in actual rendered output / viewing conditions.

@alastc (Contributor) commented May 2, 2019

so small text often ends up much much lighter (and lower contrast) than it was specified in the CSS.

This looks closely related to issue #665, the discussion there is useful.

@mraccess77 (Contributor) commented May 2, 2019

Regarding overwriting page styles -- even with a browser extension it is very hard to overwrite page styles without throwing out author styles completely. This is because of the CSS specificity rules that govern the hierarchy, with IDs ranking highest, etc., even when !important is used.

If you throw out page styles altogether then you lose so much of the feel of the sites and other visual clues that are used for grouping elements, etc. and so it's not really fair to a low vision user to lose all of this information that they didn't have an issue with to gain contrast on another element that had less than sufficient contrast. That's why despite some people saying we don't need a contrast SC anymore because users can just overwrite the colors -- I disagree with them.

Most mobile browsers don't support extensions and so you can't get user styles in many of those situations as well.

@Myndex (Author) commented May 3, 2019

designers should be made aware in terms of a "guideline."

but designers have no influence over this,

Yes they do: they choose fonts and sizes. Using too thin/light a font at too small a size is going to have anti-aliasing problems which reduce apparent contrast. What I am suggesting is a "guideline" noting that small and light fonts render with less contrast due to anti-aliasing. Something like:

"It is important to remember that even if a color pair is a PASS by the contrast checker, that small and thin fonts may be affected by antialiasing effects that will reduce perceived contrast".

@Myndex (Author) commented May 8, 2019

NEW EXPERIMENTS UP — CE10 and CE11

This examines more in detail the Modified Weber and the SAPC 3 equations. SAPC 3 has many more patches to examine as it is appearing most uniform at the moment.

There are two versions: dark text on a light BG (CE10) and light text on a dark BG (CE11).

NOTES:

  1. THIS IS AN ALPHA LEVEL TEST — WORK IN PROGRESS! Among other things, these pages are NOT responsive, and intended for desktop only.

  2. The FONT for these tests is Avenir, and the CSS sheet does have a font-face tag set. But if it is overridden, or the font does not look like the sample below, please let me know. I'll probably switch to something like Arial for the wide-area test. I've been using Avenir because its "normal" is fairly thin and its "bold" is fairly bold. (The normal is the font's "medium" and the bold is its "black".)

  3. The SAPC3 percentages are scaled with the aim of making an "easy to remember" set of values:

  • 100% — Similar to the 7:1 ratio for AAA Enhanced contrast.
  • 80% — Similar to the 4.5:1 ratio for AA contrast.
  • 70% — (Intended for large/bold text contrast.)
  • 60% — Similar to the 3:1 ratio for "non-text" contrast.
  • 8% to 50% are shown for analysis of thresholds in various lighting conditions. HOWEVER, it is also clear that the overall DIV is very well defined at 25%.

  4. Each DIV has a very large border - this is to emulate the BACKGROUND the DIV is on (i.e. consider the contrast between the DIV contents and the border, and NOT between the border and the overall page in this case). The OVERALL BG of the page at #777 is not part of the judgement criteria.
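The target scaling above, collected as a simple lookup table (a sketch; the SAPC 3 percentages are experimental values from this thread, not any standard):

```python
# Experimental SAPC 3 target percentages and the WCAG 2.x
# levels they are intended to roughly correspond to.
SAPC3_TARGETS = {
    100: "similar to 7:1 (AAA enhanced contrast)",
    80: "similar to 4.5:1 (AA contrast)",
    70: "intended for large/bold text contrast",
    60: "similar to 3:1 (non-text contrast)",
}

for pct, meaning in SAPC3_TARGETS.items():
    print(f"{pct}% -- {meaning}")
```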

A CE10 sample:

[Screenshot: CE10 sample patches]

A CE12 sample:
EDIT: CE12 is a more uniform test of light text on dark than CE11. CE12 also demonstrates threshold levels more effectively. Ignore CE11.

[Screenshot: CE12 sample patches]

Please comment or questions!

Thank you,

Andy

@alastc (Contributor) commented May 8, 2019

Hi @Myndex,

Thanks, I'm having a look, just trying to work out how I should try to evaluate the results. (I'm not really target audience, but as an example.)

For example, in CE12 at 80%, the first one appears to have a bit more contrast than the second, although when I look away and look back I sometimes make a different choice! The last one at 80% appears the most contrasting to me, with the really white text. (I know the background is lighter there, but it still stands out more to me.)

The text size within each doesn't particularly affect me; I just have to focus a little more on the sub-0.7rem lines. Actually, looking at the 60%-or-less examples, I do struggle at under 0.8rem.

Is the idea that people rank by contrast, and we see which measure matches that best?

Or perhaps: What is the smallest line you can comfortably read?

@Myndex (Author) commented May 8, 2019

Hi @alastc

Thanks, I'm having a look, just trying to work out how I should try to evaluate the results. (I'm not really target audience, but as an example.)

Everyone is the target audience for this, actually. This is not to evaluate any particular "standard criteria". STEP ONE in solving this issue is finding an equation that is "more perceptually uniform" across a RANGE of lightness/darkness.

That is, For a given series of dark-to-light samples at a given percentage, each one should seem "perceptually" the same contrast.

Such an equation is "impossible" because room ambient light and monitor brightness interact. This is seen most easily in the threshold-level tests (at the bottom of the pages). On CE10 (dark text on light), go near the bottom of the page, for instance to the 18.5% series (or even the 8% series); as these are close to threshold, they are most "obviously" affected by room light and screen brightness. The darkest samples and the brightest samples will be more or less readable based on screen brightness and room light, and not at the same time unless the screen and room light are "exactly ideal".

For example, in CE12 at 80%, the first one appears to have a bit more contrast than the second,

Are you looking at the MODWEBER 80% or the SAPC 3 80%?

I think I just realized I need to put serial numbers on all of these so we can communicate about each! Oops (like I said this is an ALPHA test).

although when I look away and look back I sometimes make a different choice! The last one at 80% appears the most contrasting to me, with the really white text.

This is called ADAPTATION. And also points out how total screen luminance is a big part of the perception of contrast.

If you look away, you adapt to something ELSE, and then look to the screen, and it seems different due to different adaptation effects.

(I know the backround is lighter there, but it still stands out more to me.)

The "MODWEBER" light text on black at 80 will look more contrasty at the lighter value, and is a little less uniform than SAPC 3. Scrolling down to SAPC at 70%, each test patch should seem closer.

Still the darkest test patches will have the greatest variance due to a variety of conditions of light adaptation etc. The darkest patches are a big part of my focus as they are what is most wrong with the current math assessment.

THE OVERALL IDEA

The main idea here is to find a setting/equation that gives a similar, relative, perceptual contrast across an entire range of conditions of light, screen brightness, etc. By doing this, any two numbers plugged into the equation will give a reasonably predictable answer on the resultant perceived contrast.

The text size within each doesn't particularly affect me, I just have to focus a little more at the sub 0.7rem lines. Actually, looking at the 60% or less examples I do struggle at under 0.8rem. Is the idea that people rank by contrast, and we see which measure matches that best?

Using the smallest text as a "place to look": does every patch in a group at a particular contrast setting seem equally readable?

Or perhaps: What is the smallest line you can comfortably read?

This is less about absolutes like that (at least right now — absolutes will be determined by experiments using a test with test subjects).

Looking at CE10 and CE12, I want to find math where each patch is equally readable (perceptually uniform) at a given target contrast.

Note that due to 8-bit resolution, the percentages will all be a little off the exact target; that's not terribly important. I am mainly looking to define math that returns perceptually uniform results over a range of dark/light at a given percentage.

@Myndex (Author) commented May 8, 2019

NOTE: I removed the MODWEBER tests as I'm concerned they were causing confusion.

They will be placed in a separate document.

@Myndex (Author) commented May 9, 2019

On some initial perceptions:

Hi @alastc
From the other issue as it relates here:

However, whether there should be a guideline to prevent maximum contrast is a slightly different question. There is a certain amount of user control that is easier to apply to too-much contrast, compared to not-enough. If someone made a solid case for a guideline about too much contrast it would be considered.

This is certainly part of the process in the experiments for #695 — there are existing guidelines and standards as well, though. FAA Human Factors specifies the following:

  • 3:1 — MINIMUM contrast.
  • 7:1 — PREFERRED or recommended contrast
  • 15:1 — MAXIMUM contrast.
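Expressed as a simple range check (a sketch; the input is a WCAG-style contrast ratio, and the classification labels are mine, not FAA terminology):

```python
def faa_contrast_check(ratio: float) -> str:
    """Classify a contrast ratio against the FAA Human Factors figures
    quoted above: 3:1 minimum, 7:1 preferred, 15:1 maximum."""
    if ratio < 3.0:
        return "below minimum"
    if ratio > 15.0:
        return "above maximum"
    return "preferred" if ratio >= 7.0 else "acceptable"
```

Note that, unlike the WCAG 2.x criteria, this treats contrast as a bounded range with an upper limit as well as a lower one.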

On the subject of maximum contrast, as I have different impairments in each eye I can discuss my own issues as they relate to this.

My Vision and Experiences With These Tests

My left eye, with a Symfony IOL implant, has a substantial vitreous detachment and floaters, some of which tend to "hang out" right in front of the fovea. The right eye, with a CrystalLens IOL, has developed a membrane (soon to be removed by YAG), so at the moment that eye has both lower contrast and increased scatter, similar to the cataract the IOL replaced.

YUK!!

But it is helping me evaluate these conditions of contrast (I am trying to get through a lot of this before the membrane is removed in a couple weeks, LOL).

In short, my personal experiences:

  1. Excessive contrast causes SCATTER, glare, and increased chromatic aberration. While counterintuitive, too much contrast will actually cause a decrease in acuity, especially in older eyes where scatter is an increasing concern.
  2. My Symfony IOL in the left eye is of the type where higher contrast causes significant artifacts. Big giant halos around car headlights, for instance.
  3. The floaters in my left eye are most prominent and problematic with dark text on a white background. The adaptation caused by the bright background results in the floaters becoming sharper and higher contrast, increasing the blockage effect. Light text on a dark background helps this. Dark text on a medium background is actually helpful as well.
  4. The right eye is exhibiting poor contrast, bad scatter, and visual acuity issues. If I remove my glasses it is the equivalent of about 20/200+. I can tell you from my experiments, and this is supported by other research, that higher contrast creates other problems.
  5. Based on current experiments CE10, the "maximum contrast" DIV (146%) is much worse in terms of acuity than the 70% series. I even found the 100% series too harsh, though those are set at essentially the AAA Enhanced standard.
  6. With that in mind, I will tell you that once above about 40% contrast in CE10, increasing contrast does not help in improving acuity. Without glasses, to experience the worst impairment, font size and weight are the greatest determining factors. Because those test patches have a variety of sizes and weights, I can tell you for myself based on these observations today:

The following observations were made with a MacBook (glossy screen) approximately 12 to 14 inches away, with high room lighting reflecting on the screen, and screen brightness set to less than the halfway point.

The following contrast figures are using the output of the SAPC 3 method, measured as a percentage.

Right Eye Only no glasses:

  • The line at 2rem becomes readable all the way down to 35% for all patches, and the mid-grey patches at 25%. There is no improvement in readability at 50% and above. At 80% and above, chromatic aberration and scatter cause multi-colored multiple images that make reading harder than at lower contrasts.
  • The BOLD text of the line at 1.5rem becomes readable at around 45%. The thin part never becomes readable at any contrast. Similarly, CA and scatter cause problems at 80%+.
  • No other lines are readable regardless of contrast.

With BOTH eyes, glasses off:

  • The line at 2rem becomes readable all the way down to 8% for the mid-to-dark patches, and is barely readable on all patches at 18%. At 25%, the 2rem line is easily readable. There is no improvement in readability at 40% and above. At 80% and above, chromatic aberration and scatter cause the letters to seem to "vibrate".
  • The 1.5rem line becomes somewhat readable at 35%, and at 50% lines from 1.5rem down to (maybe) 1.2rem become readable, more or less, in the darker patches. Above 70%, they vibrate or have stronger double images.
  • No other lines are visible regardless of contrast.

WITH glasses and both eyes:

  • I can read the 2rem line at all contrasts including all 8% patches.
  • I can read the 1.5 to 1.1rem lines starting at the darker 8% patches.
  • I can read down to 0.9rem on the few darkest of the 18% patches.
  • At 45% on the darker patches I can read all lines, including 0.5rem (8px).
  • At 50% I can read all lines on all patches at all brightnesses. No higher contrast causes any improvement, though I find the darker patches are generally better.
  • At 80% and above, the letters gain a noticeable color halo.

As I read more existing research as well as conduct my own studies and experiments, it is clear that contrast is just one part of the overall considerations for visual accessibility.

  • Visual accessibility is inseparable from font-size and font weight.
  • Contrast needs to be "within a range" with a lower limit and an upper limit to support best accessibility.
  • For practical purposes, the "lower limit" for contrast is most critical. The lower limit depends on the subject/target size and weight values.
  • As such, contrast specifications are inseparable from the font size and weight.
  • Contrast is also tied to local adaptation, surround effects, (those are affected very much by the padding in an text container), and relative dark/light of the elements in question.
@Myndex (Author) commented May 9, 2019

Interim thoughts based on the present research. These are "alpha" thoughts, not absolute conclusions by any means.

Along the way through experimentation, observation, existing research, and my own design experience, I find the following items are interdependent as they pertain to legibility:

Within Designer's Control

  • Font Size. Size is the single most important criterion for legibility. A challenge for standards purposes is that different fonts can render at inconsistent sizes relative to "1rem", a problem made even more complicated by responsive design needs and a vast array of device sizes. For the purposes of size, it is the vertical size as rendered on screen that is the critical factor.
  • Font Weight. Extremely thin fonts have their contrast reduced by antialiasing, which can substantially lower the effective contrast of small or thin glyphs. Very bold fonts will often appear more contrasty than normal fonts. All of this is complicated further because every font design has unique attributes regarding its weight. Included in weight considerations is the aspect ratio of the font.
  • Luminance contrast of the font color relative to the immediate surrounding background color. Luminance contrast is more important than color contrast for legibility (shown below). Nevertheless, once an acceptable contrast level is reached, increasing contrast does not help acuity; in fact, too much contrast can cause additional problems. Contrast is the main thrust of this issue thread, with the aim to create a method to programmatically evaluate contrast that returns an assessment that is perceptually accurate across the full range of luminance.
  • Padding size around the text, when the text container color is significantly different from the larger page luminance, creating a secondary contrast issue. This relates to the Bartleson-Breneman effect, aka surround effects. It also addresses light adaptation to a degree (see "Relative Luminance" below). Included in padding considerations is spacing for characters, words, and lines.
  • Secondary Design Elements. Things like drop shadows, borders, outlines, and other effects. These are listed last as they can be either helpful or detrimental to legibility, but are particularly hard to assess programmatically.

Outside Designer's Control (User Related)

  • Relative Luminance of the display and displayed content, and ambient lighting. This has the primary effect on light adaptation. It is mostly in the hands of the user, who typically has a reasonable range of adjustment.
  • Visual acuity and contrast sensitivity of a given user due to visual impairments or cognitive issues. The breadth and variety of impairments, including normal vision through aging, makes this a challenging subject. Those with severe impairments may be able to use assistive technologies. This leaves the question: where is the bright line in terms of accommodation standards for designs where no assistive technology is used?

So while the display luminance and environment are outside of our control, they are not a mystery either. We know that ambient light is a consideration, as it lowers contrast. We know that the 80 nit specification for an sRGB monitor is long obsolete, when consumers have phones that can go well above 1200 nits, and even the cheapest phones can display 400 to 500 nits. We also know that modern LCD displays have better contrast and lower glare/flare than the CRTs some standards were written around.

Also with visual impairments, while there are many (I have personally experienced several), most are well understood. Severe impairments require assistive technology beyond what a designer can do. But within the designer's purview are things such as not locking zoom. I've personally encountered many sites I could not access because I was unable to zoom in to enlarge the text. Such restrictions on a user adjusting the display are completely unacceptable.

Similarly, designers going for form over function, or toward trendy hard to read fonts and color combinations (which technically PASS current WCAG standards, yet are illegible) is indeed a problem. Fortunately many browsers now offer a "reader mode" that disposes of all the trendy CSS de-enhancements.

A few considerations regarding common eyesight degradation:

  • Red and blue wavelengths are the farthest apart, which means they tend to create the greatest problems with chromatic aberrations (CA).
  • CA and scatter are substantial problems with eyes over 40.
  • Excessively high contrast increases CA and scatter.
  • While a bright white webpage with black text may help readability because the bright light causes pupil contraction (which improves sharpness), the bright background causes problems for older eyes with vitreous detachments and floaters.
  • Visual acuity and contrast sensitivity are only partially related. It is possible to have good acuity and poor CS and vice versa.
  • Colors affect acuity. Blue light is sensed by the eye's S cones, which make up only about 2% of the cone receptors in the eye, and they are scattered far away from the fovea (center of vision). Green and red light are sensed by the M and L cones, which are much denser and centered in the fovea. The implication here is that colors with a lot of red or green tend to be resolved the best.
  • People with color sensing deficiencies (color blindness) are most typically deficient in red cones, or sometimes green. As it happens though, red and green cones overlap substantially in terms of wavelength sensitivity. The implication is that when considering design for color vision problems, the designer cannot rely on the color contrast between red and green.

Screen Shot 2019-05-09 at 12 50 33 AM

The above red and green are at the same luminance (#FF0000 and #009400), so the luminance contrast is zero (or 1:1 in WCAG math). The blue should be readable as it is much darker, despite being at the maximum of #0000FF. This is because blue contributes only about 7% of luminance. In the above example, the blue is 2.1:1 per WCAG math, or SAPC3 28%.
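
For reference, these figures can be reproduced with the WCAG 2.x relative luminance and simple contrast formulas. A small sketch follows; the hex-parsing helper and function names are mine, not from any spec:

```javascript
// WCAG 2.x relative luminance and (L1 + 0.05)/(L2 + 0.05) contrast,
// used to verify the red/green/blue figures above.

function linearize(c8) {
  // 8-bit sRGB channel -> linear-light value, per the WCAG 2.x definition
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex) {
  const n = parseInt(hex.replace('#', ''), 16);
  return 0.2126 * linearize((n >> 16) & 0xff)
       + 0.7152 * linearize((n >> 8) & 0xff)
       + 0.0722 * linearize(n & 0xff);
}

function contrastRatio(hexA, hexB) {
  const ya = relativeLuminance(hexA), yb = relativeLuminance(hexB);
  return (Math.max(ya, yb) + 0.05) / (Math.min(ya, yb) + 0.05);
}

console.log(contrastRatio('#FF0000', '#009400').toFixed(2)); // ≈ 1.00 (same luminance)
console.log(contrastRatio('#FF0000', '#0000FF').toFixed(2)); // ≈ 2.15, the "2.1:1" above
```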

BELOW: The red and green are set at the same luminance as maximum blue.
(R #9E0000 G #005a00 B #0000FF).

Screen Shot 2019-05-09 at 12 56 58 AM

  • What this shows is that luminance contrast is more important than color contrast for readability. But it should also show that a normally sighted person may misinterpret the lack of a luminance contrast. In the top image, the red and green are at the same luminance, but the channel values are FF for red and 94 for green. To someone not familiar with color theory, it may seem that the green is at a lower luminance than the red, yet they are at the same luminance in sRGB.
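
The luminance-matched values above can be derived by inverting the sRGB transfer curve. A sketch, assuming the WCAG 2.x linearization; the function names are mine:

```javascript
// Inverting the WCAG 2.x transfer curve to find the 8-bit value that
// gives one sRGB channel a chosen luminance contribution.

function delinearize(lin) {
  // inverse of the WCAG 2.x linearization
  return lin <= 0.0030402 ? lin * 12.92 : 1.055 * Math.pow(lin, 1 / 2.4) - 0.055;
}

function channelForLuminance(targetY, coefficient) {
  // coefficient: 0.2126 (R), 0.7152 (G), or 0.0722 (B)
  return Math.round(255 * delinearize(targetY / coefficient));
}

const blueY = 0.0722; // relative luminance of #0000FF

console.log(channelForLuminance(blueY, 0.2126)); // 158 = 0x9E -> #9E0000
console.log(channelForLuminance(blueY, 0.7152)); // 89, ≈ 0x5A -> matches #005A00
```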

There are several related issues here on GitHub
They are this one (695), and also #665, #713, #360, #236, #700, and #346. The research I am discussing in this thread applies to a number of other issues, including these. All are tightly intertwined in terms of how a standard (for WCAG 3.0 and beyond) should be developed, but the main focus for THIS issue (695) is developing a method for easy programmatic contrast assessment that provides values that are perceptually uniform and therefore accurate relative to human perception.

Current Experiments
In the current experiments CE10 and CE12, I may have slightly over-compensated for darker color pairs, but I am still evaluating in different ambient conditions.

Once a stable method of assessment is found, the next step is a trial with test subjects to evaluate specific range limits for accessibility criterion. At the moment I am working on a webapp with the idea that people all over can go through the assessments using the device and environment they normally use. While that lacks a clinical control, if the sample size is large enough the data should be very instructive.

@Myndex (Author) commented May 9, 2019

PULL REQUEST FOR 2.2

Two weeks ago I indicated that an incremental pull request for a few ideas could/should be added for WCAG 2.2 (not a new equation, just a couple refinements). Those proposed items are:

1. Minimum and maximum luminance. Using the current WCAG math, set specific minimum luminance for the lightest element and maximum luminance for the darkest element. This should prevent some of the combinations that "pass" but are illegible.

AAA: a WCAG 7:1 contrast AND the lightest element no darker than #B0B0B0 (43.4% luminance).

AA: a WCAG 4.5:1 contrast AND the lightest element no darker than #999 (31.8% luminance).

A: a WCAG 3:1 contrast AND the lightest element no darker than #808080 (21.6% luminance).

2. Minimum padding. Using the WCAG contrast math, if a text container (a DIV or P etc.) has a background with a contrast ratio of more than 1.5:1 against the larger background it is on, it requires a minimum padding of 0.5em around the text. A padding of at least 1em is advised if the contrast between the DIV and the larger page background exceeds 3:1.

3. sRGB corrections. @svgeesus indicated he was going to do this, namely correct the sRGB math issue and remove the reference to the obsolete sRGB working draft.

Before I start forming the pull requests, I thought I'd bring these up again for discussion. The justifications for 1 and 2 are the experiments shown throughout this thread. The justification for 3 is that the math is wrong and should be corrected in a standards document.
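
To make proposals 1 and 2 concrete, here is one way they could be checked programmatically. This is an illustrative sketch, not normative text: the threshold constants come from the proposals above, the luminance/contrast math is the current WCAG 2.x formula, and the function names are hypothetical:

```javascript
// Hypothetical checkers for proposals 1 and 2.
// Luminance/contrast math follows WCAG 2.x (the formulas under discussion).

function linearize(c8) {
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex) {
  const n = parseInt(hex.replace('#', ''), 16);
  return 0.2126 * linearize((n >> 16) & 0xff)
       + 0.7152 * linearize((n >> 8) & 0xff)
       + 0.0722 * linearize(n & 0xff);
}

function contrastRatio(hexA, hexB) {
  const ya = relativeLuminance(hexA), yb = relativeLuminance(hexB);
  return (Math.max(ya, yb) + 0.05) / (Math.min(ya, yb) + 0.05);
}

// Proposal 1: require the WCAG ratio AND a luminance floor on the lighter color.
const LEVELS = {
  AAA: { minRatio: 7,   minY: 0.434 }, // lightest no darker than #B0B0B0
  AA:  { minRatio: 4.5, minY: 0.318 }, // lightest no darker than #999999
  A:   { minRatio: 3,   minY: 0.216 }, // lightest no darker than #808080
};

function passesProposed(level, fg, bg) {
  const { minRatio, minY } = LEVELS[level];
  return contrastRatio(fg, bg) >= minRatio
      && Math.max(relativeLuminance(fg), relativeLuminance(bg)) >= minY;
}

// Proposal 2: minimum padding for a text container vs. the page background.
function minPaddingEm(containerBg, pageBg) {
  const ratio = contrastRatio(containerBg, pageBg);
  if (ratio > 3) return 1;     // at least 1em advised above 3:1
  if (ratio > 1.5) return 0.5; // at least 0.5em above 1.5:1
  return 0;                    // near-match: no extra padding required
}

// #777 on black passes plain WCAG AA (about 4.7:1) but fails the proposed
// floor, since #777's relative luminance is only about 18%.
console.log(passesProposed('AA', '#777777', '#000000')); // false
console.log(passesProposed('AA', '#FFFFFF', '#767676')); // true
console.log(minPaddingEm('#666666', '#FFFFFF'));         // 1
```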

@alastc (Contributor) commented May 9, 2019

In order of magnitude:

(3) sRGB corrections, if @svgeesus is tackling those, best to leave that part alone.

(1) Including min/max luminance, could you point to which of the many bits posted above shows that?

I'd rather get past "should prevent" to having some evidence of preventing that (i.e. testing with people), and to understand what sort of combinations would now be ruled out (as that can have legal implications).

The current SC text says "contrast ratio of at least 4.5:1", with no caveat on that, so it would be a large change to a current SC, which is more difficult to do.

(2) Min padding: This requires more discussion, it changes the testing criteria quite a lot and starts to dictate the design.

For example, is a measure relative to the font size necessary? Larger text has larger padding, but does that follow from how perception works? Naively, I might assume that smaller text would need proportionally larger padding.

Also, must it be extra padding? If you don't have that sort of padding, would increasing the contrast on the inner background have a similar effect?

@Myndex (Author) commented May 9, 2019

Hi @alastc

In post #695 (comment) above

About halfway down that post it shows the need for padding, using an example WebAIM had on their site showing how 4.5:1 does not ensure legible contrast in that case. Here is the example image for convenience:
LocalAdaptation

That was the post where I brought local adaptation and surround effects into the discussion.

The Suggested Limit Values
The figures I listed in my above post were taken from experiment CE10, as limit levels that would prevent problems such as the dark blue text problem in that clip in post 483805436.

The thinking is that it would be a good incremental change that would help "pave the way" toward an equation change in WCAG 3.0

Perhaps it should be listed as an "advisory"? As in: "As an advisory for future accessibility standards, it is recommended to limit the brightest color to be no darker than..."

  1. This again is interdependent with the other "designer control" factors listed in post #695 (comment) from late yesterday: font size, weight, luminance contrast, padding & spacing, and secondary elements.

To answer your contrast question: yes, all five factors work together. So there are certainly contrast combos that don't "need" padding, which is why I mentioned relative contrast as a key to padding size. Here's the whole tidbit from WebAIM:

Screen Shot 2019-04-13 at 1 29 03 PM

The yellow text container has almost no contrast against the white (1.07:1), so it would not need padding per the proposed spec. The blue/grey one, though: the grey is at 3.9:1 contrast, so per my suggestion it would need 1em of padding as it is over 3:1. However, yes, if the dark blue text were instead white like the surrounding background, then padding would probably not be needed, as that white text would be close to the white background the eye is adapted to, and the grey would in essence be a single contrasting element.

So here is that example piece again, but replacing the dark blue text with white, making the text 3.9:1 against the grey:

Screen Shot 2019-05-09 at 7 36 54 AM

And if we make the text and container darker: here the text is at 1.5:1, and the grey is darker to maintain 4.5:1.
Screen Shot 2019-05-09 at 7 43 02 AM

So with this example it would appear that, so long as either the DIV or the text it contains is at a contrast of less than 1.5:1 against the larger background, padding isn't specifically needed, as opposed to what I originally suggested regarding just the DIV's contrast as the deciding factor.

(Just to note here as another example, the top one with white text is WCAG 3.9:1 contrast and the lower one with grey text is WCAG 4.5:1 contrast, but the grey text is a lower perceptual contrast despite being a higher reported contrast from the current math.)

As to size, I was thinking relative to body/block text. But a big bold headline font would likely not need a full 1em around it. So perhaps em is not the right unit to use; I personally use em because it is relative and thus useful in responsive situations.


So I understand your concerns about both of these. Nevertheless, I hope I have provided ample support for them throughout this thread. Even though they are not the prime targets, they are both part of a path toward a unified and perceptually accurate visual assessment criterion.

But if making them as a "standard" or rule is too much, perhaps there is a way to provide recommendations that are not "hard rules" with the idea that those recommendations will evolve into rules/standards at a later date. This would allow for some "easing" into larger changes, letting designers and testers know the direction something is going to change to.

@Myndex (Author) commented May 10, 2019

Updated Evaluation Pages.

CE14 (Dark text on light BG) and CE15 (Light on dark) are up.

There is a serial number on each patch, which relates similar patches between CE14 and CE15 for contrasts of 40% and up. The serial number should help in discussion, as there are over 100 patches per page.

Patches are grouped by contrast, in groups for 100%, 80%, 70%, 60%, 50%, 40%, and several lower contrast groups including threshold tests.

I anticipate that 100% will relate to WCAG 7:1, 70% to 4.5:1, and 50% to 3:1, and I believe 25% to 30% will be a good target for a DIV against a background. All to be determined by a live study.

NOTE: this math (SAPC) is very slightly opinionated in that the dark values are given a little more weight, as they are most affected by ambient light. Also, depending on monitor settings, you may perceive a very slight increase in contrast in the middle greys; this is related to the "system gamma gain", which is not explicitly counteracted here (as this is not an encoding step). This effect varies depending on monitor settings.

The serial number is at the end of the largest line of text, such as #CE14-19. This should make it easier if anyone has questions or comments.

Screen Shot 2019-05-09 at 10 29 47 PM

As a side note, and to emphasize the importance of surround effects, in the earlier post above, this little tidbit:

Screen Shot 2019-05-09 at 10 42 07 PM

Those colors are WCAG 4.5:1 contrast yet terribly hard to read. Here are those exact same color values, but on a DIV with ample padding:

Screen Shot 2019-05-09 at 10 39 46 PM

In the case of a bright white page like this one on GitHub, dark colors like this require ample padding.

I will answer any questions or comments, or further discussion, but otherwise I'm going to let these issues rest a bit to sleep on it.

@WayneEDick commented May 24, 2019