
core(a11y): update scoring weights based on severity #8823

Merged · 8 commits · May 3, 2019

Conversation

robdodson (Contributor):

Summary

Re-weight a11y audits using the same severity scale that axe uses in its docs.

Items that really should be critical failures were weighted too low.

https://dequeuniversity.com/rules/axe/3.2/

Related Issues/PRs

#3444

paulirish (Member) left a comment:

The aria-* ones got big upgrades, and so did meta-refresh; bypass got de-prioritized a bit.

And label/image-alt/link-name just aren't as big a deal as they were.

All fine with me, just calling it out.

paulirish (Member):

Can you leave a comment somewhere that indicates where we can pull these severity weights from? I feel like it's in the axe response somewhere…

patrickhulce (Collaborator) left a comment:

Seems like there's not a ton of differentiation anymore; should the gap between critical and minor issues be wider?

Originally it was also supposed to factor in how common each issue was; is that not much of a concern anymore?

robdodson (Contributor, Author) commented May 3, 2019 via email

robdodson (Contributor, Author):

OK, I staggered things a bit more, so minor is 1, serious is 3, and critical is 10. In testing, this seems to produce a result that feels about right.

This change is tricky because I think a lot of sites will see their scores go up but that's because we were penalizing them for issues that were common but not necessarily critical failures. On the flip side, there were critical failures that we were severely undercounting because they were uncommon.

For example, having a button without a label—if it's the only failure on the page—is currently worth ~21 points. But misspelling the button role on a div is only worth about 8 points. Without a proper role the div is not announced as a button to the user, so it feels like it's just as bad of a failure, but it's undercounted.

In the new scheme they are both worth something like 10 or 11 points.

Things that are common but not necessarily showstoppers, like color contrast, are downranked from critical to serious. A lot of sites will benefit from this, but it really didn't make sense to say it was "more severe" than having uncaptioned video or audio. Those are legitimate barriers to access, even if they are less common.

One thing that does make me feel better: the W3C bad-accessibility example pages all score lower in the new scheme than in current Lighthouse. That's because they have more legitimate barriers to access, so they get penalized more.
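To make the arithmetic concrete, here is a minimal sketch of a weighted-average scoring model (an assumption based on this discussion, not a copy of Lighthouse's actual source; the audit list and pass/fail scores are illustrative):

```js
// A category score as the weighted average of its audit scores, so a single
// failing audit costs (weight / totalWeight) * 100 points. Weights use the
// minor = 1, serious = 3, critical = 10 scale from this PR.
function categoryScore(auditRefs) {
  const totalWeight = auditRefs.reduce((sum, ref) => sum + ref.weight, 0);
  const weightedSum = auditRefs.reduce((sum, ref) => sum + ref.weight * ref.score, 0);
  return 100 * weightedSum / totalWeight;
}

// Toy category: one failing critical audit among otherwise passing audits.
const refs = [
  {id: 'button-name', weight: 10, score: 0},   // critical, failing
  {id: 'aria-roles', weight: 10, score: 1},    // critical, passing
  {id: 'meta-refresh', weight: 10, score: 1},  // critical, passing
  {id: 'color-contrast', weight: 3, score: 1}, // serious, passing
  {id: 'accesskeys', weight: 3, score: 1},     // serious, passing
  {id: 'tabindex', weight: 1, score: 1},       // minor, passing
];

// totalWeight = 37 and the failing weight is 10, so the score is
// 100 * 27 / 37 ≈ 73: the lone critical failure costs ~27 points here.
// The real category has many more audits, which dilutes the cost of any
// one failure toward the ~10-point figure discussed above.
console.log(categoryScore(refs).toFixed(0)); // "73"
```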

> Can you leave a comment somewhere that indicates where we can pull these severity weights from?

@paulirish, in the JSON or here on GitHub?

The ranking comes from Deque's scoring system. If you click on an audit listed at https://dequeuniversity.com/rules/axe/3.2/, there is a column on the right-hand side that shows severity. Everything falls into either minor, serious, or critical.
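For reference, the mapping described above as a plain object (the constant name is mine, for illustration; it does not appear in the Lighthouse source):

```js
// Deque severity (the "Severity" column at dequeuniversity.com/rules/axe/3.2)
// mapped to the Lighthouse audit weight introduced by this PR.
const SEVERITY_TO_WEIGHT = {
  minor: 1,
  serious: 3,
  critical: 10,
};
```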

robdodson requested a review from exterkamp as a code owner · May 3, 2019 05:38
robdodson changed the title from "Add new a11y weights." to "core(config): Add new a11y weights." · May 3, 2019
brendankenny (Member):

On `proto/sample_v2_round_trip.json`:

Sorry, I changed some stuff. You may need to `git checkout master -- proto/sample_v2_round_trip.json` and then run `yarn update:sample-json` again.

brendankenny changed the title from "core(config): Add new a11y weights." to "core(a11y): update scoring weights based on severity" · May 3, 2019
brendankenny (Member) left a comment:

Thanks @robdodson! New scores LGTM after we sort out this sample_v2 business

robdodson (Contributor, Author):

@brendankenny I have done the thing.

paulirish (Member):

> @paulirish, in the JSON or here on GitHub?

In `default-config.js`, right at the top of the `auditRefs` block.
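For context, a sketch of what that could look like (the comment wording, audit ids, weights, and group names here are illustrative; the real entries live in Lighthouse's default config):

```js
// Illustrative sketch of lighthouse-core/config/default-config.js:
module.exports = {
  // ...
  categories: {
    'accessibility': {
      title: 'Accessibility',
      auditRefs: [
        // Audit weights follow axe's severity rankings; see the "Severity"
        // column at https://dequeuniversity.com/rules/axe/3.2
        // (minor = 1, serious = 3, critical = 10).
        {id: 'accesskeys', weight: 3, group: 'a11y-navigation'},
        {id: 'aria-allowed-attr', weight: 10, group: 'a11y-aria'},
        {id: 'button-name', weight: 10, group: 'a11y-names-labels'},
        // ...
      ],
    },
  },
};
```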

robdodson (Contributor, Author):

Added

```diff
@@ -79,7 +79,8 @@ const UIStrings = {
  seoCategoryManualDescription: 'Run these additional validators on your site to check additional SEO best practices.',
  /* Title of the navigation section within the Search Engine Optimization (SEO) category. Within this section are audits with descriptive titles that highlight opportunities to make a page more usable on mobile devices. */
  seoMobileGroupTitle: 'Mobile Friendly',
  /* Description of the navigation section within the Search Engine Optimization (SEO) category. Within this section are audits with descriptive titles that highlight opportunities to make a page more usable on mobile devices. */
  /* Description of the navigation section within the Search Engine Optimization (SEO) category. Within this section are
```
A Member left a comment on this diff:

Errant hit of the return key? :) This is breaking the strings test (I think maybe we aren't set up to extract multi-line comments into message descriptions) and linting (no trailing spaces).

brendankenny (Member):

Thanks for getting this in so fast, @robdodson!

robdodson (Contributor, Author):

Sure thing! Thanks for being so accommodating, y'all.
