
Requirements feedback #41

Closed
WilcoFiers opened this issue Jul 29, 2018 · 5 comments · Fixed by #732
Assignees
Labels
internal comment from a participant migration: other Issues that do not fall into the other three categories

Comments

@WilcoFiers
Contributor

WilcoFiers commented Jul 29, 2018

Some feedback on https://w3c.github.io/silver/requirements/index.html

1. Introduction: Explain how to solve

The introduction says "We need guidelines to: explain how to solve the problems they pose". I don't think it is the job of any requirement to explain "how". Its job should be to explain what, why, and for whom. One of the problems in WCAG 2, IMO, is that in several places it prescribes a solution as the only solution.

1.2.1 Readable

This seems to contradict the common wisdom that "if you build something for everyone, it works for no one". The ACT TF has identified two key requirements for testers: requirements must be unambiguous, and applicability must be objective. These areas cannot be sacrificed for the sake of readability, as an unclear "definition of done" is (in my opinion) far more problematic than a requirement that's difficult to understand.

1.2.2 Measurable Guidance: certain disabilities may not be measurable

I have a major problem with the statement that the needs of people with certain disabilities may not be measurable with a pass/fail statement. If you can put a number to it, you can turn it into a pass/fail requirement. We did exactly that with the 4.5:1 color contrast ratio. In a very real sense, if you can't measure something, it doesn't exist. The problem isn't that we can't measure certain requirements. The problem is that we don't currently have consensus on how to measure such things. That is a VERY different problem. Silver (or maybe WCAG 2.2) may need to develop new metrics. Conformance MUST be a pass or fail statement.
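As an aside, the 4.5:1 contrast example illustrates the point well: WCAG 2 defines relative luminance and contrast ratio as exact formulas, so the requirement is fully pass/fail-able by machine. A minimal sketch, following the WCAG 2 definitions (the function names here are illustrative, not from any spec or library):

```python
# WCAG 2.x contrast ratio: an objective metric that yields a pass/fail verdict.
# Formulas follow the WCAG 2 definitions of relative luminance and contrast
# ratio; the function names are illustrative.

def _linearize(channel_8bit: int) -> float:
    """Convert an 8-bit sRGB channel to its linear-light value."""
    c = channel_8bit / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa_normal_text(fg: tuple, bg: tuple) -> bool:
    """SC 1.4.3: normal-size text needs a ratio of at least 4.5:1."""
    return contrast_ratio(fg, bg) >= 4.5

# Black on white yields the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Note how #777777 gray on white comes out at roughly 4.48:1 and fails, while black passes: the threshold makes the "definition of done" unambiguous, which is exactly the property testers need.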

1.2.2 Task-Based Assessment

Is the suggestion here to abandon page scope? I am skeptical about expressing conformance based on tasks, at least in any form that doesn't significantly improve on defining accessible processes (which WCAG 2 already supports). I think we do need a way to express conformance in terms of websites and web apps; this is sorely lacking in WCAG 2 today. I would also be strongly in favour of adding some sort of component-based conformance model, since the web has a ton of third-party content. I have no problem with task-based conformance, but I think it is far lower down the priority list than these others.

1.2.2 Accessibility Supported

I think the accessibility support model needs more than "guidance". I think Silver should rethink responsibilities altogether. In WCAG 2, content authors are entirely responsible for accessibility. They are responsible for working around poor support of web standards in assistive technologies and user agents. They are even responsible for user-generated content. There is also a hidden layer under WCAG: for most SCs, users are assumed to be responsible for their own assistive technologies, but for some (color contrast, for instance) the content author is required to solve the problem without requiring assistive technologies.

1.2.2 Measurable: 100% passing content

One of the things I missed was this idea that content may conform to the requirements, even if there are still low impact issues on a page. I know this idea has been discussed in Silver, so I'm unsure why this didn't make it into the 1.2 section.

1.3.3 Governance:

Accessibility guidance and all supporting documentation should be as forward looking and future friendly as possible

This was a red flag for me. If Silver is meant to follow a more agile design process than WCAG 2, then this sounds like over-optimisation to me. If we're writing something that is supposed to go unchanged for a decade, this is the way to go. If we're building something that can be iterated upon, this isn't necessary. My suggestion would be to look for an architecture that is flexible and that can be iterated within. That includes having a predefined way for things to get deprecated and replaced over time.

@detlevhfischer

detlevhfischer commented Jan 23, 2019

I agree with most of what Wilco has said here.

The problem is that we don't currently have consensus on how to measure such things. That is a VERY different problem. Silver (or maybe WCAG 2.2) may need to develop new metrics. Conformance MUST be a pass or fail statement.

I would rephrase that: The CURRENT concept of conformance is that it is a pass / fail statement. As we all know, the problem is that conformance applies to a page, which often is a container for a lot of things. So there are often situations where the page fails, strictly speaking, because something on the page (possibly something very minor) failed.

If we retain the point of reference of "the page and all its states" (to keep things simple), I think the real issue is to determine tolerances: to find a way to define what kind of defect would be acceptable for something to still pass an SC. This is something every evaluator has to deal with, and deals with in practice on a regular basis, today often without making those tolerance choices explicit. The simplest thing would be to introduce a 'non-critical' flag; the issue is then captured for remediation and not swept under the carpet, but does not prevent conformance.

There have been attempts to create more fine-grained, instance-based metrics (UWEM, the Unified Web Evaluation Methodology), but in practice these have not caught on. In my opinion, quantitative instance-based metrics are just too complex and messy. They might work like a breeze if everything were testable automagically, but we know that the majority of SCs need at least semi-automatic checks.

The alternative is to accept human judgment as the final arbiter for pass/fail, and define processes where evaluator judgments (and the case base they may rest upon) can get calibrated, challenged, and adapted. The advantage is that both qualitative and quantitative issues can enter the judgment, and both are usually needed when assessing actual content. It comes down to two questions: A. Is this issue non-critical? B. How many of these issues on the page do I accept before PASS flips over into FAIL?

So if we keep a pass/fail based concept of conformance, one useful addition might be to flag whether content conforms with minor issues (within tolerances), or conforms straight away, and possibly make that as transparent and quantifiable as possible. (Re-reading the comment above, I see that Wilco has made a similar point under 1.2.2 Measurable: 100% passing content)
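The two questions above (is the issue non-critical, and how many minor issues before PASS flips to FAIL) amount to a very small decision rule. A minimal sketch, where the `Issue` type, the `critical` flag, and the tolerance threshold are all illustrative assumptions rather than anything proposed in the thread:

```python
# A sketch of the "tolerance" idea: a page can conform outright, conform
# with minor issues, or fail. The Issue type, the 'critical' flag, and the
# minor-issue threshold are illustrative assumptions, not part of any
# WCAG/Silver proposal.
from dataclasses import dataclass

@dataclass
class Issue:
    sc: str           # success criterion, e.g. "1.4.3"
    critical: bool    # question A: is this issue critical?

def page_verdict(issues: list, minor_tolerance: int = 3) -> str:
    """Question B: how many minor issues before PASS flips over into FAIL?"""
    if any(i.critical for i in issues):
        return "fail"
    if len(issues) > minor_tolerance:
        return "fail"
    return "pass with minor issues" if issues else "pass"

print(page_verdict([]))                                # pass
print(page_verdict([Issue("1.1.1", critical=False)]))  # pass with minor issues
print(page_verdict([Issue("2.1.1", critical=True)]))   # fail
```

The point of the sketch is only that making the tolerance explicit (a flag plus a threshold) turns today's implicit evaluator judgment into something transparent and reportable, without abandoning a pass/fail conformance statement.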

@detlevhfischer

I think we do need a way to express conformance in terms of websites and web apps.

I am not sure I understand. Can you elaborate? Do you see a category difference between them? (many websites have traits of applications). Do you envisage a different process for evaluating one or the other? At the moment the main issue in my experience is that you have to choose whether to lump states together and basically discuss them as part of one page (a process with several steps may be treated as one page) or define them as separate pages. Would you make that choice solely based on the URL characteristics? Or what are you after?

@jspellman jspellman added the section: Requirements Related to Silver Requirements document label Jan 25, 2021
@rachaelbradley rachaelbradley added internal comment from a participant and removed section: Requirements Related to Silver Requirements document labels Jan 27, 2022
@alastc
Contributor

alastc commented Feb 1, 2022

Partly addressed by #589

michael-n-cooper pushed a commit that referenced this issue Feb 1, 2022
* Suggested wording to address Issue #41

* Changes from the meeting.

* Typo

Co-authored-by: Alastair Campbell <ac@alastc.com>
github-actions bot added a commit that referenced this issue Feb 1, 2022
SHA: a8b4b52
Reason: push, by @michael-n-cooper

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
slauriat added a commit that referenced this issue Feb 18, 2022
Updated pull request with changes to the Measurable Guidance opportunity as a result of the Silver TF discussion 18 February in response to #41
See https://www.w3.org/2022/02/18-silver-minutes.html#ResolutionSummary
@rachaelbradley rachaelbradley added the migration: other Issues that do not fall into the other three categories label Aug 29, 2023
@WilcoFiers
Contributor Author

WilcoFiers commented Feb 13, 2024

Several comments are out of date and have been fully or partially addressed. PR #732 should address the remaining comments.

@alastc
Contributor

alastc commented Feb 13, 2024

@alastc will add a suggestion for accessibility supported.
