diff --git a/methods/functional-images/tests.html b/methods/functional-images/tests.html
index 06be64b0..fd81a29a 100644
--- a/methods/functional-images/tests.html
+++ b/methods/functional-images/tests.html
@@ -37,7 +37,11 @@

Procedure for HTML

Expected Results


Checks #2 and #3 are true.


Procedure for Technology Agnostic

@@ -51,7 +55,11 @@

Procedure for Technology Agnostic

Expected Results


Checks #2 and #3, or #2 and #4, or #2 and #5 are true.


@@ -77,7 +85,11 @@

Procedure [for HTML]

Expected Results


Checks #1 and #2 are true.


Expected Results


Checks #2 and #3, or #2 and #4, or #2 and #5 are true.

@@ -109,4 +124,4 @@

Holistic Tests

-
\ No newline at end of file
+

diff --git a/methods/text-equiv/tests.html b/methods/text-equiv/tests.html
index cb0caad8..ff5a2d3d 100644
--- a/methods/text-equiv/tests.html
+++ b/methods/text-equiv/tests.html
@@ -23,19 +23,18 @@

Atomic Tests

  • Check that the captions can be turned on and off

    Rating:


We want public feedback about whether Open Captions (burned-in captions) pass for Bronze level. Open captions are not text, so they cannot be customized for people with low vision or routed to a braille display for people who are deaf-blind.
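The distinction matters because closed captions are exposed as text tracks that users (or scripts) can switch on and off, while open captions are just pixels in the video frame. A minimal sketch of the toggle using the standard `textTracks` API; the `toggleCaptions` helper and the plain-object stand-in for a `TextTrack` are illustrative, not from the source:

```javascript
// Toggle a caption track between visible and hidden.
// Per the HTML spec, a TextTrack's `mode` is one of
// "showing", "hidden", or "disabled".
function toggleCaptions(track) {
  track.mode = track.mode === "showing" ? "hidden" : "showing";
  return track.mode;
}

// In a browser you would grab the track from a <video> element, e.g.
//   const track = document.querySelector("video").textTracks[0];
// Here a plain object stands in for a TextTrack so the sketch runs anywhere.
const track = { kind: "captions", label: "English", mode: "showing" };

console.log(toggleCaptions(track)); // "hidden"
console.log(toggleCaptions(track)); // "showing"
```

No equivalent hook exists for open captions, which is why they cannot satisfy a "can be turned on and off" check.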

diff --git a/requirements/index.html b/requirements/index.html
index 3544c05a..0e95b33b 100644
--- a/requirements/index.html
+++ b/requirements/index.html
@@ -109,7 +109,7 @@

    Usability

• Advocacy Tool: Orienting WCAG 3.0's usability toward a broad audience can strengthen general advocacy of digital accessibility. Improving the reach of WCAG 3.0 can raise awareness of accessibility considerations. Compelling information that is contextually relevant to the standards may also help convince audiences of any type.

    Conformance Model

There are several areas for exploration in how conformance can work. These opportunities may or may not be incorporated. They need to work together, and that interplay will be governed by the design principles.


    Maintenance

@@ -130,6 +130,47 @@

      Maintenance

      Governance: Utilize tools that allow interested parties to predict when issues important to them are being discussed. Maintain a backlog that reflects issues along with their status.

    Issue Severity


    This section is exploratory. Outstanding questions that need to be addressed include:

1. What to do with non-critical issues?
2. How to deal with people having different ideas on what is critical?
3. How do we incorporate context/process/task? Is that part of scoping, or issue severity? Both are important to the end result.
4. Can the matrix inform designation of functional categories? For example, the Text Alternative Available outcome.

    The Issue Severity Subgroup:

1. Demonstrated that it is possible to categorize tests by severity. See the critical severity worksheet for this information.
2. Added examples of critical failures to the tests for Text Alternative Available and Translates Speech And Non-Speech Audio.
3. Found that severity rating is best done at the test level: the higher the level the impact is assessed at, the less it aligns with the user experience.
4. Concluded that it would be best to incorporate task/context for the best alignment; this depends on scoping and how the task/process is defined.
5. Noted that it will be a lot of work to categorize each test with an impact level and the functional needs affected. It may be best to focus on the critical issues, and to do the categorization as we go along, when creating tests.
6. Noted that severity rating could contribute towards scoring, but there are many other open questions to do with scoring.
7. Noted that severity rating could also contribute towards prioritization, which could replace A/AA/AAA (at the test level).
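Categorizing each test by functional need and impact, as the subgroup's worksheet does, amounts to a small lookup structure. A sketch under stated assumptions — the test identifiers, functional-need labels, and the `criticalTestsFor` helper are illustrative, not defined by the source:

```javascript
// Illustrative severity matrix: each test is tagged with the
// functional needs it affects and an impact level.
const severityMatrix = {
  "text-alternative-available": {
    functionalNeeds: ["vision: no vision", "vision: limited vision"],
    impact: "critical",
  },
  "translates-speech-and-non-speech-audio": {
    functionalNeeds: ["hearing: no hearing", "hearing: limited hearing"],
    impact: "critical",
  },
};

// Listing the tests that are critical for a given functional need
// then becomes a simple filter over the matrix.
function criticalTestsFor(matrix, need) {
  return Object.keys(matrix).filter(
    (test) =>
      matrix[test].impact === "critical" &&
      matrix[test].functionalNeeds.includes(need)
  );
}

console.log(criticalTestsFor(severityMatrix, "vision: no vision"));
// ["text-alternative-available"]
```

Keeping the categorization in this shape as tests are authored is what makes the "categorize as we go along" recommendation cheap.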
Proposed Updates

• For the process of creating this: focus on critical issues. Don't try to rate high / medium / low.
• Use the critical severity matrix to categorize tests. (At least set up that process to define each test by functional need and impact.)
• We could use Critical / High for a level of conformance. For example:
  • "Bronze" could be an absence of any critical or high issues;
  • "Silver" could be an absence of any critical, high, or medium issues.
• We could use the severity scales as input to a post-testing process where you prioritize issues based on your context and tasks.
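One way to read the Bronze/Silver proposal above: a page's level is determined by the most severe unresolved issue. A minimal sketch of that mapping — the function name and severity vocabulary are assumptions, not defined by the source:

```javascript
// Derive a conformance level from the severities of open issues,
// following the proposal: Bronze = no critical or high issues,
// Silver = no critical, high, or medium issues.
function conformanceLevel(issueSeverities) {
  const has = (s) => issueSeverities.includes(s);
  if (has("critical") || has("high")) return "none";
  if (has("medium")) return "Bronze";
  return "Silver";
}

console.log(conformanceLevel(["low", "medium"])); // "Bronze"
console.log(conformanceLevel(["low"]));           // "Silver"
console.log(conformanceLevel(["critical"]));      // "none"
```

Because the levels nest (every Silver page is also Bronze), the function only needs to return the highest level the page still qualifies for.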

    Design Principles