
Develop report for reviewing test assertions, instructions, and metadata #37

Closed
mcking65 opened this issue Jan 23, 2020 · 13 comments

@mcking65 (Contributor)

After the tests for an APG pattern are written in HTML and JavaScript in the WPT format, we need a way for people to easily read through the tests. Without a report of what is in the HTML files that hold the WPT format, you would have to read the HTML source or run the tests, neither of which is a practical solution for reviewing the test plan for a pattern or validating whether the encoded expectations are correct.

Uses:

  • The test author needs to review the test plan to ensure there are no mistakes in the test assertions and documentation.
  • Test plans need peer review to ensure they are complete and accurate.
  • Assistive technology developers need to review the expectations for their product.

When generating the report, we should be able to choose the assistive technologies that are included. Uses:

  • Commands or instructions for a subset of applicable assistive technologies have been edited and need to be reviewed.
  • We are providing the report to a specific assistive technology developer.
@mcking65 (Contributor, Author)

Here is a proposal for the format of the report ...

Tests for pattern_name

  • Total number of tests: N

Test test_number: Test Title

  • Task:
  • Last revised:
  • Applies to:

name_of_assistive_tech_1

Instructions

  1. Instruction 1
  2. Instruction 2
  3. ...
  4. specific_user_instruction using the following commands:
    • Key command 1
    • Key command 2
    • ...

Asserted expectations for name_of_assistive_tech_1

Assertion      Priority
Assertion 1    Priority 1
Assertion 2    Priority 2
...            ...

name_of_assistive_tech_2

Instructions

  1. Instruction 1
  2. Instruction 2
  3. ...
  4. specific_user_instruction using the following commands:
    • Key command 1
    • Key command 2
    • ...

Asserted expectations for name_of_assistive_tech_2

Assertion      Priority
Assertion 1    Priority 1
Assertion 2    Priority 2
...            ...
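To make the proposed structure concrete, here is a minimal sketch of how such a report could be assembled. The `tests` data shape below is purely hypothetical (the actual ARIA-AT tests are stored as HTML in the WPT format, and the real report script may differ); it only illustrates how the sections above would be stitched together.

```python
# Sketch of assembling the proposed review report from structured test
# data. The "tests" schema here is hypothetical, not the actual ARIA-AT
# file format (which stores tests as HTML in the WPT format).

def render_report(pattern_name, tests):
    lines = [f"Tests for {pattern_name}", "",
             f"  * Total number of tests: {len(tests)}", ""]
    for number, test in enumerate(tests, start=1):
        lines += [f"Test {number}: {test['title']}", "",
                  f"  * Task: {test['task']}",
                  f"  * Last revised: {test['last_revised']}",
                  f"  * Applies to: {test['applies_to']}", ""]
        for at_name, at in test["ats"].items():
            lines += [at_name, "", "Instructions", ""]
            for i, step in enumerate(at["instructions"], start=1):
                lines.append(f"  {i}. {step}")
            lines += ["", f"Asserted expectations for {at_name}", ""]
            for assertion, priority in at["assertions"]:
                lines.append(f"  {assertion}: {priority}")
            lines.append("")
    return "\n".join(lines)

example = [{
    "title": "Navigating to empty, editable combobox conveys role",
    "task": "activate combobox",
    "last_revised": "December 17, 2019",
    "applies_to": "Screen Readers",
    "ats": {"JAWS": {
        "instructions": ["Put JAWS into Virtual Cursor Mode using Insert+Z"],
        "assertions": [("The role 'combobox' is conveyed", "Must Have")],
    }},
}]

print(render_report("Combobox", example))
```

Keeping the assertions nested under each AT (rather than once per test) mirrors the format above, where different ATs may carry different assertions.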

@mcking65 (Contributor, Author)

Here's an example that uses the current, partially written combobox test plan. It currently has only 9 tests, and they are so far written only for JAWS. This shows information for 3 of the tests.

Tests for Combobox

  • Total number of tests: 9

Test 1: Navigating to empty, editable combobox conveys role, name, editability, and state

  • Task: activate combobox
  • Last revised: December 17, 2019
  • Applies to: Screen Readers

JAWS

Instructions

  1. Put JAWS into Virtual Cursor Mode using Insert+Z
  2. Activate the 'state' combobox using the following commands:
    • Enter

Asserted expectations for JAWS

Assertion                                                     Priority
The screen reader switches to a mode that allows text input   Must Have

Test 2: Navigating to editable combobox switches mode from reading to interaction

  • Task: navigate to combobox with keys that switch modes
  • Last revised: December 17, 2019
  • Applies to: Screen Readers

JAWS

Instructions

  1. Put JAWS into Virtual Cursor Mode using Insert+Z
  2. Navigate to the 'state' combobox using the following commands:
    • Tab / Shift+Tab

Asserted expectations for JAWS

Assertion                                  Priority
After the command, JAWS is in Forms Mode   Must Have

Test 3: Navigating to empty, editable combobox conveys role, name, editability, and state.

  • Task: navigate to combobox
  • Last revised: December 17, 2019
  • Applies to: Screen Readers

JAWS

Instructions

  1. Put JAWS into Virtual Cursor Mode using Insert+Z
  2. Navigate to the 'state' combobox using the following commands:
    • C / Shift+C
    • F / Shift+F
    • Up Arrow / Down Arrow
    • Left Arrow / Right Arrow (with Smart Navigation set to Controls and Tables)

Asserted expectations for JAWS

Assertion                                               Priority
The role 'combobox' is conveyed                         Must Have
The name 'State' is spoken                              Must Have
Users are informed they can edit text in the combobox   Must Have
The collapsed state of the combobox is conveyed         Must Have

@spectranaut (Contributor)

Hi @mcking65 -- I have added review pages to the runner hosted here; check out the review of the checkbox tests and let me know if anything should change!

The page for combobox is also up, but those tests need to be fixed, which I'll do today. We changed the test format near the end of December, and the test review script also helped me identify a few mistakes I made when encoding them :)

@spectranaut (Contributor)

@mfairchild365 you might want to look at the review page for the checkbox tests above. Is it useful, and are there modifications that would make it more useful?

@spectranaut (Contributor)

Also I added a section to the Local Development for Runner and Tests wiki page to explain how to run the script.

@spectranaut (Contributor) commented Feb 12, 2020

Requests for changes from CG meeting:

  • Documentation of the setup script
  • A way to filter for only JAWS, NVDA, or VoiceOver
  • When the plan was last updated? Latest update date of any of the test files... or command file...?
  • Make the links actual links
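The requested AT filter could work along these lines. This is a sketch only, with a hypothetical data shape (tests holding per-AT sections keyed by AT name); the real review script's internals may differ.

```python
# Sketch of the requested AT filter: keep only the ATs the reviewer
# selected, and drop tests that have no remaining AT sections.
# The data shape is hypothetical, not the real ARIA-AT format.

def filter_ats(tests, wanted):
    wanted = {name.lower() for name in wanted}  # case-insensitive match
    filtered = []
    for test in tests:
        ats = {name: data for name, data in test["ats"].items()
               if name.lower() in wanted}
        if ats:  # skip tests with no sections for the chosen ATs
            filtered.append({**test, "ats": ats})
    return filtered

tests = [
    {"title": "Test 1", "ats": {"JAWS": {}, "NVDA": {}}},
    {"title": "Test 2", "ats": {"VoiceOver": {}}},
]

only_jaws = filter_ats(tests, ["jaws"])
print([t["title"] for t in only_jaws])   # only Test 1 survives the filter
print(list(only_jaws[0]["ats"]))         # and only its JAWS section is kept
```

Dropping tests with no remaining AT sections keeps the filtered report compact when it is handed to a single AT developer.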

@mfairchild365 (Contributor)

I noticed that the assertions are repeated for each relevant AT for each test. Should they only be listed once for each test?

@spectranaut (Contributor) commented Feb 13, 2020

@mfairchild365, @mcking65 originally asked for the assertions to be listed under every AT. Occasionally the assertions differ between ATs, but only when an assertion is not related to the speech output of the AT. For example, see this file that tests mode switching in a combobox.

Feel free to suggest a way to handle this! I'm not sure what is best.

@spectranaut (Contributor) commented Feb 22, 2020

Things to still do:

  • Documentation of the setup script
  • Update the links to have friendly link text
  • Use Must Have / Should Have / Nice to Have language instead of priority numbers

@spectranaut (Contributor)

@mcking65 I think I've done everything that has been requested by the CG, if you want to take another look: https://w3c.github.io/aria-at/review/menubar-editor.html

@mcking65 (Contributor, Author)

@spectranaut, looking great! I think we now have a very usable report!! On to content!

@mcking65 (Contributor, Author) commented Feb 26, 2020

@spectranaut, there are a couple more things that would be helpful.

As I also requested for test results, add a breadcrumb trail at the top of the page, e.g.:

<nav aria-label="breadcrumbs">
  <a>ARIA-AT Home</a> &gt; <a>Test Plans</a>
</nav>

Also, we should have a link to the test page. We could have one link at the beginning, just after the H1, or it could be part of the list at the start of each test. The downside of the latter is that it may imply the setup script would be run, which we do not need. That's the only reason I lean toward having it linked in just one place.

@spectranaut (Contributor)

This report has been completed.
