
Support V2 Test Format build #997

Merged (68 commits) on Nov 30, 2023

Conversation

@howard-e (Contributor) commented Oct 12, 2023

Preview Tests

This PR adds support for building review pages for the new v2 Test Format, along with supporting more specific requirements detailed in #977 (for making changes to the review page, not the test page).

Any work in this PR related to the v2maker files is included in #990 and should be reviewed separately there.

I've also submitted #998, which creates the v2-format files for all the tests.

Example changes exercising the newly supported features (assertion priority changes, tokenized assertions, and refs) have been set up for #998's alert pattern.

Note: The V1 Test Format is still supported.

Changes to the scripts:

  • npm run build - Checks whether relevant v2 test format files exist and builds the review pages from them first; otherwise it falls back to building with the v1 test format files.
  • npm run build-v1 - Explicitly builds the review pages using the v1 test format in verbose mode.
  • npm run build-v2 - Explicitly builds the review pages using the v2 test format in verbose mode. This assumes valid v2 test format files exist in a pattern's directory; they can be created with npm run make-v2 {pattern} if they don't. (DO NOT MERGE: Build v2 test format with "make-v2'ed" CSVs #998 has created these files for all the directories.)

Known enhancements which can be a part of this PR or added in a follow-up:

  • Handling space-separated settings. The use case this aims to solve can already be supported, so is this requirement still needed? (See v2 Test Format: Space-separated settings in AT_KEY-commands.csv #1002)
  • Capitalize single length instances of commands (letters). Capitalizing the values in commands.json may be best. (not doing)
  • Move <root>/tests/resources to <root>/resources.
  • Update the individual test page to also account for 0-level assertion exceptions.

mcking65 and others added 30 commits September 25, 2023 13:35
support.json:
* add strings to support screen reader settings and assertion tokens.
* Add URL data for ARIA and HTML-AAM specifications.

commands.json: Add all the command string representations necessary for the new V2 test format.
@howard-e (Contributor, Author) commented Nov 2, 2023

I am seeing some consistent problems across all test plans with the preview. For example, consider Color Viewer Slider Test Plan | For Pattern: horizontal-slider.

@mcking65 Before continuing (and because I haven't gotten the chance to share out the full details of the updates), there are 2 views being represented here.

This PR uses only the v1 format of the files against the newly described content of #977. #998 is a supporting PR that shows what the content looks like with the actual v2 format created by the make-v2 script. I'd rather not include those v2 files in this PR: we can leave that to the test authors as they recreate the files for each test pattern (making updates where necessary), and keeping both formats here also demonstrates feature parity since we're supporting both v1 and v2 for now.

With that in mind, could you re-confirm your follow up points after going through https://deploy-preview-998--aria-at.netlify.app/review/horizontal-slider instead?

  1. Shows 21 tests. But there are only 9 tests in tests.csv.

This seems consistent with what is currently displayed at https://aria-at.netlify.app/review/horizontal-slider. What is the expectation given that they have to be used in combinations across the *-commands.csvs?

@mcking65 (Contributor) commented Nov 9, 2023

@howard-e The number of tests is now correct in the V2 previews. The presentation of screen readers and commands under each screen reader is now correct in the V2 format.

There are a couple of strings that should be adjusted; I need to check if they are coming from the JSON. James and I discussed using details/summary elements for the settings-related instructions. I know I prototyped that, but I don't know if I specified it. I will check on that soon. These are really just cosmetic issues that will have no impact on the downstream work on the other P0 requirements in the refactor project.

Regarding the following notes you wrote about unfinished work ...

Capitalize single length instances of commands (letters). Capitalizing the values in commands.json may be best.

We are not going to capitalize them. The command "b" is not the same as the command "B". The single-letter commands are case-sensitive. "Shift+b" is the same as "B", and it is more clear to specify "Shift+b". Apple does this in its settings. If you look at the VoiceOver settings for Quick Nav, they write "B" and "b" as two different commands. We plan to specify "b" and "Shift+b" to avoid any ambiguity. Similarly, we will specify "1" and "Shift+1", not "1" and "!".
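The convention described above (always write `Shift+b` rather than `B`, and `Shift+1` rather than `!`) could be enforced with a small normalizer. This is a hypothetical sketch, not code from the repository; the shifted-symbol map is a partial, US-layout assumption.

```javascript
// Partial map of shifted punctuation back to its unshifted key
// (US layout; illustrative assumption, not the project's actual table).
const SHIFTED_SYMBOLS = { '!': '1', '@': '2', '#': '3', '$': '4', '%': '5' };

// Rewrite ambiguous single-character commands as explicit Shift+ forms:
// "B" -> "Shift+b", "!" -> "Shift+1". Lowercase letters and multi-key
// commands like "Shift+b" are already unambiguous and pass through.
function normalizeCommand(command) {
  if (command.length !== 1) return command;
  if (/[A-Z]/.test(command)) return `Shift+${command.toLowerCase()}`;
  if (command in SHIFTED_SYMBOLS) return `Shift+${SHIFTED_SYMBOLS[command]}`;
  return command;
}
```

This keeps "b" and "Shift+b" distinct, matching the case-sensitive Quick Nav behavior described above.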

Update the individual test page to also account for 0-level assertion exceptions.

I am not sure I understand what you mean by "individual test page". Is "0:Assertion_ID" not yet supported if it is specified in AT_NAME-commands.csv?

@howard-e (Contributor, Author) commented:

Capitalize single length instances of commands (letters). Capitalizing the values in commands.json may be best.

We are not going to capitalize them. The command "b" is not the same as the command "B". The single-letter commands are case-sensitive. "Shift+b" is the same as "B", and it is more clear to specify "Shift+b". Apple does this in its settings. If you look at the VoiceOver settings for Quick Nav, they write "B" and "b" as two different commands. We plan to specify "b" and "Shift+b" to avoid any ambiguity. Similarly, we will specify "1" and "Shift+1", not "1" and "!".

Thanks for that context!

Update the individual test page to also account for 0-level assertion exceptions.

I am not sure I understand what you mean by "individual test page". Is "0:Assertion_ID" not yet supported if it is specified in AT_NAME-commands.csv?

It is supported on the review page (e.g., Test 1: Trigger an alert > JAWS > Enter (virtual cursor active) does not show the 'MAY'-priority assertion 'convey some phrase' that was included as a test, but it is present for the other commands).

On the JAWS collection page, it is still shown under Enter (virtual cursor active). I'd think we don't want to display it there either.
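The behavior being discussed amounts to dropping assertions whose per-command priority override is 0 before rendering a command's assertion list. A sketch under assumed data shapes (the field names `id`, `priority`, `assertionId` are illustrative, not the project's actual model):

```javascript
// assertions: [{ id, priority }], the test's full assertion list.
// exceptions: [{ assertionId, priority }], per-command overrides parsed
// from entries like "0:switchToPcCursor" in AT_KEY-commands.csv.
function assertionsForCommand(assertions, exceptions) {
  const overrides = new Map(exceptions.map(e => [e.assertionId, e.priority]));
  return assertions
    .map(a => ({
      ...a,
      priority: overrides.has(a.id) ? overrides.get(a.id) : a.priority,
    }))
    // Priority 0 means "do not test this assertion for this command",
    // so it should be excluded from review and collection pages alike.
    .filter(a => a.priority !== 0);
}
```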

#1003)

* Initial rendering pass

* Initial interaction with assertion checkbox

* Cleanup and use assertionResponseQuestion

* Remove unused required validation, Ensure input fires state change event

* Use assertionResponseQuestion in v1 tests as well
@mcking65 (Contributor) left a comment


Looking good now. We should merge this ASAP to unblock reviews of the V2 tests.

However, we need to fast follow with two additional pull requests.

Change 1: Support for 0-priority assertions in the test collection form

It is critical that 0-priority assertions do not show up in the collection form. For example, the assertion

JAWS switched from virtual cursor active to PC cursor active

in JAWS Trigger an alert.

Change 2: Remove Success Criteria section from collection form

Similar to the reports and single page view of the test plan in the app, we need to remove the following content from the collection form:

Success Criteria

To pass this test, JAWS needs to meet all the following assertions when each specified command is executed:

  1. Role 'alert' is conveyed
  2. Text 'Hello' is conveyed
  3. JAWS switched from virtual cursor active to PC cursor active

We have removed the concept of passing a complete test; we pass/fail individual assertions but at the level of a test, we simply track the number/percentage of passing and failing assertions of each priority, so this content is no longer relevant.
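The per-test tracking described here can be sketched as a tally of assertion verdicts grouped by priority, rather than a single pass/fail verdict for the test. The field names below are illustrative assumptions, not the app's actual schema:

```javascript
// results: [{ priority: number, passed: boolean }, ...] — one entry per
// assertion verdict recorded for a test.
// Returns counts per priority, e.g. { 1: { passed: 2, failed: 1 }, ... },
// instead of a test-level pass/fail.
function tallyByPriority(results) {
  const tally = {};
  for (const { priority, passed } of results) {
    tally[priority] ??= { passed: 0, failed: 0 };
    tally[priority][passed ? 'passed' : 'failed'] += 1;
  }
  return tally;
}
```

Percentages per priority then follow directly from `passed / (passed + failed)` in each bucket.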
