Develop assertion model for screen reader testing #5
Proposing a less granular model where:
Attaching a spreadsheet with examples. So far, it only includes checkbox testing.
Started testing the menubar and experimented with some other ideas on presentation. See the attached workbook, which has a new tab for the navigation menubar.
Made more progress refining naming tests and defining assertions. Also made the column names in the menubar spreadsheet friendlier (less technical) and added more columns for notes about test results. See the menubar tab in the attached workbook.
This is looking good @mcking65 - some comments:
One further thought: It might be possible to use this simplified data format for collection purposes, then convert the data to fit a more granular data store, which would include detailed information such as exactly which attributes and commands failed. This detailed information could then power a more granular and flexible frontend. However, this would require another manual step after initial data collection, and would depend on the quality of the notes taken.
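To make the two-step idea above concrete, here is a rough sketch of what the conversion might look like. This is purely illustrative: the field names, the `GranularAssertion` shape, and the `expand` helper are all hypothetical, not part of any actual ARIA-AT data model.

```python
# Hypothetical sketch: convert a coarse, collection-friendly result row
# into granular per-assertion records that a detailed data store or
# frontend could consume. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GranularAssertion:
    test: str        # which test the row came from
    command: str     # screen reader command exercised
    assertion: str   # the specific expectation that was checked
    passed: bool
    notes: str       # tester's free-form notes, carried along

def expand(coarse_record):
    """Split one coarse result row into one granular row per recorded
    failure. Depends entirely on what the tester wrote down."""
    rows = []
    for failure in coarse_record.get("failures", []):
        rows.append(GranularAssertion(
            test=coarse_record["test"],
            command=failure.get("command", "unknown"),
            assertion=failure["assertion"],
            passed=False,
            notes=coarse_record.get("notes", ""),
        ))
    return rows

# Example coarse row as a tester might record it:
coarse = {
    "test": "navigate to checkbox",
    "result": "fail",
    "notes": "role announced, state missing",
    "failures": [
        {"command": "X (next form field)",
         "assertion": "state 'checked' is conveyed"},
    ],
}
print(expand(coarse))
```

The manual-review risk mentioned above shows up clearly here: `expand` can only be as granular as the `failures` the tester bothered to note.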
A couple of thoughts. I recently put together a test for a11ysupport.io and tested 10 different AT/browser combinations. The whole process probably took 3 hours: 1 hour to set up the test (write expectations and an HTML example), and 2 hours to test against all of the AT/browser combinations. This was with very granular expectations for each feature, and support details including output and notes tied to multiple commands for each expectation. I thought this was fairly quick.

Some of the speed is probably due to the fact that I'm familiar with the system and the screen readers that I tested. However, I didn't attempt to test every possible command provided by the screen reader. My goal was to identify basic support rather than comprehensive support. If I tested every command, or even the majority of possible commands, I'm sure it would have taken a lot longer.

All of this is to say, I think it might be worthwhile to think about how many commands we want to test, and how we can address efficiencies at the UI layer (copying repeated output, etc.) while still providing granular information in the data model.
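One way the "copy repeated output" efficiency could coexist with a granular data model is to store each distinct screen reader output once and have each command reference it. The sketch below is a hypothetical normalization, not a proposed schema; the record shapes and sample output strings are made up for illustration.

```python
# Hypothetical sketch: several commands often produce identical screen
# reader output. Storing the output once and referencing it per command
# keeps data entry cheap while the model stays granular per command.

# Raw rows as a tester might enter them (sample strings are invented):
records = [
    {"command": "Down Arrow", "output": "Lettuce, checkbox, not checked"},
    {"command": "X",          "output": "Lettuce, checkbox, not checked"},
    {"command": "Tab",        "output": "Lettuce, checkbox, not checked"},
]

# Normalize into a unique-outputs table plus per-command references.
outputs = {}      # output text -> output id
normalized = []   # one granular row per command
for r in records:
    oid = outputs.setdefault(r["output"], len(outputs))
    normalized.append({"command": r["command"], "output_id": oid})

print(len(outputs))  # 1 distinct output shared by 3 commands
```

A UI "copy previous output" button would just reuse the existing `output_id`, so the tester types the output once but the data model still records a result for every command.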
For future reference, I'm attaching assertions for the menubar and checkbox as follows. Experimental assertions: Navigation Menubar (Aug. 5th), Checkbox (May 29th).
@mcking65 and I had a useful conversation about this on the call today. The minutes are here: https://www.w3.org/2019/08/28-aria-at-minutes.html. There is a mistake in my notes:
I believe that Matt wasn't talking about the "interface/table that a tester uses", but rather about the data model. Later in the notes, it becomes clear that we'd like the interface/table that testers use to be much simpler, and to only progressively disclose extra fields based on their answers.
I think this is mostly covered in the working mode doc, which is tracked in issue #41. Closing this issue -- please reopen or comment if something here is not yet addressed.
Objective: Determine how granular to make assertions when testing screen reader support for an element or widget pattern.