Define the role of automation #909
It looks like I'll need more than "just a moment" to create that event. I hope to post it here within 24 hours.
At the end of today's meeting, @mcking65 expressed a desire for AT responses to be collected by a human "at least once." I'd like to nail that down a little further (here in this discussion thread if possible, or during next week's one-off meeting if not).

First: does a manual collection for one AT satisfy the requirement for ALL ATs? In other words, do we want to require that AT responses are manually collected "at least once [for each AT]" or "at least once [for all ATs, collectively]"? (I have a similar question for web browsers.)

Second: should test modification refresh the need for manual validation?

Maybe another way to think about this is to consider which component (or group of components) the manual collection is intended to validate: the tests, the automation system, the browsers, or the ATs.
The meeting minutes are available on w3.org, along with the full IRC log of that discussion (ARIA-AT Community Group Automation Workstream).
@mcking65 as per our conversation on 2023-03-27, I've updated the name of this issue and added a checklist describing the next steps. Does that reflect your understanding of the work ahead?
@mcking65 In addition to the above, I'm wondering whether the following change makes sense for the app. Currently, the app includes "unexpected behaviors" in its description of test results. If I understand the new terminology correctly, it would be appropriate to call these "unexpected responses." That would be an improvement in my mind for a couple of reasons:
What do you think?
Since the inception of ARIA-AT, its participants have anticipated that some aspects of testing ATs will be automated. That general expectation has been sufficient to drive development of the requisite automation standards and tools, but it is too vague to inform the process of delivering AT interoperability reports. Indeed, over the years, participants have expressed a variety of expectations regarding the role of automation.
During today's meeting, we started discussing the extent to which we intend to integrate automation into the Working Mode.
We started off by recognizing that the term "test result" (as we have been using it) actually describes two different pieces of information:
In other words: a "test result" comprises an AT response and an assertion analysis.
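To make the distinction concrete, here is a minimal sketch of that decomposition as TypeScript types. All names here (`ATResponse`, `AssertionAnalysis`, `TestResult`, and the fields) are hypothetical illustrations, not identifiers from the ARIA-AT App codebase:

```typescript
// Hypothetical sketch of the two pieces that together form a "test result".

// Piece 1: the raw output captured from the assistive technology.
interface ATResponse {
  command: string; // e.g. "Down Arrow" (illustrative)
  output: string;  // verbatim speech/braille output from the AT
}

// Piece 2: a judgment (human or automated) of whether the response
// satisfies each of the test's assertions.
interface AssertionAnalysis {
  assertion: string;
  verdict: "pass" | "fail";
}

// A "test result" comprises an AT response and an assertion analysis.
interface TestResult {
  response: ATResponse;
  analyses: AssertionAnalysis[];
}

const example: TestResult = {
  response: {
    command: "Down Arrow",
    output: "Accept terms, checkbox, not checked",
  },
  analyses: [
    { assertion: "Role 'checkbox' is conveyed", verdict: "pass" },
  ],
};

// Automation can collect `response` while `analyses` is produced separately.
console.log(example.analyses.every((a) => a.verdict === "pass"));
```

Separating the two types makes it easier to discuss which piece automation collects (the response) versus which piece may still require human judgment (the analysis).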
We still had (at least) two open questions when the meeting adjourned:
This issue is intended to augment the meeting minutes and to host public asynchronous conversation. Folks in attendance today informally planned to hold a one-off meeting to continue the discussion synchronously (I'll post an invitation here in just a moment).

On 2023-03-27, we identified the following tasks:
An observation from @mcking65 during that meeting: "Just like the working mode doesn't say how to run the [ARIA-AT App] website, it won't say how to run the automation."