
How to reset state within a test #358

Open
zcorpan opened this issue Dec 16, 2020 · 3 comments
Labels
test-runner tests About assistive technology tests

Comments

zcorpan (Member) commented Dec 16, 2020

In #349 (comment) I asked:

Does the list of commands above match how we think a tester ought to run the test (when running manually)? I've made the assumption that a tester would want to "reset" between different key commands to perform the task, and used "h" and "Shift+h" for that to navigate to a heading.

This applies both when testing manually and in an automated setup.

@jscholes replied that it may be better to reload the test.

I think we haven't come to a conclusion on this, so opening a dedicated issue.

jscholes (Contributor) commented:

@zcorpan Thanks for opening this. I think expecting human testers to infer the right way to go about testing multiple commands is fair when we're dealing with navigation to a specific element, or with simpler tests. My concern is that as the patterns under test become more complex, the way testers will have to reset becomes more challenging. For example:

  • We ask testers to transition a control from state A to state B using two commands, Space and Enter. It is implied that once the control is in state B after the first command, the tester will have to place it back in state A to carry out the second. This is not as obvious as just finding a heading, and may cause testers to inadvertently test the transition from state B to state A with the second command.
  • We use a setup script to carry out several steps, such as expanding a combobox and placing focus on a specific item within the popup, or opening multiple nested modals. We do include a description of the setup script's actions, but it's not guaranteed that testers will follow them correctly or at all. In essence, a setup script crafted to increase consistency will only be executed for the first command in some cases.

I know there is some school of thought that setup script usage should be limited, and that we should trust human testers to put up some of the scaffolding themselves. But for automation, those steps will need to be scripted regardless, so I'm in favour of setup scripts assisting human testers to the greatest feasible degree.
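For automation, a multi-step setup script like the combobox example above has to be expressed as code in any case. A minimal sketch of what that could look like, modeling the steps as a pure function (the state shape, option values, and step names are assumptions for illustration, not part of any ARIA-AT format):

```javascript
// Illustrative sketch (not actual ARIA-AT code) of a multi-step setup
// script modeled as a function that can be rerun before every command,
// so each command starts from the same state.
function setupComboboxTest() {
  // Step 1: start from the page's initial state.
  const state = {
    comboboxExpanded: false,
    focusedOption: null,
    options: ['Apple', 'Banana', 'Cherry'], // assumed example options
  };
  // Step 2: expand the combobox.
  state.comboboxExpanded = true;
  // Step 3: place focus on a specific item within the popup.
  state.focusedOption = state.options[1];
  return state;
}

const state = setupComboboxTest();
console.log(state.comboboxExpanded, state.focusedOption); // true Banana
```

Because the setup is a single function, rerunning it between commands gives automation the consistency jscholes describes; the open question is how to get human testers the same guarantee.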

There are multiple options that come to mind:

  1. Make each command its own test, so that users will be forced to open the test page from scratch each time. This would be a usability nightmare for human testers considering that some tests have up to six commands.
  2. Ask testers to reload the page between commands. At present, I don't believe setup scripts are executed when just pressing F5 or similar, so that would need to be resolved if that is the case.
  3. Prompt testers to explicitly close the test page between each command, perhaps by reorganising test pages to include the button to open the page multiple times rather than just once.
  4. Give human testers explicit instructions about how to reset state between commands. I'm not a fan of this, as we'll either need to:
    • provide two sets of these instructions, one for automation and one for humans; or
    • come up with an abstracted format which is machine-readable for automation but can also output human-readable instructions for testers.

The above are just some initial ideas. I'm sure there are additional aspects to consider.
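The "abstracted format" in option 4 could be sketched as a small machine-readable record from which human-readable instructions are rendered (the field names and rendering are invented for this example, not a proposed format):

```javascript
// Hedged sketch of option 4: a machine-readable reset description that
// automation can execute directly and that can also be rendered as
// human-readable instructions. All field names are illustrative.
const resetSpec = {
  action: 'navigateToHeading',
  keys: ['h'],          // forward navigation key
  reverseKeys: ['Shift+h'], // reverse navigation key
};

function toHumanInstructions(spec) {
  return (
    `To reset, press ${spec.keys.join(' then ')} ` +
    `(or ${spec.reverseKeys.join(' then ')}) to navigate back to the heading.`
  );
}

console.log(toHumanInstructions(resetSpec));
// To reset, press h (or Shift+h) to navigate back to the heading.
```

The attraction of this shape is a single source of truth: automation reads `keys`/`reverseKeys` directly, while testers see the rendered sentence, so the two sets of instructions cannot drift apart.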

zcorpan (Member Author) commented Dec 17, 2020

The test could have a "reset" button, which could execute JS if needed, or otherwise serve as a known starting position between commands.
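The reset-button idea can be sketched as follows, purely as an illustration (the API and state shape are invented for this example, not proposed ARIA-AT code): the test page keeps its setup steps in one function, and the Reset button simply reruns that function.

```javascript
// Minimal sketch of the "reset" button idea: setup steps live in one
// function so a Reset button (or automation) can rerun them to restore
// the starting state between commands. Names are illustrative.
function createTestPage(setup) {
  let state = null;
  return {
    reset() {
      state = setup(); // rerun the setup script from scratch
      return state;
    },
    get state() {
      return state;
    },
  };
}

// Example: a two-state toggle test. A command may leave the control in
// state "B"; pressing Reset returns it to state "A" for the next command.
const page = createTestPage(() => ({ control: 'A' }));
page.reset();
page.state.control = 'B'; // simulate the first command's effect
page.reset();             // back to the starting state
console.log(page.state.control); // A
```

This addresses both of jscholes' concerns: human testers get an explicit, visible reset affordance, and automation can invoke the same function programmatically.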

zcorpan (Member Author) commented Dec 18, 2020

In the CG call yesterday there was general consensus that a button in each test seemed like a good idea, and it could replace the current "setup scripts".

Minutes: https://www.w3.org/2020/12/17-aria-at-minutes.html#item05
