Proposal for simplifying user interface and test cycle administration #158
Comments
I think this is fine for V1 and I don't want my following comments to delay the development of V1. However, I did have some thoughts as I have been processing this. I think it might be worthwhile to take a step back and review how this fits in the process as a whole. From the working mode document:
Having two testers run the same test effectively doubles the effort (and potentially cost) required to produce reports. Is this level of effort and thoroughness necessary to produce sufficient results, given that they will be reviewed yet again by the AT developer? If we have limited resources, and we need to review the test results internally before sending them to an AT developer, is there a more efficient way to review the results? For example, publish a draft of the results and ask the community as a whole to review them? Or perhaps re-test only a sample of the results to identify where we should focus review and re-testing efforts?
It seems to assume testers have skills beyond the ability to run tests: copy and paste into a text editor, understand a text-based diff format, and decide whether a difference needs discussion or whether they should simply change their own result to match the other tester's. Can we assume those things? Can the tester choose to ignore the difference and continue to the next test (without filing an issue)? If yes, how do we deal with that?
The functionality described in this issue, or variations on it, has been implemented in the test runner. Closing the issue.
The pressing need to provide feedback on Isaac's work on the wireframes has led me to think longer and more deeply about our discussions of the process for arriving at consensus on test results so we can share them in reports. V1 of this UI is going to be built over the next 3 weeks. In our April 1 meeting, there seemed to be strong support for:
Designing this UI gets into the nitty-gritty details of real life for steps 11-13 of the process in our working mode document:
I have been thinking about:
I have some ideas to float with the group. I'll share them in story form to hopefully make them easy to understand.
Scenario 1: Tester1 and tester2 resolve result differences via GitHub and Admin publishes draft report
Tester1 and tester2 are assigned to run the latest test plan for the simple checkbox example with JAWS and Chrome. Let's say that plan has 16 tests. In this scenario, they get different results for the first of the 16 tests, and the difference is resolved via discussion in a GitHub issue. Here is how life unfolds for them.
Note: For simplicity, only some system actions are described in this story.
Observations
Scenario 2: Result differences resolved via GitHub and Admin makes corrections and publishes draft report
This scenario is identical to scenario 1, except that after consensus on how to resolve the differences is reached, tester1 is not available to make changes. So, the admin completes the process of editing tester1's results and then making the draft report.
Starting with the admin resolving differences:
Functions to support other scenarios
Discarding data:
Change/Remove assignment:
With these functions for discarding data and changing assignments, the system can accommodate a wide variety of scenarios, such as:
Publishing final reports
To get to the final report, the working mode summary is:
If the basic requirement is that the AT developer agrees with the data, one practical path might be:
Note: the AT vendor tester would have the same resolution paths, e.g., create an issue, discuss offline, and then come back and revise.
That might work for some AT vendors. Others might prefer to simply review the draft report and press some kind of agree button. Both paths would be possible.
Summary
The centerpiece of all of this is that:
These two ideas should reduce the number of screens, and I believe that will simplify both the experience and the build.
Finally, the quick-and-dirty V1 approach of presenting result differences only as a formatted text string that can be easily copied enables many ways of working (all fully accessible) and avoids building any kind of diffing/comparing UI. That same string can easily be formatted with markdown and used in GitHub comments.
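To illustrate the copyable-text-string idea, here is a minimal sketch of how two testers' serialized results could be turned into a standard unified diff that pastes cleanly into a GitHub comment. The `result_diff` helper and the sample result strings are hypothetical; the actual test runner's serialization format may differ.

```python
import difflib


def result_diff(a: str, b: str) -> list[str]:
    """Unified diff of two testers' serialized results.

    Hypothetical helper -- not part of the actual test runner.
    """
    return list(difflib.unified_diff(
        a.splitlines(), b.splitlines(),
        fromfile="tester1", tofile="tester2",
        lineterm="",
    ))


# Hypothetical serialized results for one of the 16 checkbox tests.
tester1_result = "Test 1: Navigate to checkbox\nRole: checkbox\nState: not checked"
tester2_result = "Test 1: Navigate to checkbox\nRole: checkbox\nState: unchecked"

# Print the diff; only the differing "State" line is marked with -/+.
for line in result_diff(tester1_result, tester2_result):
    print(line)
```

Wrapping the output in a fenced `diff` code block in a GitHub comment would give the `-`/`+` lines syntax highlighting for free, with no custom comparison UI to build.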