
Test writing process improvements #85

Open
spectranaut opened this issue Feb 26, 2020 · 4 comments

spectranaut commented Feb 26, 2020

Currently, the process for converting spreadsheets to tests is pretty wonky, and it took me hours to convert Matt's combobox tests using Jon's scripts. The process as we have it right now:

  1. Tests must be written in an Excel format that is not documented or automatically validated (so if the format is wrong you have to debug it manually), and the sheet must be exported to CSV every time you update it.
  2. Use Jon's two Python scripts to convert the CSV into test files and a commands.json file (during which there is also no data validation, so you might convert using an incorrect string for "applies to", or for the generic "task" or "key" commands).
  3. Use the npm review tests script (a Node script) to create test plans from the test files.

I'd like to propose we do the following instead:

  1. Have one script that takes the Excel sheet and produces the test files and test plan summaries, does automatic data validation, and tells you exactly which data is wrong or missing (a rough sketch of the idea is below).

I didn't do this yet because I wanted people to have experience writing tests before we locked down a test writing process. So my main question is: do we want to write tests in the Excel sheets that Jon and Yohta designed and Matt has now used? If we do, we will have to be very strict about the Excel sheet format, but the test conversion script can also tell you if your sheet is formatted incorrectly.
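To make that concrete, here is a minimal sketch (in Python, like Jon's scripts) of the validate-then-generate flow I have in mind. The column names, allowed values, and file paths below are placeholders for illustration only, not the real sheet format:

```python
#!/usr/bin/env python3
"""Sketch only: validate the exported CSV before generating anything.
Column names and allowed values are made-up placeholders."""
import csv
import sys

ALLOWED_APPLIES_TO = {"jaws", "nvda", "voiceover"}   # placeholder AT names
ALLOWED_MODES = {"reading", "interaction"}           # placeholder mode names

def validate(csv_path):
    errors = []
    with open(csv_path, newline="") as f:
        for row_num, row in enumerate(csv.DictReader(f), start=2):
            for at in row.get("appliesTo", "").split(","):
                if at.strip().lower() not in ALLOWED_APPLIES_TO:
                    errors.append(f"row {row_num}: unknown 'applies to' value {at.strip()!r}")
            if row.get("mode", "").strip() not in ALLOWED_MODES:
                errors.append(f"row {row_num}: unknown 'mode' value {row.get('mode')!r}")
    return errors

if __name__ == "__main__":
    problems = validate(sys.argv[1])
    if problems:
        print("\n".join(problems))
        sys.exit(1)  # refuse to generate anything from a bad sheet
    # ...only now generate the test files and test plan summaries...
```

The point is just that the script reports every problem with the exact row and field before it writes a single file.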

@mfairchild365

Will authoring tests in Excel be the long term solution? If not, it may be prudent to start work on an alternative sooner rather than later.

@mcking65

@mfairchild365 commented:

Will authoring tests in Excel be the long term solution? If not, it may be prudent to start work on an alternative sooner rather than later.

I've been giving this some thought. At first, I thought Excel might be a good solution for at least the medium term, but after having gone through the process of reviewing test plans, I really think not.

Writing test plans is hard, even harder to get right than I anticipated. There is going to be lots and lots and lots of revising, even after we have significant experience writing them. I am anticipating that we will get quite a bit of feedback on the plans. So, if we had to rely on Excel imports to do all that work, it would get very time consuming.

The number of times I wished to myself, "Dang, where is the edit button for this test" is already pretty significant.

Our current test plan review pages are pretty nice. I can easily imagine an "Add Test" button next to the H1 at the top, and an "Edit test" button next to each H2. In short, I think we need to invest in a test composer/editor.

In the meantime, @spectranaut, can you give @jongund some requirements for modifying his script so that we can at least make importing reasonably efficient in the near term? That would enable us to keep moving forward with test writing until we have the infrastructure in place to build a test composer and keep you focused on building that infra.

spectranaut commented Mar 2, 2020

Ok, advice for @jongund about the script:

  1. The scripts Jon wrote should be moved to a scripts/ directory so they can be used for generating more tests.

    a. This will involve updating the scripts' relative paths, specifically where the test files and commands.json files are written to. This should probably be a command line argument that gives the directory name for the design pattern (like "checkbox" or "menubar-editor"). Then, if the exported CSV files are always named the same thing and kept in the data/ directory, you won't have to pass them as a command line argument (because the script can just look for a file of that name).

    b. You could just have one script, pass it the directory for the example, and it could look for all the appropriate files and create all the appropriate files. It would remove one step of work!

  2. The following data validation should be done for the tests:

    a. The commands.json file should be loaded by the script that produces test files, and it should make sure that every "task" has a corresponding entry in commands.json (also cross referencing the "applies to" field).

    b. There should be data validation on the string values provided for the "applies to" field.

    c. There should be data validation on the string value for "mode".

    d. There should be data validation on the key strings -- the ones that correspond to keys or key combinations in tests/resources/keys.mjs. This file could probably be consumed and read by the Python script for this data validation (see the sketch after this list).

  3. All tests should be deleted by the script before they are re-written. This is necessary because if a test is deleted from the Excel/CSV file, its file needs to be deleted from the directory.
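To make 2a and 2d more concrete, here is a rough sketch of the cross-referencing I mean. The CSV column names, the assumption that commands.json is keyed by the task string, and the way key constants are pulled out of keys.mjs are all guesses for illustration; the real script would need to match the actual formats:

```python
#!/usr/bin/env python3
"""Sketch only: cross-reference the CSV against commands.json and keys.mjs.
File shapes and column names are assumptions, not the real formats."""
import csv
import json
import re
import sys

def load_key_names(keys_mjs_path):
    # Assumes keys.mjs contains lines like: export const TAB = 'Tab';
    with open(keys_mjs_path) as f:
        return set(re.findall(r"export\s+const\s+(\w+)", f.read()))

def validate(tests_csv, commands_json, keys_mjs):
    with open(commands_json) as f:
        commands = json.load(f)          # assumed to be keyed by task string
    key_names = load_key_names(keys_mjs)
    errors = []
    with open(tests_csv, newline="") as f:
        for row_num, row in enumerate(csv.DictReader(f), start=2):
            task = row.get("task", "").strip()
            if task not in commands:
                errors.append(f"row {row_num}: task {task!r} has no entry in commands.json")
            for key in row.get("keys", "").split():
                if key not in key_names:
                    errors.append(f"row {row_num}: key {key!r} is not defined in keys.mjs")
    return errors

if __name__ == "__main__":
    for problem in validate(*sys.argv[1:4]):
        print(problem)
```

Something along these lines would catch a bad "task" or key string at conversion time instead of when the test runs.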

jongund commented Mar 4, 2020

@spectranaut

Thank you for the feedback.
Valerie, I will work on these changes.

@spectranaut spectranaut self-assigned this Mar 11, 2020
@zcorpan zcorpan added the tests About assistive technology tests label Mar 11, 2020
@mcking65 mcking65 added the enhancement New feature or request label Aug 12, 2020