Regression Tests for APG Example Pages

Valerie Young edited this page Oct 4, 2018 · 22 revisions

The APG regression tests test the example implementations of the WAI-ARIA Authoring Practices Guide design patterns and widgets found in this repository. They are not generalizable tests of the APG design patterns or widgets, because each one tests only one of many possible implementations of a pattern or widget.

The example implementations can be found under the "Example" heading for each design pattern or widget. For example, the Checkbox design pattern description links to two example implementations of the Checkbox pattern under its "Example" heading.

The regression tests test the attributes and keyboard interactions as they are described on the example pages linked above. On each example page you will see tables with names similar to:

  • "Keyboard Support"
  • "Role, Property, State, and Tabindex Attributes"

Each row of these tables corresponds to a regression test. The table rows carry a data-test-id attribute, whose value is referenced in the name of the corresponding regression test. If a test fails, it indicates that the example widget's implementation does not match the description in these tables. Be sure to check, however, that the documentation itself does not contain any mistakes or bugs as well!

Understanding regression test results

Test names

All test names have the format:

<test file name> › <example file location> [data-test-id="<string>"]: <test description>

For example, the test name for the test of the space key behavior for the checkbox/checkbox-1/checkbox-1.html example looks like:

checkbox-1 › checkbox/checkbox-1/checkbox-1.html [data-test-id="key-space"]: key SPACE selects or unselects checkbox

To read more about what this test is testing, look for the "space" row in the "Keyboard Support" table in the example page.
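The name format can be sketched as a small name builder (an illustrative helper, not part of the test framework):

```javascript
// Illustrative helper that assembles a test name from the parts
// described above: test file name, example file location,
// data-test-id, and short description.
function testName(testFile, examplePath, testId, description) {
  return `${testFile} › ${examplePath} [data-test-id="${testId}"]: ${description}`;
}

// Reproduces the checkbox example name shown above:
testName(
  'checkbox-1',
  'checkbox/checkbox-1/checkbox-1.html',
  'key-space',
  'key SPACE selects or unselects checkbox'
);
```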

Test results

The test results are reported using Ava's default test format (see "Running tests" below to switch to TAP output).

When a test passes, the test name is prefixed with a ✔ character.

When a test fails, the test name is prefixed with a ✖ character. The error message is appended to the end of the test name, and a detailed report of all failures is listed after all of the tests have run.

When a test is expected to fail, the test name is still prefixed with a ✔ character, but the name is colored red (when color output is possible). After all tests run, a summary of "expected to fail" tests is listed before the details about unexpected failures (in the example below, only feed/feed.html [data-test-id="key-control-home"]: key home moves focus out of feed was expected to fail and failed). If a test passes when it is expected to fail, the output treats it as a failed test with the error message: "Test was expected to fail, but succeeded, you should stop marking the test as failing."

Example output:

  ✔ feed/feed.html [data-test-id="feed-role"]: role="feed" exists (5.5s)
  ✔ feed/feed.html [data-test-id="feed-aria-busy"]: aria-busy attribute on feed element (2.1s)
  ✔ feed/feed.html [data-test-id="article-role"]: role="article" exists (2.2s)
  ✔ feed/feed.html [data-test-id="article-tabindex"]: tabindex="-1" on article elements (2.2s)
  ✔ feed/feed.html [data-test-id="article-labelledby"]: aria-labelledby set on article elements (2.2s)
  ✔ feed/feed.html [data-test-id="article-describedby"]: aria-describedby set on article elements (2.5s)
  ✔ feed/feed.html [data-test-id="article-aria-posinset"]: aria-posinset on article element (4.3s)
  ✖ feed/feed.html [data-test-id="article-aria-setsize"]: aria-setsize on article element Article number 1 does not have aria-setsize set correctly, after first load.
  ✔ feed/feed.html [data-test-id="key-page-down"]: PAGE DOWN moves focus between articles (2.1s)
  ✔ feed/feed.html [data-test-id="key-page-up"]: PAGE UP moves focus between articles (2.1s)
  ✔ feed/feed.html [data-test-id="key-control-end"]: CONTROL+END moves focus out of feed (2.1s)
  ✖ feed/feed.html [data-test-id="feed-aria-labelledby"]: aria-labelledby attribute on feed element Test was expected to fail, but succeeded, you should stop marking the test as failing
  ✔ feed/feed.html [data-test-id="key-control-home"]: key home moves focus out of feed

  2 tests failed
  1 known failure

  feed/feed.html [data-test-id="key-control-home"]: key home moves focus out of feed

  feed/feed.html [data-test-id="article-aria-setsize"]: aria-setsize on article element


   156:   for (let index = 1; index <= numArticles; index++) {       
   158:       await articles[index - 1].getAttribute('aria-setsize'),

  Article number 1 does not have aria-setsize set correctly, after first load.


  - '10'
  + '20'

  feed/feed.html [data-test-id="feed-aria-labelledby"]: aria-labelledby attribute on feed element

  Test was expected to fail, but succeeded, you should stop marking the test as failing

Running Regression Tests Locally

Environment Set Up

  1. Download Firefox (for compatibility with Selenium WebDriver, use version 55 or later)
  2. Clone the APG git repository
  3. In the root directory, run: npm install

Running tests

To run all tests, run:

$ npm run regression

To run a specific test, match on the name:

$ npm run regression -- --match *treeview-1a*

By default, Ava opens Firefox browser windows to run the tests. You can watch the tests happen, and add waits to slow a test down while debugging (delays can be added by editing the asynchronous test function bodies to include await new Promise((resolve) => setTimeout(resolve, 1000)) at appropriate points). If you prefer the tests to run headless (meaning the tests will not open a new browser window for every example page), set the CI environment variable.
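A minimal sketch of the debugging delay described above (the helper name is ours; the inline one-liner works just as well):

```javascript
// Pause an async test body for `ms` milliseconds so you can watch the
// browser window while debugging. Equivalent to the inline
// `await new Promise((resolve) => setTimeout(resolve, ms))`.
function pause(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Usage inside a test body (names illustrative):
//   await redSlider.sendKeys(Key.ARROW_RIGHT);
//   await pause(1000); // inspect the browser state here
```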

To run in firefox headless mode:

$ CI=1 npm run regression 

To run using TAP format for test results instead of the default Ava testing format, specify the --tap option:

$ npm run regression -- --tap

Test coverage report

To run the test coverage report:

$ npm run regression-report

The test coverage report will report three things:

  1. Examples without regression tests: Any example pages which have no associated test file under the test/tests/ directory.
  2. Examples missing regression tests: Any example pages which have an associated test file but some tests are missing (there is not a test for every data-test-id found on the example page). The missing data-test-ids will be listed. Only missing tests in this category produce a non-zero exit code and a failure in the CI.
  3. Examples documentation table rows without data-test-ids: Any example pages that have rows in the "Keyboard Support" or "Role, Property, State, and Tabindex Attributes" table WITHOUT a data-test-id attribute.
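The exit-status rule above can be sketched as follows (assumed logic for illustration; the real report script's code is not shown here):

```javascript
// Only category 2, "examples missing regression tests", produces a
// non-zero exit code. Categories 1 and 3 are reported but do not
// fail CI.
function reportExitCode(report) {
  const { examplesMissingTests } = report; // array of missing data-test-ids
  return examplesMissingTests.length > 0 ? 1 : 0;
}
```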

The output of the report:

Examples without regression tests:


Examples missing regression tests:


Examples documentation table rows without data-test-ids:

    "Keyboard Support" table(s):
       Shift + Tab
       Command + S
       Control + S
    "Attributes" table(s):


  41 example pages found.
  2 example pages have no regression tests.
  1 example pages are missing approximately 2 out of approximately 572 tests.


  Please write missing tests for this report to pass.

Configuration of Regression Report

Example pages: ignoring table rows with data-test-id="test-not-required"

If there is a table row in a "Keyboard Support" or "Role, Property, State, and Tabindex Attributes" table that should NOT have a corresponding test, then use the data-test-id value test-not-required. For example, this value is used in the alert example because the attribute aria-live="assertive" is implicit on an element with role="alert". Because it is implicit, it is not testable.

Example pages: ignoring files or folders

The report script looks for .html files in the example directory in order to find the example pages. If a .html file in the example directory is not an example page, you can ignore it by adding the portion of its file path after the example directory to the following file:

  • test/util/report_files/ignore_html_files

If you would prefer to ignore a whole directory (such as the landmarks directory), you can add the portion of its path after the example directory to the following file:

  • test/util/report_files/ignore_test_directories

Example pages: finding the documentation tables

The "Keyboard Support" and "Role, Property, State, and Tabindex Attributes" tables are discovered by looking for the def class or attributes class on a table element. These classes are currently used only for these documentation tables; if this changes, the report will produce erroneous results.
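The discovery rule can be sketched as a tiny class matcher (illustrative; the report script's actual code is not shown here):

```javascript
// A <table> element is treated as a documentation table when its
// class attribute contains "def" (Keyboard Support) or "attributes"
// (Role, Property, State, and Tabindex Attributes).
function isDocumentationTable(classAttr) {
  const classes = (classAttr || '').trim().split(/\s+/);
  return classes.includes('def') || classes.includes('attributes');
}
```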

Test existence

Because the test names are compiled at runtime, they are gathered by running npm run regression with the environment variable REGRESSION_COVERAGE_REPORT set. In this mode, all test files run without starting geckodriver or Firefox, and every test fails immediately instead of running the body argument of ariaTest.
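The coverage-run behavior can be sketched like this (assumed logic, not the repository's actual implementation):

```javascript
// When REGRESSION_COVERAGE_REPORT is set, test bodies are not run:
// the runner only needs the compiled test names, so each body "fails"
// immediately instead of starting geckodriver and Firefox.
function shouldRunBody(env) {
  return !env.REGRESSION_COVERAGE_REPORT;
}

function runTest(env, body) {
  if (!shouldRunBody(env)) {
    return { ran: false, failed: true }; // name collected, body skipped
  }
  return { ran: true, failed: false, result: body() };
}
```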

Writing Tests


The tests use the Ava API for assertions and the Selenium WebDriverJS API to drive the browser. Geckodriver is the technology that receives and acts upon instructions from Selenium, controlling Firefox.

Example test

  ariaTest(
    'Right arrow increases slider value by 1',       // Short description
    'slider/slider-1.html',                          // Example file
    'key-right-arrow',                               // data-test-id
    async (t) => {                                   // Declarative test

      t.plan(1);                                     // Number of assertions

      const sliders = await t.context.session.findElements(By.css(ex.sliderSelector));

      // Send 1 key to red slider
      const redSlider = sliders[0];
      await redSlider.sendKeys(Key.ARROW_RIGHT);

      t.is(
        await redSlider.getAttribute('aria-valuenow'),
        '1',
        'After sending 1 arrow right key to the red slider, the value of the red slider should be 1'
      );
    }
  );

  • We use Ava as the test framework and for assertions.
  • We wrap the Ava test call with our own ariaTest.
    • Before running the test body, ariaTest will:
      • Navigate to the example page
      • Verify the referenced data-test-id exists in the example page
  • Start each test body with a t.plan() call.
  • Access the assertion API through t, the Ava execution object, for example:

    t.is(
      await redSlider.getAttribute('aria-valuenow'),
      '1',
      'After sending 1 arrow right key to the red slider, the value of the red slider should be 1'
    );


The ariaTest API:

 const ariaTest = function (desc, page, testId, body)

 Declare a test for a behavior documented on and demonstrated by an
 aria-practices examples page.

 @param {String} desc - short description of the test
 @param {String} page - path to the example file
 @param {String} testId - unique identifier for the documented behavior
                          within the demonstration page. See 
                          attribute `data-test-id`.
 @param {Function} body - script which implements the test


The page argument is the path to the example page after the example directory. ariaTest will build the full location of this file, assign it to t.context.url, and load it into the browser session.


The testId argument: in the example page, each tr element in the "Keyboard Support" and "Role, Property, State, and Tabindex Attributes" tables should have a data-test-id attribute corresponding to this testId value. The test will error if the data-test-id cannot be found. The rows with that data-test-id contain the documentation for the test implemented in the ariaTest function.

One or more ariaTest calls in a single file may refer to the same data-test-id.


The body function MUST accept one argument: t, the Ava execution object that contains the Ava test API. It also contains the Selenium session for the test file under t.context.session, which is an object of class WebDriver (see the WebDriver class documentation).


  • We use SeleniumJS to interact with and query the state of the web browser.
  • Selenium talks to Firefox via geckodriver.
    • We use Firefox because it has a headless option we can enable when running in CI.

Selenium API

WebDriver Object

Our test framework starts a single session for each example widget and then runs a series of tests. The session is a WebDriver object and can be reached in the individual tests via the Ava execution object: t.context.session. You use the WebDriver object to interact with the browser in various ways.


Selenium can also be used to send scripts to execute within the browser.

let attributeExists = await t.context.session.executeScript(
  function () {
    const [selector, attribute] = arguments;
    let el = document.querySelector(selector);
    return el.hasAttribute(attribute);
  },
  selector,   // forwarded into the script's `arguments` (defined earlier in the test)
  attribute
);

Or, to avoid race conditions, use .wait:

await t.context.session.wait(
  async function () {
    let newfocus = await t.context.session
      .findElement(By.css(ex.comboboxSelector))   // selector illustrative
      .getAttribute('aria-activedescendant');
    return newfocus != originalFocus;
  },
  t.context.waitTime,                             // timeout in milliseconds
  'Timeout waiting for "aria-activedescendant" value to change from: ' + originalFocus
);

WebElement Object

The WebElement class is used by Selenium to represent DOM elements. You can send clicks or keys to WebElements and query them for information.

const textboxElement = await t.context.session.findElement(By.css(ex.textboxSelector));
const listboxElement = await t.context.session.findElement(By.css(ex.listboxSelector));

await textboxElement.sendKeys(Key.ARROW_DOWN);

t.true(
  await listboxElement.isDisplayed(),
  'Listbox should display after ARROW_DOWN keypress'
);

Key Object

const { Key } = require('selenium-webdriver');
await element.sendKeys(Key.ARROW_DOWN);
await element.sendKeys(Key.chord(Key.CONTROL, Key.HOME));

By Object

const { By } = require('selenium-webdriver');
const button = await t.context.session.findElement(By.css('[role="button"]'));
const example = await t.context.session.findElement(By.id('ex1'));

Test files

Each test file under test/tests/ corresponds to a single example page. Each test file contains multiple tests, each wrapped in an ariaTest call. The body argument of ariaTest is a declarative test of a behavior described in the example.

Writing declarative tests

  • All ariaTest test bodies should begin with t.plan(n), where n is the number of assertions in the test. Although tedious to count when writing the test, this "plan" helps prevent false positives and gives you confidence that every assertion you wrote was executed and passed. Read about Ava assertion planning here.
  • Tests should be well commented. The order of interactions with the browser should be clear when reading the test, in case it fails and a contributor needs to reproduce the error.
  • All Ava assertions should include clear error messages that can double as documentation of the test.
  • All tests are asynchronous. You might need to use the Selenium wait wrapper (t.context.session.wait) to wait for a DOM change before making an assertion, as an assertion may run faster than the DOM updates. Read more about asynchronous browser testing here.

Is your test failing from a bug in the example?

Let's say you are writing tests for the aria 1.0 combobox with auto-complete example. After writing a test for "Up Arrow" in the Listbox popup table, you notice that the aria-activedescendant attribute fails to update properly. Visually, the focus moves from the first option to the last option after sending the "Up Arrow" key, but your test fails because it tests for the appropriate change in the aria-activedescendant value.

What do you do? You don't want to check in a failing test -- if you did, you would cause the CI to fail for every subsequent pull request on the repo. So instead:

  1. Report the bug. For an example, see this report of the bug described above.

  2. Use Ava's expected failing functionality. Call ariaTest.failing instead of ariaTest, and make sure to leave a comment with the issue describing the bug. The test will be run on each subsequent run of Ava, but if it fails as expected the result will be ignored. If the test suddenly passes, this will be reported as a failure with the message: Test was expected to fail, but succeeded, you should stop marking the test as failing.

// This test fails due to bug:
ariaTest.failing('Test up key press with focus on listbox', exampleFile, 'listbox-key-up-arrow', async (t) => {
  // ...
});

Test page global constants


The exampleFile constant refers to the example page that will be tested by this test file.


All HTML selectors and example-page-specific structure should be contained in the ex global object declared at the top of the file. This eases the work of future maintainers: if the HTML structure of an example page changes, updating the tests should mostly mean updating the ex global object.

Other global helper functions

Functions that will ease the tests of this particular example page. These functions assume the HTML structure of the specific page and may need to be updated if the example HTML is changed.


Tests all rely on the following imports:

const { ariaTest } = require('..');
const { By, Key } = require('selenium-webdriver');

As well as any number of helper functions from the test/util directory.

test/util/ assertions

All utility assertion functions count as one assertion for t.plan() calculations. See the files under test/util for further documentation.

  • assertAriaActivedescendant (t, ariaDescendantSelector, optionsSelector, index)
  • assertAriaControls (t, elementSelector)
  • assertAriaDescribedby (t, elementSelector)
  • assertAriaLabelExists (t, elementSelector)
  • assertAriaLabelledby (t, elementSelector)
  • assertAriaRoles (t, exampleId, role, roleCount, elementTag)
  • assertAriaSelectedAndActivedescendant (t, ariaDescendantSelector, optionsSelector, index)
  • assertTabOrder (t, tabOrderSelectors)
  • assertRovingTabindex (t, elementsSelector, key)
  • assertAttributeDNE (t, selector, attribute)
  • assertAttributeValues (t, elementSelector, attribute, value)

Writing test/util/ assertions

  • Assertion util functions should include only one call to t.pass(), so that the call counts as one assertion in the calculation of t.plan().
  • Use the native Node assert library for any additional checks. All requirements under "Writing declarative tests" above apply here as well.