a11y-dashboard-connector

Run competitive Accessibility and Quality checks and send them to a Graphite DB

Quickstart for local development

Run yarn install and yarn start. The tests will run and print some results to your console.

Next, go ahead and play with the scripts. Output is written to the console and into the reports folder. Whatever crazy thing you invent – if you are inside the ZON network, your stats will instantly be available at http://sitespeed.zeit.de/dashboard/db/accessibility-dashboard?refresh=5m&orgId=1 .

High level overview

  • Run with yarn start, which only does node index.js.
  • index.js only iterates over a list of URLs and starts "checks" with them.
  • Each check is a module inside the checks folder.
  • A check runs one tool (e.g. html-validator, pa11y) for each given URL and handles the result on its own. "Handling" can mean anything, but in most cases it means:
    • Filtering and naming the results (e.g. counting errors of a certain type).
    • Storing the result in a folder for manual inspection.
    • Sending the result to Graphite. For this, an object is built that represents the namespace where the metrics will be found in Graphite/Grafana (e.g. htmlvalidator.zeit-de.homepage.stats.errorCount).
  • Storing and sending is done by helper functions, which live in the shared "utils" folder. A minimal sketch of this structure follows below this list.
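
To make this concrete, here is a minimal sketch of what a check module could look like, loosely modelled on the html-validator check. It assumes html-validator's promise API with format: 'json'; the metric key and the returned object are only illustrative, and the real checks delegate storing and sending to the utils helpers.

// Sketch of a check module's typical shape (illustrative, not the actual code).
// Assumes html-validator resolves to an object with a messages array when
// called with format: 'json'.
const validator = require('html-validator')

exports = module.exports = {}

exports.run = function run (siteName, siteType, url) {
	return validator({ url: url, format: 'json' })
	.then((results) => {
		// Filter and name the results, e.g. count messages of type "error".
		const errorCount = results.messages.filter((m) => m.type === 'error').length

		// The real check would now store the raw result for manual inspection and
		// send this object to Graphite via the shared utils helpers, so the metric
		// shows up under e.g. htmlvalidator.zeit-de.homepage.stats.errorCount.
		return {
			[`htmlvalidator.${siteName}.${siteType}.stats.errorCount`]: errorCount
		}
	})
	.catch((error) => {
		console.error(error)
	})
}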

Run & Deploy

Deployment Prerequisites

  • Install the gcloud SDK
  • Init the gcloud SDK
  • Install kubectl
  • Connect to the K8s cluster
    • To do so, run gcloud container clusters get-credentials zon-misc-prod-1 --zone europe-west3-a --project zeitonline-gke-misc-prod in your command line
  • Set the kubectl context
    • To do so, run kubectl config set-context a11y-connector-production --cluster=gke_zeitonline-gke-misc-prod_europe-west3-a_zon-misc-prod-1 --user=gke_zeitonline-gke-misc-prod_europe-west3-a_zon-misc-prod-1 --namespace=a11y-connector in your command line
  • Use the context
    • Run kubectl config use-context a11y-connector-production in your command line

Build Docker Image and Deploy to K8s

  • make build – generates a new Docker image with the current revision
  • make test – runs the most recent Docker image based on the revision
  • make k8s – deploys the most recent Docker image based on the revision to the Kubernetes cluster as a CronJob

The CronJob is accessible through the Kubernetes dashboard (to view logs etc.).

Checks

  • Pa11y (GitHub, npm)
  • Webcoach (GitHub, npm) – currently disabled because Puppeteer/the browser is not working.
  • HTML-Validator (GitHub, npm)

Hands-on: creating a new check

Step-by-step protocol of creating a new metric on our dashboard.

First, I install the npm module I want to use and copy an existing check.

yarn add cssstats
cp -r ./src/checks/html-validator ./src/checks/cssstats

Then I reduce the check to its simplest form, just to inspect its output – even if you probably already know what you want from the module.

// ./src/checks/cssstats/check.js
// Copied from the html-validator check and stripped down: run the tool,
// log the raw result, and return an empty metrics object for now.
const validator = require('cssstats') // variable name is a leftover from the copied check
exports = module.exports = {}
exports.run = function run (siteName, siteType, url) {
	return validator({
		url: url
	})
	.then((results) => {
		console.log(results)
		return {}
	})
	.catch((error) => {
		console.error(error)
	})
}

I use this new check inside the index.js file to start it.

// URLS is the map of sites to their URLs (keyed by page type)
// that index.js already iterates over
const cssstats = require('./src/checks/cssstats/check')
const site = 'zeit-de'
for (let type in URLS[site]) {
  const url = URLS[site][type]
  cssstats.run(site, type, url).then(() => {
    console.log(`Finished cssstats for ${url}`)
  })
}

I also commented out the existing checks to speed up testing and only checked the zeit.de sites.

Now you may run yarn start.
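
From here, the next step would be to replace the console.log with real handling: build a metrics object under a cssstats namespace and hand it to the shared utils helpers, just like the other checks do. A rough sketch of that mapping, assuming the cssstats result exposes counts such as rules.total and selectors.total – check the logged output for the actual field names:

// Sketch of how the cssstats result could be mapped onto Graphite metric names,
// following the namespace convention from the overview (tool.site.type.stat).
// The field names (rules.total, selectors.total, declarations.total) are
// assumptions; verify them against your own console output.
function toMetrics (siteName, siteType, results) {
	const prefix = `cssstats.${siteName}.${siteType}`
	return {
		[`${prefix}.rules`]: results.rules.total,
		[`${prefix}.selectors`]: results.selectors.total,
		[`${prefix}.declarations`]: results.declarations.total
	}
}

// Example: toMetrics('zeit-de', 'homepage', results)
// → { 'cssstats.zeit-de.homepage.rules': 123, … }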

To-do

Make this usable for others

  • The list of URLs should come from a config file (a possible shape is sketched below)
  • Command line interface (params: which sites, Graphite URL)
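
Purely as a suggestion, such a config file could look roughly like this. Neither the file nor its keys exist yet, and all values are placeholders.

// config.js – hypothetical file, does not exist yet. A possible shape for
// making the URL list and the Graphite target configurable.
module.exports = {
	graphite: {
		host: 'graphite.example.com', // placeholder host
		port: 2003                    // standard Graphite plaintext port
	},
	urls: {
		'zeit-de': {
			homepage: 'https://www.zeit.de/'
		}
	}
}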

Feature Ideas

  • Make the test result/console output available, if possible. We want to know instantly what "n HTML errors on page X" means.
  • Use axe for analysis: it reports good, parseable advice (Violation of "color-contrast" with 108 occurrences!)
  • Exclude elements like ads (look for "hide elements" in https://bitsofco.de/pa11y/) ... if this makes sense. Maybe as an extra report: issues with and without ads? A hedged sketch of this idea follows below.
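
A rough sketch of the with/without-ads idea, assuming pa11y 5's promise API and its hideElements option; the ad selectors are made-up placeholders for whatever the actual ad containers are.

// Sketch only: run pa11y twice per URL, once as-is and once with (assumed)
// ad containers hidden via pa11y's hideElements option, to compare issue counts.
// The '.ad, .advertisement' selectors are placeholders.
const pa11y = require('pa11y')

function compareWithAndWithoutAds (url) {
	return Promise.all([
		pa11y(url),
		pa11y(url, { hideElements: '.ad, .advertisement' })
	]).then(([withAds, withoutAds]) => ({
		url: url,
		issuesWithAds: withAds.issues.length,
		issuesWithoutAds: withoutAds.issues.length
	}))
}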