Test specs

Marcel Duran edited this page Mar 18, 2014 · 11 revisions

Test specs allow any test result, whether fetched directly with the results command or produced by a sync test with the --poll or --wait options, to be asserted by comparing the actual values against expected values defined in a spec JSON string or file.

JSON Test Specs

Specs follow the Test Results JSON output format: the response.data tree returned by the results command is traversed, and every leaf that matches a leaf in the spec definition is compared. For example, given the spec file testspecs.json:


{
  "median": {
    "firstView": {
      "requests": 20,
      "render": 400,
      "loadTime": 3000,
      "score_gzip": {
        "min": 90
      }
    }
  }
}

By running:

$ webpagetest test http://staging.example.com --first --poll --specs testspecs.json

After the test completes and returns the following results:

{
  "response": {
    "data": {
      "median": {
        "firstView": {
          "requests": 15,
          "render": 500,
          "loadTime": 2500,
          "score_gzip": 70
        }
      }
    }
  }
}

It is compared to testspecs.json and the output would be:

    ✓ median.firstView.requests: 15 should be less than 20 
    1) median.firstView.render: 500 should be less than 400
    ✓ median.firstView.loadTime: 2500 should be less than 3000 
    2) median.firstView.score_gzip: 70 should be greater than 90

  2 passing (3 ms)
  2 failing

With an exit status equal to the number of failed assertions:

$ echo $?
2

By default, all comparisons use the < (lower than) operation. When the expected value is an object with min and/or max properties, the > (greater than) and < (lower than) operations are used instead; when both min and max are given, a range comparison is performed.
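The rules above can be sketched as a small function. This is an illustrative sketch, not the wrapper's actual implementation; the function name is hypothetical.

```javascript
// Hypothetical sketch of how a single spec leaf could be asserted
// against an actual result value. A plain number defaults to a
// "<" (lower than) check; an object with "min"/"max" becomes a
// ">" (greater than), "<" (lower than), or range check.
function checkLeaf(actual, expected) {
  if (typeof expected === 'number') {
    return actual < expected;                      // default: lower than
  }
  var ok = true;
  if (expected.min !== undefined) ok = ok && actual > expected.min; // greater than
  if (expected.max !== undefined) ok = ok && actual < expected.max; // lower than
  return ok;
}

console.log(checkLeaf(500, 400));                  // render: 500 < 400 → false
console.log(checkLeaf(2500, 3000));                // loadTime: 2500 < 3000 → true
console.log(checkLeaf(70, { min: 90 }));           // score_gzip: 70 > 90 → false
console.log(checkLeaf(15, { min: 10, max: 30 }));  // range: 10 < 15 < 30 → true
```

This matches the pass/fail pattern in the sample output above: render and score_gzip fail, loadTime and the range check pass.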


Lower than comparison

{ "median": { "firstView": {
  "render": 400
} } }

or, equivalently:

{ "median": { "firstView": {
  "render": { "max": 400 }
} } }

Greater than comparison

{ "median": { "firstView": {
  "score_gzip": { "min": 75 }
} } }

Range comparison

{ "median": { "firstView": {
  "requests": { "min": 10, "max": 30 }
} } }


Default operations and label templates can optionally be defined in a defaults property of the specs JSON file:

{
  "defaults": {
    "suiteName": "Performance Test Suite for example.com",
    "text": ": {actual} should be {operation} {expected} for {metric}",
    "operation": ">"
  },
  "median": { "firstView": {
    "score_gzip": 80,
    "score_keep-alive": 80
  } }
}

The suite name and spec texts are used in the test output, and both scores must be greater than 80:

Performance Test Suite for example.com
    1) 70 should be greater than 80 for median.firstView.score_gzip
    ✓ 100 should be greater than 80 for median.firstView.score_keep-alive

  1 passing (3 ms)
  1 failing

If the defaults property is omitted, the following values are used:

{
  "defaults": {
    "suiteName": "WebPageTest",
    "text": "{metric}: {actual} should be {operation} {expected}",
    "operation": "<"
  }
}

Text template tags

  • {metric}: metric name, e.g.: median.firstView.loadTime
  • {actual}: the value returned from the actual test results, e.g.: 300
  • {operation}: the long operation name, e.g.: lower than
  • {expected}: the defined expected value, e.g.: 200


Operation names:

  • < → lower than
  • > → greater than
  • <> → greater than and lower than (range)
  • = → equal to
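Rendering a label from these tags can be sketched as follows. The tag and operation names come from this page; the function itself is illustrative, not the wrapper's actual code.

```javascript
// Map of operation symbols to their long names, as listed above.
var OPERATION_NAMES = {
  '<': 'lower than',
  '>': 'greater than',
  '<>': 'greater than and lower than',
  '=': 'equal to'
};

// Hypothetical helper: fill the {metric}/{actual}/{operation}/{expected}
// tags in a text template with concrete values.
function renderLabel(template, values) {
  return template.replace(/\{(metric|actual|operation|expected)\}/g, function (m, tag) {
    return tag === 'operation' ? OPERATION_NAMES[values.operation] : values[tag];
  });
}

console.log(renderLabel('{metric}: {actual} should be {operation} {expected}', {
  metric: 'median.firstView.loadTime',
  actual: 2500,
  operation: '<',
  expected: 3000
}));
// median.firstView.loadTime: 2500 should be lower than 3000
```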

Single overrides

Overriding individual spec labels is also possible by providing a text property in the spec object:

{ "median": { "firstView": {
  "loadTime": {
    "text": "page load time took {actual}ms and should be no more than {expected}ms",
    "max": 3000
  }
} } }

Which outputs:

    ✓ page load time took 2500ms and should be no more than 3000ms

  1 passing (2 ms)


Reporters

WebPageTest API Wrapper test specs use Mocha to build and run the test suite. The following Mocha reporters are available:

  • dot (default)
  • spec
  • tap
  • xunit
  • list
  • progress
  • min
  • nyan
  • landing
  • json
  • doc
  • markdown
  • teamcity

Test Specs suggestions

By MIME type

With either a sync test or by simply fetching results with the --breakdown option, it is possible to test by MIME type:

{
  "median": {
    "firstView": {
      "breakdown": {
        "js": {
          "requests": 6,
          "bytes": 200000
        },
        "css": {
          "requests": 1,
          "bytes": 50000
        },
        "image": {
          "requests": 10,
          "bytes": 300000
        }
      }
    }
  }
}

The spec above allows at most 6 JS requests totaling up to 200KB, 1 CSS request up to 50KB, and no more than 10 images totaling up to 300KB.
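The {metric} names reported for such nested specs are dot-separated paths (e.g. median.firstView.breakdown.js.requests). A flattening step like the following illustrative helper (not part of the wrapper) shows how a nested spec maps to those paths; objects containing min or max are treated as comparison leaves rather than subtrees.

```javascript
// Hypothetical sketch: flatten a nested spec into dot-separated metric
// paths. Plain objects are traversed; numbers and {min,max} objects
// are kept as comparison leaves.
function flattenSpec(spec, prefix) {
  var leaves = {};
  Object.keys(spec).forEach(function (key) {
    var path = prefix ? prefix + '.' + key : key;
    var value = spec[key];
    if (typeof value === 'object' && value !== null &&
        !('min' in value) && !('max' in value)) {
      Object.assign(leaves, flattenSpec(value, path)); // recurse into subtree
    } else {
      leaves[path] = value;                            // comparison leaf
    }
  });
  return leaves;
}

var spec = { median: { firstView: { breakdown: {
  js: { requests: 6, bytes: 200000 },
  css: { requests: 1, bytes: 50000 }
} } } };

console.log(flattenSpec(spec));
// e.g. { 'median.firstView.breakdown.js.requests': 6, ... }
```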

By Processing Breakdown

When sync testing in Chrome with the --timeline option, it is possible to test by Processing Breakdown:

{
  "run": {
    "firstView": {
      "processing": {
        "RecalculateStyles": 1300,
        "Layout": 2000,
        "Paint": 800
      }
    }
  }
}

The spec above allows at most 1300ms of Recalculate Styles, 2000ms of Layout, and 800ms of Paint processing time. This helps catch rendering regressions once baselines for these metrics are known from previous test runs.

Jenkins Integration

Integration with Jenkins and other CI tools is seamless: use a sync test command with either --poll or --wait (if the Jenkins server is reachable from the private instance of the WebPageTest server), specify --specs with a file or JSON string, and use either tap or xunit as the --reporter.

Requirements:

  * TAP plugin installed from the Jenkins Plugin Manager

[Screenshots: tap configuration, junit configuration, tap report, junit report]

Travis-CI Integration

Similarly to the Jenkins integration, Travis-CI also requires a sync test command via the --poll option, since Travis-CI workers are unlikely to be reachable from private or public instances of WebPageTest servers. --specs is required to test the results, but --reporter matters less because Travis-CI relies on the exit status rather than the output format, as Jenkins does.


The following is an example of a WebPageTest performance test for a contrived Node project in a public GitHub repo. Add a test script to the package.json file:

{
  "name": "example",
  "version": "0.0.1",
  "dependencies": {
    "webpagetest": ""
  },
  "scripts": {
    "test": "./node_modules/webpagetest/bin/webpagetest
             test http://staging.example.com
             --server http://webpagetest.example.com
             --key $WPT_API_KEY
             --location MYVM:Chrome
             --first
             --poll
             --timeout 60
             --specs specs.json
             --reporter spec"
  }
}

* Line breaks in the test script were added for clarity; it must be a single line.

The test script above will:

  1. schedule a test on private instance of WebPageTest hosted on http://webpagetest.example.com which must be publicly reachable from Travis-CI workers
  2. use WebPageTest API Key from WPT_API_KEY (environment variable, see Security below)
  3. test http://staging.example.com which must be publicly reachable from WebPageTest agents
  4. run test for first view only
  5. run from location MYVM on Chrome browser
  6. poll results every 5 seconds (default)
  7. timeout in 60 seconds if no results are available
  8. test the results against specs.json spec file
  9. output using the spec reporter
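A minimal .travis.yml to drive the script above might look like the following sketch; the Node version and the encrypted string are placeholders for your own values (see Security below for generating the secure entry).

```yaml
# Illustrative .travis.yml sketch; values are placeholders.
language: node_js
node_js:
  - "0.10"
env:
  global:
    - secure: <encrypted WPT_API_KEY=... string>
# Travis-CI runs `npm test` by default for Node projects, which
# invokes the "test" script defined in package.json.
```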


Since tests will be scheduled from public Travis-CI workers, WebPageTest API keys (--key or -k) should be used to prevent abuse, but unencrypted API keys must never be placed in public files. Fortunately, Travis-CI provides Secure Environment Variables, which avoid exposing $WPT_API_KEY in the public .travis.yml file.

Encrypting WebPageTest API Key

Install the travis gem and go to the repo directory:

gem install travis
cd repo_dir

Next, encrypt the WebPageTest API Key as a secure environment variable:

travis encrypt WPT_API_KEY=super_secret_api_key_here --add

Note that this must be run from the repo directory, or use -r or --repo to specify the repo name in the user/repo format, e.g.: marcelduran/webpagetest-api.

By default, the --add flag above appends the encrypted string to the .travis.yml file under env.global:

env:
  global:
    - secure: <encrypted WPT_API_KEY=super_secret_api_key_here string>


[Screenshot: travis-ci report]

Live sample report


Drone.io Integration

Similar to, but a bit simpler than, the Travis-CI integration, Drone.io has a user-friendly interface that allows straightforward and secure configuration.

[Screenshots: drone.io configuration]

Live sample report