
lighthouse

Stops you crashing into the rocks; lights the way


status: prototype extension and CLI available for testing

Install Chrome extension

Requires Chrome version 52+

chrome.google.com/webstore/detail/lighthouse/blipmdconlkpinefehnmjammfjpmpbjk

Install CLI

Requires Node v5+ or Node v4 w/ --harmony

npm install -g GoogleChrome/lighthouse

Run

# Start Chrome with a few flags
npm explore -g lighthouse -- npm run chrome

# Kick off a lighthouse run
lighthouse https://airhorner.com/

# see flags and options
lighthouse --help

Develop

Setup

git clone https://github.com/GoogleChrome/lighthouse
cd lighthouse

# will be cleaner soon.
cd lighthouse-core
npm install

Custom run configuration

You can supply your own run configuration to customize which audits you want details on. Copy default.json and start customizing. Then provide it to the CLI with lighthouse --config-path=$PWD/myconfig.json <url>
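For example, a minimal config that restricts a run to two audits might look like this (these audit names are the same ones used in the trace-processing example later in this readme; run lighthouse --list-all-audits for the full list):

```json
{
  "audits": [
    "user-timings",
    "critical-request-chains"
  ]
}
```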

Trace processing

Lighthouse can be used to analyze trace and performance data collected from other tools (like WebPageTest and ChromeDriver). The traces and performanceLog artifact items can each be provided as a string containing the absolute path to the file on disk. The perf log is captured from the Network domain (à la ChromeDriver's enableNetwork option) and reformatted slightly. As an example, here's a trace-only run that reports on user timings and critical request chains:

config.json
{
  "audits": [
    "user-timings",
    "critical-request-chains"
  ],

  "artifacts": {
    "traces": {
      "defaultPass": "/User/me/lighthouse/lighthouse-core/test/fixtures/traces/trace-user-timings.json"
    },
    "performanceLog": "/User/me/lighthouse/lighthouse-core/test/fixtures/traces/perflog.json"
  },

  "aggregations": [{
    "name": "Performance Metrics",
    "description": "These encapsulate your app's performance.",
    "scored": false,
    "categorizable": false,
    "items": [{
      "criteria": {
        "user-timings": { "rawValue": 0, "weight": 1 },
        "critical-request-chains": { "rawValue": 0, "weight": 1}
      }
    }]
  }]
}

Then, run with: lighthouse --config-path=$PWD/config.json http://www.random.url

Lighthouse CLI options

$ lighthouse --help

lighthouse <url>

Logging:
  --verbose  Displays verbose logging                                                 [boolean]
  --quiet    Displays no progress or debug logs                                       [boolean]

Configuration:
  --mobile                 Emulates a Nexus 5X                                  [default: true]
  --load-page              Loads the page                                       [default: true]
  --save-assets            Save the trace contents & screenshots to disk              [boolean]
  --save-artifacts         Save all gathered artifacts to disk                        [boolean]
  --audit-whitelist        Comma separated list of audits to run               [default: "all"]
  --list-all-audits        Prints a list of all available audits and exits            [boolean]
  --list-trace-categories  Prints a list of all required trace categories and exits   [boolean]
  --config-path            The absolute path to the config JSON.

Output:
  --output       Reporter for the results
                         [choices: "pretty", "json", "html"]                [default: "pretty"]
  --output-path  The file path to output the results
                 Example: --output-path=./lighthouse-results.html           [default: "stdout"]

Options:
  --help     Show help                                                                [boolean]
  --version  Show version number                                                      [boolean]

Tests

Some basic unit tests are in /test and are run via mocha. eslint also checks for style violations.

# lint and test all files
npm test

# watch for file changes and run tests
#   Requires http://entrproject.org : brew install entr
npm run watch

# run linting and unit tests separately
npm run lint
npm run unit

Chrome Extension

The same audits can also be run from a Chrome extension. See ./extension.

Architecture

Some incomplete notes

Components

  • Driver - Interfaces with Chrome Debugging Protocol (API viewer)
  • Gatherers - Request data from the browser (and may post-process it)
  • Artifacts - The output of gatherers
  • Audits - Non-performance evaluations of capabilities and issues. Includes a raw value and score of that value.
  • Metrics - Performance metrics summarizing the UX
  • Diagnoses - The perf problems that affect those metrics
  • Aggregators - Pull audit results, group them into user-facing components (e.g. install_to_homescreen), and apply weighting and overall scoring.
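As a rough sketch of the weighting step (illustrative only; the function, audit scores, and weights below are invented, not Lighthouse's actual implementation):

```javascript
// Hypothetical illustration of how an aggregator might combine weighted
// audit scores into a single overall score.
function overallScore(results, criteria) {
  let total = 0;
  let totalWeight = 0;
  Object.keys(criteria).forEach(name => {
    const weight = criteria[name].weight;
    total += (results[name] || 0) * weight;
    totalWeight += weight;
  });
  return totalWeight === 0 ? 0 : total / totalWeight;
}

const score = overallScore(
  {'user-timings': 100, 'critical-request-chains': 50},
  {'user-timings': {weight: 1}, 'critical-request-chains': {weight: 1}}
);
console.log(score); // 75
```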
Internal module graph

To generate a graph of lighthouse-core module dependencies:

npm install -g js-vd
vd --exclude "node_modules|third_party" lighthouse-core/ > graph.html

Protocol

  • Interacting with Chrome: The Chrome protocol connection is maintained via chrome-remote-interface for the CLI and the chrome.debugger API when running in the Chrome extension.
  • Event binding & domains: Some domains must be enable()d so they issue events. Once enabled, they flush any events that represent state. As such, network events will only issue after the domain is enabled. All the protocol agents resolve their Domain.enable() callback after they have flushed any pending events. See example:
// will NOT work
driver.sendCommand('Security.enable').then(_ => {
	driver.on('Security.securityStateChanged', state => { /* ... */ });
})

// WILL work! happy happy. :)
driver.on('Security.securityStateChanged', state => { /* ... */ }); // event binding is synchronous
driver.sendCommand('Security.enable');

Gatherers

  • Reading the DOM: We prefer reading the DOM right from the browser (See #77). The driver exposes a querySelector method that can be used along with a getAttribute method to read values.
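A sketch of what that read looks like (the driver object here is a stand-in mock so the snippet runs standalone; only the querySelector and getAttribute method names come from the text above):

```javascript
// Mock driver standing in for Lighthouse's real one, purely for illustration.
const driver = {
  querySelector(selector) {
    // Pretend the page contains <meta name="theme-color" content="#2196F3">.
    return Promise.resolve({
      getAttribute(name) {
        return Promise.resolve(name === 'content' ? '#2196F3' : null);
      }
    });
  }
};

// A gatherer-style read: find a node, then read one of its attributes.
function gatherThemeColor(driver) {
  return driver.querySelector('meta[name="theme-color"]')
    .then(node => node ? node.getAttribute('content') : null);
}

gatherThemeColor(driver).then(color => console.log(color)); // logs '#2196F3'
```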

Audits

The return value of each audit takes this shape:

Promise.resolve({
  name: 'audit-name',
  tags: ['what have you'],
  description: 'whatnot',
  // value: The score. Typically a boolean, but can be a number in the range 0-100
  value: 0,
  // rawValue: Could be anything, as long as it can easily be stringified and displayed,
  //   e.g. 'your score is bad because you wrote ${rawValue}'
  rawValue: {},
  // debugString: Some *specific* error string for helping the user figure out why they failed here.
  //   The reporter can handle *general* feedback on how to fix, e.g. links to the docs
  debugString: 'Your manifest 404ed',
  // fault:  Optional argument when the audit doesn't cover whatever it is you're doing,
  //   e.g. we can't parse your particular corner case out of a trace yet.
  //   Whatever is in `rawValue` and `score` would be N/A in these cases
  fault: 'some reason the audit has failed you, Anakin'
});
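Putting that shape to use, a hypothetical audit might look like the following (the manifest check and the artifacts layout are invented for illustration, not a real Lighthouse audit):

```javascript
// Illustrative-only audit returning the shape described above.
class ManifestExistsAudit {
  static audit(artifacts) {
    const found = Boolean(artifacts.manifest);
    return Promise.resolve({
      name: 'manifest-exists',
      tags: ['manifest'],
      description: 'Site has a web app manifest',
      value: found,
      rawValue: artifacts.manifest || null,
      debugString: found ? undefined : 'No manifest was found'
    });
  }
}

ManifestExistsAudit.audit({manifest: {name: 'My App'}})
  .then(result => console.log(result.value)); // logs true
```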

Code Style

The .eslintrc defines all.

We're using JSDoc along with Closure annotations. Annotations are encouraged for all contributions.

const > let > var. Use const wherever possible. Save var for emergencies only.
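A trivial illustration of that preference:

```javascript
// const for bindings that never change; let only where reassignment is needed.
const MAX_RETRIES = 3;
let attempts = 0;
while (attempts < MAX_RETRIES) {
  attempts += 1; // reassignment is why `attempts` is a let
}
console.log(attempts); // 3
// var doesn't appear at all.
```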

Trace processing

The traceviewer-based trace processor from node-big-rig was forked into Lighthouse. DevTools' Timeline Model is also available. There may be advantages to using one model over the other.

To update traceviewer source:

cd lighthouse-core
# if not already there, clone catapult and copy license over
git clone --depth=1 https://github.com/catapult-project/catapult.git third_party/src/catapult
cp third_party/src/catapult/LICENSE third_party/traceviewer-js/
# pull for latest
git -C "./third_party/src/catapult/" pull
# run our conversion script
node scripts/build-traceviewer-module.js