diff --git a/README.md b/README.md index 7ebe3da..ceb893e 100644 --- a/README.md +++ b/README.md @@ -7,29 +7,16 @@ Google PageSpeed score command line toolkit Get a score and metrics via the Google PageSpeed Insights API or a local Lighthouse run. -- [Recommendations for using the score and metrics values](#recommendations-for-using-the-score-and-metrics-values) -- [Requirements](#requirements) -- [Usage](#usage) - * [`--strategy`- mobile or desktop](#--strategy) - * [`--runs`,`--warmup-runs` - multiple runs](#--runs--warmup-runs---multiple-runs) - * [`--local` - local mode](#--local---local-mode) - * [`--benchmark` - output CPU/memory benchmark](#--benchmark---output-cpumemory-benchmark) - * [` --ttfb` - output Time to First Byte](#---ttfb---output-time-to-first-byte) - * [`--usertiming-marks` - output user timing marks](#--usertiming-marks---output-user-timing-marks) - * [`--lantern-debug` - save metrics estimation traces](#--lantern-debug---save-metrics-estimation-traces) -- [Learn more about the score](#learn-more-about-the-score) - * [PageSpeed Insights score = Lighthouse score](#pagespeed-insights-score--lighthouse-score) - * [The 5 metrics that affect the score](#the-5-metrics-that-affect-the-score) - * [Not all metrics are weighted equally](#not-all-metrics-are-weighted-equally) - * [Metrics are estimated with a simulation (Lantern)](#metrics-are-estimated-with-a-simulation-lantern) - + - [Recommendations for using the score and metrics values](#recommendations-for-using-the-score-and-metrics-values) + - [Requirements](#requirements) + - [Usage](#usage) + - [`--strategy` - mobile or desktop](#--strategy---mobile-or-desktop) + - [`--runs` - multiple runs](#--runs---multiple-runs) + - [`--local` - local mode](#--local---local-mode) + - [`LANTERN_DEBUG=true` - save metrics estimation traces](#lantern_debugtrue---save-metrics-estimation-traces) ## Recommendations for using the score and metrics values -* **Use the score to look for longer-term trends and 
identify big changes**; but prefer your own analytics/field data for finer details -* **Individual metrics marked slow (red) usually highlight genuine problems**, even though actual values are not 100% accurate -* **Reduce variability by doing multiple runs, forcing A/B test variants, and other means** — but even with reduced variability, some inherent inaccuracies remain - Check out my blog post for more details: [What's in the Google PageSpeed score?](https://medium.com/expedia-group-tech/whats-in-the-google-pagespeed-score-a5fc93f91e91) ## Requirements @@ -55,19 +42,12 @@ Use `--help` to see the list of all options. `--strategy <strategy>` sets the Lighthouse strategy (default: mobile) ``` -$ npx pagespeed-score --strategy desktop --runs 3 https://www.google.com +$ npx pagespeed-score --strategy desktop https://www.google.com name score FCP FMP SI FCI TTI run 1 100 0.5 0.5 0.5 0.9 0.9 -run 2 100 0.5 0.5 0.5 0.8 0.9 -run 3 100 0.5 0.5 0.5 0.8 0.9 - -median 100 0.5 0.5 0.5 0.8 0.9 -stddev 0.0 0.0 0.0 0.0 0.1 0.0 -min 100 0.5 0.5 0.5 0.8 0.9 -max 100 0.5 0.5 0.5 0.9 0.9 ``` -### `--runs`,`--warmup-runs` - multiple runs +### `--runs` - multiple runs `--runs <n>` overrides the number of runs (default: 1). For more than one run, stats will be calculated. @@ -84,8 +64,6 @@ min 95 0.9 1.0 1.0 3.1 3.7 max 96 0.9 1.0 1.2 3.5 4.0 ``` -`--warmup-runs <n>` adds warmup runs that are excluded from stats (e.g. to allow CDN or other caches to warm up) - ### `--local` - local mode Switches to running Lighthouse locally instead of calling the PSI API. This can be useful for non-public URLs (e.g. a staging environment on a private network) or debugging. To ensure the local results are close to the PSI API results, this module: @@ -102,31 +80,14 @@ npx pagespeed-score --local "<url>" Local results will still differ from the PSI API because of local hardware and network variability. -### `--benchmark` - output CPU/memory benchmark - -Adds the benchmark index as a metric for each test run.
Lighthouse computes a memory/CPU performance [benchmark index](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/lib/page-functions.js#L128-L154) to determine a rough device class. Variability in this can help identify [Client Hardware Variability](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.km3f9ebrlnmi) or [Client Resource Contention](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.9gqujdsfrbou). These are less likely to occur with PSI, which uses a highly controlled lab environment, and can affect local Lighthouse runs more. - -### `--ttfb` - output Time to First Byte - -Adds TTFB as a metric for each test run. Please note that TTFB is not simulated. -### `--usertiming-marks` - output user timing marks +### `LANTERN_DEBUG=true` - save metrics estimation traces -`--usertiming-marks.<alias>=<mark>` adds any user timing mark named `<mark>` to your metrics with the name `<alias>` (e.g. `--usertiming-marks.DPA=datepicker.active`). Please note that user timing marks are not simulated. +Setting `LANTERN_DEBUG=true` along with `--save-assets --local` will save traces showing how metrics were simulated by Lantern. ``` -$ npx pagespeed-score --usertiming-marks.DPA=datepicker.active https://www.vrbo.com/vacation-rentals/usa -name score FCP FMP SI FCI TTI DPA -run 1 52 2.2 3.1 4.4 10.2 11.3 0.67 -``` - -### `--lantern-debug` - save metrics estimation traces - -`--lantern-debug --save-assets --local` will save traces for how metrics were simulated by Lantern. - -``` -$ npx pagespeed-score \ -> --local --lantern-debug --save-assets https://www.google.com +$ LANTERN_DEBUG=true npx pagespeed-score \ +> --local --save-assets https://www.google.com name score FCP FMP SI FCI TTI run 1 95 1.4 1.4 1.7 3.6 3.8 @@ -149,41 +110,3 @@ $ ls You can drag and drop these traces on the Chrome DevTools Performance tab.
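The removed `--usertiming-marks` option above mapped a named User Timing mark from the Lighthouse `user-timings` audit into the metrics row. A minimal self-contained sketch of that lookup, based on the deleted `lib/map-result.js` (the sample `audits` object here is invented for illustration):

```javascript
// Sketch of the deleted user-timing-mark extraction (lib/map-result.js).
// The `audits` shape mirrors a Lighthouse result; the sample data is invented.
const audits = {
  'user-timings': {
    details: {items: [{name: 'datepicker.active', startTime: 670}]},
  },
};

function markToSeconds(audits, markName) {
  const items = audits['user-timings'].details.items;
  const mark = items.find(item => item.name === markName);
  // Marks are reported in ms; convert to seconds with 2-digit precision.
  return mark ? parseFloat((mark.startTime / 1000).toFixed(2)) : NaN;
}

console.log(markToSeconds(audits, 'datepicker.active')); // 0.67
```

This matches the `DPA 0.67` column in the removed README example: a mark at 670 ms becomes `0.67` in the output row.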
See also [lighthouse#5844 Better visualization of Lantern simulation](https://github.com/GoogleChrome/lighthouse/issues/5844). - -## Learn more about the score - -Learn more by reading my blog post: [What's in the Google PageSpeed score?](https://medium.com/expedia-group-tech/whats-in-the-google-pagespeed-score-a5fc93f91e91) - -### PageSpeed Insights score = Lighthouse score - -The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) score is based on [Google Lighthouse](https://developers.google.com/web/tools/lighthouse/) run. - -### The 5 metrics that affect the score - -The [Lighthouse scoring documentation](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md) explains that the performance score is determined using the following estimated metrics: - -| Estimated Metric | Short Description | -|:----------------------------|-------------| -| First Contentful Paint (FCP)| when the first text or image content is painted | -| First Meaningful Paint (FMP)| when the primary content of a page is visible | -| Speed Index (SI) | how quickly the contents of a page are visibly populated | -| First CPU Idle (FCI) | when the main thread first becomes quiet enough to handle input | -| Time to Interactive (TTI) | when the main thread and network is quiet for at least 5s | - -**None of the other Lighthouse audits have a direct impact on the score**, but they do give hints on improving the metrics. To learn more about the metrics, check out my [awesome-web-performance-metrics](https://github.com/csabapalfi/awesome-web-performance-metrics) repo. - -### Not all metrics are weighted equally - -Lighthouse calculates a speed score for all 5 metrics based on their estimated values, then calculates a weighted average to get an aggregate speed score. 
The metric weights and fast/slow thresholds are available in the table below: - -| Estimated Metric | Weight | Fast | Slow | -|:----------------------------|:------:|:-----:|:-----:| -| First Contentful Paint (FCP)| 3 | <2.4s | >4.0s | -| First Meaningful Paint (FMP)| 1 | <2.4s | >4.0s | -| Speed Index (SI) | 4 | <3.4s | >5.8s | -| First CPU Idle (FCI) | 2 | <3.6s | >6.5s | -| Time to Interactive (TTI) | 5 | <3.8s | >7.3s | - -### Metrics are estimated with a simulation (Lantern) - -[Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md) is the part of Lighthouse that models page activity and simulates browser execution to estimate metrics. diff --git a/index.js b/index.js new file mode 100755 index 0000000..e133016 --- /dev/null +++ b/index.js @@ -0,0 +1,53 @@ +#!/usr/bin/env node + +const {parseArgs} = require('./lib/parse-args'); +const runPagespeed = require('./lib/run-pagespeed'); +const runLighthouse = require('./lib/run-lighthouse'); +const saveFiles = require('./lib/save-files'); +const sampleResult = require('./lib/sample-result'); +const {formatHeading, formatRow} = require('./lib/format'); +const stats = require('./lib/stats'); + +async function* main({url, runs, saveResults, strategy, local}) { + let run = 0; + while (run < runs) { + run++; + + const {result, artifacts} = await ( + local.enabled ? 
+ runLighthouse(local, url) : + runPagespeed(url, strategy) + ); + + await saveFiles(run, saveResults, local, result, artifacts); + + yield sampleResult(run, result); + } + +} + +if (require.main === module) { + const {argv} = process; + const options = parseArgs({argv}); + const samples = []; + + (async () => { + for await (const sample of main(options)) { + samples.push(sample); + + if(samples.length === 1) { + console.log(formatHeading(sample)) + } + + console.log(formatRow(sample)) + } + + if (samples.length > 1) { + ['', ...stats(samples).map(formatRow)] + .forEach(line => console.log(line)); + } + + })(); +} + +module.exports = main; diff --git a/lib/cli-options.js b/lib/cli-options.js deleted file mode 100644 index bbddb96..0000000 --- a/lib/cli-options.js +++ /dev/null @@ -1,120 +0,0 @@ -const yargs = require('yargs'); - -const positionalArgAt = - ({_: [,,...args]}, index) => args[index]; - -function check(argv) { - const url = positionalArgAt(argv, 0); - if (!url) { - throw new Error('URL is required'); - } - if (argv.lanternDebug && !argv.local) { - throw new Error('--lantern-debug only works with --local'); - } - return new URL(url); -} - -const options = { - 'runs': { - type: 'number', - describe: 'Number of runs', - group: 'Runs:', - default: 1, - }, - 'warmup-runs': { - type: 'number', - describe: 'Number of warmup runs', - group: 'Runs:', - default: 0, - }, - 'usertiming-marks': { - describe: 'User Timing marks', - group: 'Additional metrics:', - default: {}, - alias: 'metrics.userTimingMarks' - }, - 'ttfb': { - type: 'boolean', - describe: 'TTFB', - group: 'Additional metrics:', - default: false, - alias: 'metrics.ttfb', - }, - 'benchmark': { - type: 'boolean', - describe: 'Benchmark index', - group: 'Additional metrics:', - default: false, - alias: 'metrics.benchmark', - }, - 'jsonl': { - type: 'boolean', - describe: 'Output as JSON Lines', - group: 'Output:', - default: false, - alias: 'output.jsonl', - }, - 'save-assets': { - type: 'boolean', - 
describe: 'Save reports and traces', - group: 'Output:', - default: false, - alias: 'output.saveAssets', - }, - 'file-prefix': { - type: 'string', - describe: 'Saved asset file prefix', - group: 'Output:', - default: '', - alias: 'output.filePrefix', - }, - 'lantern-debug': { - type: 'boolean', - describe: 'Save Lantern traces', - group: 'Output:', - default: false, - alias: 'output.lanternDebug', - }, - 'local': { - type: 'boolean', - describe: 'Switch to local Lighthouse', - group: 'Lighthouse:', - alias: 'lighthouse.enabled', - default: false - }, - 'lighthouse-path': { - type: 'string', - describe: 'Lighthouse module path', - group: 'Lighthouse:', - default: 'lighthouse', - alias: 'lighthouse.modulePath', - }, - 'cpu-slowdown': { - type: 'number', - describe: 'CPU slowdown multiplier', - group: 'Lighthouse:', - default: 4, - alias: 'lighthouse.cpuSlowDown', - }, - 'strategy': { - type: 'string', - describe: 'Lighthouse strategy [mobile | desktop]', - group: 'Additional metrics:', - default: 'mobile', - }, -}; - -function parseArgs({argv}) { - const args = yargs(argv) - .usage('$0 ') - .check(check) - .options(options) - .parse(); - - const url = positionalArgAt(args, 0); - const {runs, warmupRuns, metrics, output, lighthouse, strategy} = args; - - return {url, runs, warmupRuns, metrics, output, lighthouse, strategy}; -} - -module.exports = {parseArgs, check}; diff --git a/lib/cli.js b/lib/cli.js deleted file mode 100644 index ece8733..0000000 --- a/lib/cli.js +++ /dev/null @@ -1,12 +0,0 @@ -#!/usr/bin/env node - -const main = require('./main.js'); -const {parseArgs} = require('./cli-options.js'); -const {argv} = process; - -(async () => { - for await (const lines of main(parseArgs({argv}))) { - /* eslint-disable-next-line no-console */ - lines.forEach(line => console.log(line)); - } -})(); \ No newline at end of file diff --git a/lib/counter.js b/lib/counter.js deleted file mode 100644 index d51060f..0000000 --- a/lib/counter.js +++ /dev/null @@ -1,26 +0,0 @@ 
-class Counter { - constructor(runs, warmupRuns) { - this.warmupRuns = warmupRuns; - this.maxRuns = runs + warmupRuns; - this.run = 0; - } - - next() { - const run = ++this.run; - return { - value: { - index: run - this.warmupRuns, - first: run === 1, - warmup: run <= this.warmupRuns, - last: run === this.maxRuns, - }, - done: run > this.maxRuns - }; - } - - [Symbol.iterator]() { - return this; - } -} - -module.exports = {Counter}; \ No newline at end of file diff --git a/lib/fetcher.js b/lib/fetcher.js deleted file mode 100644 index 411d92b..0000000 --- a/lib/fetcher.js +++ /dev/null @@ -1,27 +0,0 @@ -const {runPagespeed} = require('./run-pagespeed'); -const {runLighthouse} = require('./run-lighthouse'); -const {assetSaver} = require('./save-assets'); -const {resultMapper} = require('./map-result'); - -class Fetcher { - constructor(url, metrics, lighthouse, strategy, output) { - this.mapResult = resultMapper(metrics); - this.getLighthouseResult = lighthouse.enabled ? - runLighthouse.bind(null, lighthouse, url) : - runPagespeed.bind(null, url, strategy); - if (output.saveAssets) { - this.saveAssets = assetSaver(lighthouse.modulePath, output); - } - } - - async getResult(index) { - const {result, artifacts} = await this.getLighthouseResult(); - if (this.saveAssets) { - await this.saveAssets(index, result, artifacts); - } - return this.mapResult(index, result); - } -} - -module.exports = {Fetcher}; - diff --git a/lib/format.js b/lib/format.js new file mode 100644 index 0000000..92d6383 --- /dev/null +++ b/lib/format.js @@ -0,0 +1,38 @@ +const {keys, entries} = Object; + +const precision = (digits) => (n) => n.toFixed(digits); + +const formats = { + name: s => s.padEnd(6), + score: precision(0), + score_stddev: precision(1), + FCP: precision(1), + FMP: precision(1), + SI: precision(1), + FCI: precision(1), + TTI: precision(1), + default: precision(2), +}; + +function formatHeading(result) { + return keys(result) + .map((k, index) => index === 0 ? 
k.padEnd(6) : k) + .join('\t'); +} + +function formatRow(result) { + return entries(result) + .map( + ([k, v]) => ( + formats[`${k}_${result.name}`] || + formats[k] || + formats.default + )(v) + ) + .join('\t'); +} + +module.exports = { + formatHeading, + formatRow +}; diff --git a/lib/formatter.js b/lib/formatter.js deleted file mode 100644 index 1ec6019..0000000 --- a/lib/formatter.js +++ /dev/null @@ -1,70 +0,0 @@ -const {keys, entries} = Object; - -const precision = (digits) => (n) => n.toFixed(digits); - -const skip = ['type']; - -const formats = { - name: s => s.padEnd(6), - score: precision(0), - score_stddev: precision(1), - FCP: precision(1), - FMP: precision(1), - SI: precision(1), - FCI: precision(1), - TTI: precision(1), - TTFB: precision(2), - benchmark: precision(0), - default: precision(2), -}; - -function tableHeading(result) { - return keys(result) - .filter(k => !skip.includes(k)) - .map((k, index) => index === 0 ? k.padEnd(6) : k) - .join('\t'); -} - -function tsvFormatEntry(result) { - return entries(result) - .filter(([k]) => !skip.includes(k)) - .map( - ([k, v]) => ( - formats[`${k}_${result.name}`] || - formats[k] || - formats.default - )(v) - ) - .join('\t'); -} - -function result(entries, {first}) { - return [ - ...(first ? [tableHeading(entries[0])] : []), - ...entries.map(tsvFormatEntry), - ]; -} - -function statistic(entries) { - return ['', ...entries.map(tsvFormatEntry)]; -} - -function tsvFormat(entries, run) { - // same type of entries only in a single call! - const type = entries[0].type; - return type === 'result' ? - result(entries, run) : - statistic(entries); -} - -function jsonlFormat(entries) { - return entries.map(JSON.stringify); -} - -class Formatter { - constructor(jsonl) { - this.format = jsonl ? 
jsonlFormat : tsvFormat; - } -} - -module.exports = {Formatter, tsvFormat, jsonlFormat}; diff --git a/lib/main.js b/lib/main.js deleted file mode 100755 index a552cc8..0000000 --- a/lib/main.js +++ /dev/null @@ -1,17 +0,0 @@ -const {Counter} = require('./counter'); -const {Fetcher} = require('./fetcher'); -const {Formatter} = require('./formatter'); -const {Runner} = require('./runner'); - -function main({ - runs, warmupRuns, - url, metrics, lighthouse, - output, strategy, -}) { - const counter = new Counter(runs, warmupRuns); - const fetcher = new Fetcher(url, metrics, lighthouse, strategy, output); - const formatter = new Formatter(output.jsonl); - return new Runner(counter, fetcher, formatter); -} - -module.exports = main; diff --git a/lib/map-result.js b/lib/map-result.js deleted file mode 100644 index 90fbe0d..0000000 --- a/lib/map-result.js +++ /dev/null @@ -1,74 +0,0 @@ -const {round} = Math; -const {entries} = Object; - -const DEFAULT_METRICS = { - FCP: 'first-contentful-paint', - FMP: 'first-meaningful-paint', - SI: 'speed-index', - FCI: 'first-cpu-idle', - TTI: 'interactive', -}; - -const TTFB = { - TTFB: 'time-to-first-byte', -}; - -function formatScore(value) { - return parseInt(round(value * 100), 10); -} - -function formatMs(value) { - return parseFloat((parseFloat(value) / 1000).toFixed(2)); -} - -function formatSec(value) { - return parseFloat(parseFloat(value).toFixed(1)); -} - -function getAuditValue(audits, name) { - return audits[name]; -} - -function getMarkValue(audits, name) { - const audit = audits['user-timings']; - return audit.details.items.find(item => item.name === name); -} - -function mapAudits(audits, namesByKey, mapValue, getAudit = getAuditValue) { - const merge = (a, b) => ({...a, ...b}); - return entries(namesByKey) - .filter(([, name]) => getAudit(audits, name)) - .map(([key, name]) => ({ - [key]: mapValue(getAudit(audits, name)) - })) - .reduce(merge, {}); -} - -function mapMetricValue(audit) { - return 
formatSec(audit.displayValue); -} - -function mapTTFB(audit) { - return formatMs(audit.displayValue.match(/\d+/)[0]); -} - -function mapMarkValue(mark) { - const value = mark ? mark.startTime : NaN; - return formatMs(value); -} - -function resultMapper(metrics) { - const {userTimingMarks, ttfb, benchmark} = metrics; - - return (index, {categories, environment, audits}) => ({ - type: 'result', - name: `run ${index < 1 ? index - 1 : index}`, - score: formatScore(categories.performance.score), - ...mapAudits(audits, DEFAULT_METRICS, mapMetricValue), - ...ttfb && mapAudits(audits, TTFB, mapTTFB), - ...mapAudits(audits, userTimingMarks, mapMarkValue, getMarkValue), - ...benchmark && {benchmark: environment.benchmarkIndex}, - }); -} - -module.exports = {resultMapper, mapMarkValue, mapTTFB}; \ No newline at end of file diff --git a/lib/parse-args.js b/lib/parse-args.js new file mode 100644 index 0000000..a18fadf --- /dev/null +++ b/lib/parse-args.js @@ -0,0 +1,83 @@ +const yargs = require('yargs'); + +const options = { + 'runs': { + type: 'number', + describe: 'Number of runs', + group: 'Basic options:', + default: 1, + }, + 'strategy': { + type: 'string', + describe: 'Strategy (mobile or desktop)', + group: 'Basic options:', + default: 'mobile', + }, + 'save-results': { + type: 'boolean', + describe: 'Save result JSON to disk', + group: 'Basic options:', + default: false, + }, + 'local': { + type: 'boolean', + describe: 'Use local Lighthouse instead of PageSpeed API', + group: 'Basic options:', + default: false + }, + 'cpu-slowdown': { + type: 'number', + describe: 'CPU slowdown multiplier', + group: 'Local only options:', + default: 4, + }, + 'save-assets': { + type: 'boolean', + describe: 'Save the trace & devtools log to disk', + group: 'Local only options:', + default: false, + } +} + +const positionalArgAt = + ({_: [,,...args]}, index) => args[index]; + +function check(argv) { + const url = positionalArgAt(argv, 0); + if (!url) { + throw new Error('URL is 
required'); + } + + if ( + argv.cpuSlowdown && + argv.cpuSlowdown !== options['cpu-slowdown'].default && + !argv.local + ) { + throw new Error('--cpu-slowdown only works with --local'); + } + + if (argv.saveAssets && !argv.local) { + throw new Error('--save-assets only works with --local'); + } + + return new URL(url); +} + +const buildLocalConfig = + ({local, cpuSlowDown, saveAssets}) => + ({enabled: local, cpuSlowDown, saveAssets}); + +function parseArgs({argv}) { + const args = yargs(argv) + .usage('$0 ') + .check(check) + .options(options) + .parse(); + + const url = positionalArgAt(args, 0); + const {runs, strategy, saveResults} = args; + const local = buildLocalConfig(args); + return {url, runs, strategy, saveResults, local}; +} + +module.exports = {parseArgs, check}; diff --git a/lib/run-lighthouse.js b/lib/run-lighthouse.js index a45a5b3..d92da21 100644 --- a/lib/run-lighthouse.js +++ b/lib/run-lighthouse.js @@ -1,8 +1,9 @@ const {launch} = require('chrome-launcher'); +const lighthouse = require('lighthouse'); -const getConfig = (modulePath) => ({ +const config = { extends: - `${modulePath}/lighthouse-core/config/lr-mobile-config.js`, + `lighthouse/lighthouse-core/config/lr-mobile-config.js`, settings: { onlyAudits: [ 'first-contentful-paint', @@ -10,26 +11,19 @@ const getConfig = (modulePath) => ({ 'speed-index', 'first-cpu-idle', 'interactive', - 'time-to-first-byte', - 'user-timings', ] } -}); +} -async function runLighthouse({modulePath, cpuSlowDown}, url) { - const lighthouse = require(modulePath); +module.exports = async ({cpuSlowDown}, url) => { const throttling = {cpuSlowdownMultiplier: cpuSlowDown}; const chrome = await launch({chromeFlags: ['--headless']}); const options = {port: chrome.port, throttling}; - const config = getConfig(modulePath); const {lhr: result, artifacts} = await lighthouse(url, options, config); await chrome.kill(); return {result, artifacts}; } - -module.exports = {runLighthouse}; - diff --git a/lib/run-pagespeed.js 
b/lib/run-pagespeed.js index 25710eb..50907c7 100644 --- a/lib/run-pagespeed.js +++ b/lib/run-pagespeed.js @@ -2,7 +2,7 @@ const wreck = require('wreck'); const baseUrl = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed'; -async function runPagespeed(urlString, strategy) { +module.exports = async (urlString, strategy) => { const url = new URL(urlString); if(!url.hash) { // bust PSI cache but not your CDN's cache @@ -17,5 +17,3 @@ async function runPagespeed(urlString, strategy) { const {payload} = await wreck.get(apiUrl, {json: true}); return {result: payload.lighthouseResult}; } - -module.exports = {runPagespeed}; diff --git a/lib/runner.js b/lib/runner.js deleted file mode 100644 index 3da838e..0000000 --- a/lib/runner.js +++ /dev/null @@ -1,39 +0,0 @@ -const {getStats} = require('./get-stats'); - -class Runner { - constructor(counter, fetcher, formatter) { - this.counter = counter; - this.fetcher = fetcher; - this.formatter = formatter; - this.samples = []; - } - - async next() { - const {counter, fetcher, formatter, samples} = this; - const {value: run, done} = counter.next(); - if (done) { - return Promise.resolve({done}); - } - - const result = await fetcher.getResult(run.index); - - if (!run.warmup) { - samples.push(result); - } - - const output = [...formatter.format([result], run)]; - - if (run.last && samples.length > 1) { - const stats = getStats(samples); - output.push(...formatter.format(stats)); - } - - return Promise.resolve({value: output, done}); - } - - [Symbol.asyncIterator]() { - return this; - } -} - -module.exports = {Runner}; \ No newline at end of file diff --git a/lib/sample-result.js b/lib/sample-result.js new file mode 100644 index 0000000..e6ba817 --- /dev/null +++ b/lib/sample-result.js @@ -0,0 +1,42 @@ +const {round} = Math; +const {entries} = Object; + +const DEFAULT_METRICS = { + FCP: 'first-contentful-paint', + FMP: 'first-meaningful-paint', + SI: 'speed-index', + FCI: 'first-cpu-idle', + TTI: 'interactive', +}; + 
+function formatScore(value) { + return parseInt(round(value * 100), 10); +} + +function formatSec(value) { + return parseFloat(parseFloat(value).toFixed(1)); +} + +function getAuditValue(audits, name) { + return audits[name]; +} + +function mapAudits(audits, namesByKey, mapValue) { + const merge = (a, b) => ({...a, ...b}); + return entries(namesByKey) + .filter(([, name]) => getAuditValue(audits, name)) + .map(([key, name]) => ({ + [key]: mapValue(getAuditValue(audits, name)) + })) + .reduce(merge, {}); +} + +function mapMetricValue(audit) { + return formatSec(audit.displayValue); +} + +module.exports = (index, {categories, audits}) => ({ + name: `run ${index}`, + score: formatScore(categories.performance.score), + ...mapAudits(audits, DEFAULT_METRICS, mapMetricValue), +}); \ No newline at end of file diff --git a/lib/save-assets.js b/lib/save-assets.js deleted file mode 100644 index 67f395e..0000000 --- a/lib/save-assets.js +++ /dev/null @@ -1,22 +0,0 @@ -const {resolve} = require('path'); -const {writeFileSync} = require('fs'); - -function assetSaver(lighthouseModulePath, output) { - const {saveAssets} = - require(`${lighthouseModulePath}/lighthouse-core/lib/asset-saver`); - - return async (index, result, artifacts) => { - const resolvedPath = resolve(process.cwd(), `${output.filePrefix}${index}`); - const report = JSON.stringify(result, null, 2); - writeFileSync(`${resolvedPath}-0.report.json`, report, 'utf-8'); - - if (artifacts) { - if (output.lanternDebug) { - process.env.LANTERN_DEBUG="true"; - } - await saveAssets(artifacts, result.audits, resolvedPath); - } - } -} - -module.exports = {assetSaver}; \ No newline at end of file diff --git a/lib/save-files.js b/lib/save-files.js new file mode 100644 index 0000000..38a8309 --- /dev/null +++ b/lib/save-files.js @@ -0,0 +1,23 @@ +const {promisify} = require('util'); +const {resolve} = require('path'); +const {writeFile} = require('fs'); + +const {saveAssets} = 
require('lighthouse/lighthouse-core/lib/asset-saver'); + +async function saveResult(pathWithBaseName, result) { + await promisify(writeFile)( + `${pathWithBaseName}-0.result.json`, + JSON.stringify(result, null, 2), + 'utf-8' + ); +} + +module.exports = async (run, saveResults, local, result, artifacts) => { + const pathWithBaseName = resolve(process.cwd(), `${run}`); + if (saveResults) { + await saveResult(pathWithBaseName, result); + } + if (local.enabled && local.saveAssets) { + await saveAssets(artifacts, result.audits, pathWithBaseName); + } +} \ No newline at end of file diff --git a/lib/get-stats.js b/lib/stats.js similarity index 79% rename from lib/get-stats.js rename to lib/stats.js index fddffa5..a47baad 100644 --- a/lib/get-stats.js +++ b/lib/stats.js @@ -3,7 +3,7 @@ const {median, sampleStandardDeviation, min, max} = const {keys, entries} = Object; -const noStats = ['type', 'name']; +const noStats = ['name']; const statistics = { 'median': median, @@ -12,7 +12,7 @@ const statistics = { 'max': max, } -function getStats(samples) { +module.exports = (samples) => { const merge = (a, b) => ({...a, ...b}); const metricNames = keys(samples[0]) .filter(metricName => !noStats.includes(metricName)); @@ -22,8 +22,6 @@ function getStats(samples) { const metricSamples = samples.map(sample => sample[metricName]); return ({[metricName]: statisticFn(metricSamples)}) }) - .reduce(merge, {type: 'statistic', name: statisticName}) + .reduce(merge, {name: statisticName}) ); -} - -module.exports = {getStats}; \ No newline at end of file +}; \ No newline at end of file diff --git a/package-lock.json b/package-lock.json index 2c5a75a..a39bba3 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1897,16 +1897,6 @@ "resolved": "https://registry.npmjs.org/extsprintf/-/extsprintf-1.3.0.tgz", "integrity": "sha1-lpGEQOMEGnpBT4xS48V06zw+HgU=" }, - "fast-check": { - "version": "1.16.3", - "resolved": "https://registry.npmjs.org/fast-check/-/fast-check-1.16.3.tgz", - 
"integrity": "sha512-YGrcOXDLKOGZkkeytsJjJX3J7ekm6FPpU5zWvjIthGRIkcaAczjFa97bzy8+aV8wvR0M7gGvYBxCefUvUsgM7Q==", - "dev": true, - "requires": { - "pure-rand": "^1.6.2", - "tslib": "^1.9.3" - } - }, "fast-deep-equal": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-2.0.1.tgz", @@ -5062,12 +5052,6 @@ "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz", "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==" }, - "pure-rand": { - "version": "1.6.2", - "resolved": "https://registry.npmjs.org/pure-rand/-/pure-rand-1.6.2.tgz", - "integrity": "sha512-HNwHOH63m7kCxe0kWEe5jSLwJiL2N83RUUN8POniFuZS+OsbFcMWlvXgxIU2nwKy2zYG2bQan40WBNK4biYPRg==", - "dev": true - }, "qs": { "version": "6.5.2", "resolved": "https://registry.npmjs.org/qs/-/qs-6.5.2.tgz", diff --git a/package.json b/package.json index 3084c2d..0179e05 100644 --- a/package.json +++ b/package.json @@ -3,15 +3,15 @@ "version": "0.20.1", "description": "Google PageSpeed score command line toolkit", "bin": { - "pagespeed-score": "lib/cli.js" + "pagespeed-score": "index.js" }, "scripts": { "pretest": "eslint .", "test": "jest", "coverage": "cat ./coverage/lcov.info | coveralls", - "clean": "rm -f *.trace.json *.devtoolslog.json *.report.json" + "clean": "rm -f *.trace.json *.devtoolslog.json *.result.json" }, - "main": "lib/main.js", + "main": "index.js", "author": "Csaba Palfi", "license": "MIT", "dependencies": { @@ -25,7 +25,6 @@ "coveralls": "^3.0.6", "eslint": "^6.4.0", "eslint-plugin-jest": "^22.17.0", - "fast-check": "^1.16.3", "jest": "^24.9.0" }, "keywords": [ diff --git a/tests/__snapshots__/cli-options.test.js.snap b/tests/__snapshots__/cli-options.test.js.snap deleted file mode 100644 index d47ec8a..0000000 --- a/tests/__snapshots__/cli-options.test.js.snap +++ /dev/null @@ -1,32 +0,0 @@ -// Jest Snapshot v1, https://goo.gl/fbAQLP - -exports[`cli-options parseArgs returns defaults based on 
argv 1`] = `
-Object {
-  "lighthouse": Object {
-    "cpu-slow-down": 4,
-    "cpuSlowDown": 4,
-    "enabled": false,
-    "module-path": "lighthouse",
-    "modulePath": "lighthouse",
-  },
-  "metrics": Object {
-    "benchmark": false,
-    "ttfb": false,
-    "user-timing-marks": Object {},
-    "userTimingMarks": Object {},
-  },
-  "output": Object {
-    "file-prefix": "",
-    "filePrefix": "",
-    "jsonl": false,
-    "lantern-debug": false,
-    "lanternDebug": false,
-    "save-assets": false,
-    "saveAssets": false,
-  },
-  "runs": 1,
-  "strategy": "mobile",
-  "url": "https://www.google.com",
-  "warmupRuns": 0,
-}
-`;
diff --git a/tests/__snapshots__/format.test.js.snap b/tests/__snapshots__/format.test.js.snap
new file mode 100644
index 0000000..f690764
--- /dev/null
+++ b/tests/__snapshots__/format.test.js.snap
@@ -0,0 +1,9 @@
+// Jest Snapshot v1, https://goo.gl/fbAQLP
+
+exports[`format results heading row is formatted as expected 1`] = `"name score FCP FMP SI FCI TTI"`;
+
+exports[`format results results are formatted as expected 1`] = `"run 1 95 0.9 1.0 1.1 3.2 4.0"`;
+
+exports[`format stats stats are formatted as expected 1`] = `"median 95 0.9 1.0 1.1 3.2 4.0"`;
+
+exports[`format stats stddev uses custom precision for score (1 digit) 1`] = `"stddev 95.0 0.9 1.0 1.1 3.2 4.0"`;
diff --git a/tests/__snapshots__/formatter.test.js.snap b/tests/__snapshots__/formatter.test.js.snap
deleted file mode 100644
index 6c9199b..0000000
--- a/tests/__snapshots__/formatter.test.js.snap
+++ /dev/null
@@ -1,40 +0,0 @@
-// Jest Snapshot v1, https://goo.gl/fbAQLP
-
-exports[`formatter jsonl format entries are JSON.stringify-ed 1`] = `
-Array [
-  "{\\"type\\":\\"result\\",\\"name\\":\\"run 1\\",\\"score\\":95,\\"FCP\\":0.9,\\"FMP\\":1,\\"SI\\":1.1,\\"FCI\\":3.2,\\"TTI\\":4}",
-]
-`;
-
-exports[`formatter tsv format results are formatted as expected 1`] = `
-Array [
-  "run 1 95 0.9 1.0 1.1 3.2 4.0",
-]
-`;
-
-exports[`formatter tsv format results heading row added before first result 1`] = `
-Array [
-  "name score FCP FMP SI FCI TTI",
-  "run 1 95 0.9 1.0 1.1 3.2 4.0",
-]
-`;
-
-exports[`formatter tsv format results userTimingMark uses default number format precision (2 digits) 1`] = `
-Array [
-  "run 1 95 0.9 1.0 1.1 3.2 4.0 1.25",
-]
-`;
-
-exports[`formatter tsv format stats stats are formatted as expected 1`] = `
-Array [
-  "",
-  "median 95 0.9 1.0 1.1 3.2 4.0",
-]
-`;
-
-exports[`formatter tsv format stats stddev uses custom precision for score (1 digit) 1`] = `
-Array [
-  "",
-  "stddev 95.0 0.9 1.0 1.1 3.2 4.0",
-]
-`;
diff --git a/tests/__snapshots__/main.test.js.snap b/tests/__snapshots__/main.test.js.snap
deleted file mode 100644
index 8e4e028..0000000
--- a/tests/__snapshots__/main.test.js.snap
+++ /dev/null
@@ -1,19 +0,0 @@
-// Jest Snapshot v1, https://goo.gl/fbAQLP
-
-exports[`main returns runner 1`] = `
-Runner {
-  "counter": Counter {
-    "maxRuns": 1,
-    "run": 0,
-    "warmupRuns": 0,
-  },
-  "fetcher": Fetcher {
-    "getLighthouseResult": [Function],
-    "mapResult": [Function],
-  },
-  "formatter": Formatter {
-    "format": [Function],
-  },
-  "samples": Array [],
-}
-`;
diff --git a/tests/__snapshots__/parse-args.test.js.snap b/tests/__snapshots__/parse-args.test.js.snap
new file mode 100644
index 0000000..ef4b1d5
--- /dev/null
+++ b/tests/__snapshots__/parse-args.test.js.snap
@@ -0,0 +1,15 @@
+// Jest Snapshot v1, https://goo.gl/fbAQLP
+
+exports[`parse-args parseArgs returns defaults based on argv 1`] = `
+Object {
+  "local": Object {
+    "cpuSlowDown": undefined,
+    "enabled": false,
+    "saveAssets": false,
+  },
+  "runs": 1,
+  "saveResults": false,
+  "strategy": "mobile",
+  "url": "https://www.google.com",
+}
+`;
diff --git a/tests/__snapshots__/run-lighthouse.test.js.snap b/tests/__snapshots__/run-lighthouse.test.js.snap
index bf06db8..7ee43e4 100644
--- a/tests/__snapshots__/run-lighthouse.test.js.snap
+++ b/tests/__snapshots__/run-lighthouse.test.js.snap
@@ -28,8 +28,6 @@ Array [
       "speed-index",
       "first-cpu-idle",
       "interactive",
-      "time-to-first-byte",
-      "user-timings",
     ],
   },
 },
diff --git a/tests/__snapshots__/runner.test.js.snap b/tests/__snapshots__/runner.test.js.snap
deleted file mode 100644
index 1f50390..0000000
--- a/tests/__snapshots__/runner.test.js.snap
+++ /dev/null
@@ -1,32 +0,0 @@
-// Jest Snapshot v1, https://goo.gl/fbAQLP
-
-exports[`runner resolves with result and stats if last run and enough samples 1`] = `
-Object {
-  "done": false,
-  "value": Array [
-    Object {
-      "metric": 1,
-    },
-    Object {
-      "metric": 1.5,
-      "name": "median",
-      "type": "statistic",
-    },
-    Object {
-      "metric": 0.7071067811865476,
-      "name": "stddev",
-      "type": "statistic",
-    },
-    Object {
-      "metric": 1,
-      "name": "min",
-      "type": "statistic",
-    },
-    Object {
-      "metric": 2,
-      "name": "max",
-      "type": "statistic",
-    },
-  ],
-}
-`;
diff --git a/tests/__snapshots__/save-assets.test.js.snap b/tests/__snapshots__/save-assets.test.js.snap
deleted file mode 100644
index 296884e..0000000
--- a/tests/__snapshots__/save-assets.test.js.snap
+++ /dev/null
@@ -1,31 +0,0 @@
-// Jest Snapshot v1, https://goo.gl/fbAQLP
-
-exports[`save-assets saves report and artifacts 1`] = `
-Array [
-  "resolvedPath-0.report.json",
-  "{
-  \\"audits\\": {}
-}",
-  "utf-8",
-]
-`;
-
-exports[`save-assets saves report and artifacts with LANTERN_DEBUG 1`] = `
-Array [
-  "resolvedPath-0.report.json",
-  "{
-  \\"audits\\": {}
-}",
-  "utf-8",
-]
-`;
-
-exports[`save-assets saves report only if no artifacts 1`] = `
-Array [
-  "resolvedPath-0.report.json",
-  "{
-  \\"audits\\": {}
-}",
-  "utf-8",
-]
-`;
diff --git a/tests/__snapshots__/get-stats.test.js.snap b/tests/__snapshots__/stats.test.js.snap
similarity index 87%
rename from tests/__snapshots__/get-stats.test.js.snap
rename to tests/__snapshots__/stats.test.js.snap
index 3776ecf..2f81914 100644
--- a/tests/__snapshots__/get-stats.test.js.snap
+++ b/tests/__snapshots__/stats.test.js.snap
@@ -10,7 +10,6 @@ Array [
     "TTI": 6.5,
     "name": "median",
     "score": 85,
-    "type": "statistic",
   },
   Object {
     "FCI": 0.7071067811865476,
@@ -20,7 +19,6 @@ Array [
     "TTI": 0.7071067811865476,
     "name": "stddev",
     "score": 7.0710678118654755,
-    "type": "statistic",
   },
   Object {
     "FCI": 5,
@@ -30,7 +28,6 @@ Array [
     "TTI": 6,
     "name": "min",
     "score": 80,
-    "type": "statistic",
   },
   Object {
     "FCI": 6,
@@ -40,7 +37,6 @@ Array [
     "TTI": 7,
     "name": "max",
     "score": 90,
-    "type": "statistic",
   },
 ]
 `;
diff --git a/tests/counter.test.js b/tests/counter.test.js
deleted file mode 100644
index 6f777b3..0000000
--- a/tests/counter.test.js
+++ /dev/null
@@ -1,33 +0,0 @@
-const fc = require('fast-check');
-const {Counter} = require('../lib/counter');
-
-describe('Counter', () => {
-  it('returns a single first and last flag in the correct positions', () => {
-    fc.assert(
-      fc.property(
-        fc.integer(1, 100), fc.integer(0, 100), (r, w) => {
-          const runs = [...new Counter(r, w)];
-          expect(runs.filter(({first}) => first)).toHaveLength(1);
-          expect(runs[0]).toHaveProperty('first', true);
-          expect(runs.filter(({last}) => last)).toHaveLength(1);
-          expect(runs[runs.length -1]).toHaveProperty('last', true);
-        }
-      )
-    );
-  });
-
-  it('returns the configured number of warmup and normal runs', () => {
-    fc.assert(
-      fc.property(
-        fc.integer(1, 100), fc.integer(0, 100), (r, w) => {
-          const runs = [...new Counter(r, w)];
-          expect(runs.map(({warmup}) => warmup))
-            .toEqual([
-              ...new Array(w).fill(true),
-              ...new Array(r).fill(false)
-            ])
-        }
-      )
-    );
-  });
-});
diff --git a/tests/format.test.js b/tests/format.test.js
new file mode 100644
index 0000000..1f99f5b
--- /dev/null
+++ b/tests/format.test.js
@@ -0,0 +1,45 @@
+const {formatRow, formatHeading} = require('../lib/format');
+
+describe('format', () => {
+  describe('results', () => {
+    const result = {
+      name: 'run 1',
+      score: 95,
+      FCP: 0.9,
+      FMP: 1,
+      SI: 1.1,
+      FCI: 3.2,
+      TTI: 4
+    };
+
+    it('results are formatted as expected', () => {
+      expect(formatRow(result)).toMatchSnapshot();
+    });
+
+    it('heading row is formatted as expected', () => {
+      expect(formatHeading(result)).toMatchSnapshot();
+    });
+  });
+
+  describe('stats', () => {
+    const statistic = {
+      name: 'median',
+      score: 95,
+      FCP: 0.9,
+      FMP: 1,
+      SI: 1.1,
+      FCI: 3.2,
+      TTI: 4
+    };
+
+    it('stats are formatted as expected', () => {
+      expect(formatRow(statistic))
+        .toMatchSnapshot();
+    });
+
+    it('stddev uses custom precision for score (1 digit)', () => {
+      expect(formatRow({...statistic, name: 'stddev'}))
+        .toMatchSnapshot();
+    });
+  });
+});
\ No newline at end of file
diff --git a/tests/formatter.test.js b/tests/formatter.test.js
deleted file mode 100644
index cce012b..0000000
--- a/tests/formatter.test.js
+++ /dev/null
@@ -1,82 +0,0 @@
-const {Formatter, tsvFormat, jsonlFormat} =
-  require('../lib/formatter');
-
-describe('formatter', () => {
-  describe('tsv format', () => {
-    describe('results', () => {
-      const result = {
-        type: 'result',
-        name: 'run 1',
-        score: 95,
-        FCP: 0.9,
-        FMP: 1,
-        SI: 1.1,
-        FCI: 3.2,
-        TTI: 4
-      };
-
-      it('are formatted as expected', () => {
-        expect(tsvFormat([result], {}))
-          .toMatchSnapshot();
-      });
-
-      it('heading row added before first result', () => {
-        expect(tsvFormat([result], {first: true}))
-          .toMatchSnapshot();
-      });
-
-      it('userTimingMark uses default number format precision (2 digits)', () => {
-        expect(tsvFormat([{...result, DPA: 1.25}], {}))
-          .toMatchSnapshot();
-      });
-    });
-
-    describe('stats', () => {
-      const statistic = {
-        type: 'statistic',
-        name: 'median',
-        score: 95,
-        FCP: 0.9,
-        FMP: 1,
-        SI: 1.1,
-        FCI: 3.2,
-        TTI: 4
-      };
-
-      it('stats are formatted as expected', () => {
-        expect(tsvFormat([statistic]))
-          .toMatchSnapshot();
-      });
-
-      it('stddev uses custom precision for score (1 digit)', () => {
-        expect(tsvFormat([{...statistic, name: 'stddev'}]))
-          .toMatchSnapshot();
-      });
-    });
-  });
-
-  describe('jsonl format', () => {
-    it('entries are JSON.stringify-ed', () => {
-      expect(jsonlFormat([{
-        type: 'result',
-        name: 'run 1',
-        score: 95,
-        FCP: 0.9,
-        FMP: 1,
-        SI: 1.1,
-        FCI: 3.2,
-        TTI: 4
-      }])).toMatchSnapshot();
-    });
-  });
-
-  describe('output formatter', () => {
-    it('return jsonlFormat if jsonl enabled', () => {
-      expect(new Formatter(true).format).toBe(jsonlFormat);
-    });
-
-    it('return tsvFormat if jsonl disabled', () => {
-      expect(new Formatter(false).format).toBe(tsvFormat);
-    });
-  });
-});
\ No newline at end of file
diff --git a/tests/main.test.js b/tests/main.test.js
deleted file mode 100644
index ca3c52c..0000000
--- a/tests/main.test.js
+++ /dev/null
@@ -1,16 +0,0 @@
-const main = require('../lib/main');
-
-describe('main', () => {
-  const options = {
-    runs: 1,
-    warmupRuns: 0,
-    url: 'https://www.google.com',
-    metrics: {userTimingMarks: {}},
-    lighthouse: {enabled: false},
-    output: {jsonl: false, saveAssets: false},
-  };
-
-  it('returns runner', () => {
-    expect(main(options)).toMatchSnapshot();
-  });
-});
\ No newline at end of file
diff --git a/tests/map-result.test.js b/tests/map-result.test.js
deleted file mode 100644
index d76896e..0000000
--- a/tests/map-result.test.js
+++ /dev/null
@@ -1,25 +0,0 @@
-const {mapMarkValue, mapTTFB} = require('../lib/map-result');
-
-describe('map-result', () => {
-  describe('mapMarkValue', () => {
-    it('returns NaN if no mark found', () => {
-      expect(mapMarkValue(null))
-        .toBe(NaN);
-    });
-
-    it('returns NaN if mark has no startTime', () => {
-      expect(mapMarkValue({}))
-        .toBe(NaN);
-    });
-
-    it('returns formatted ms value if mark has startTime', () => {
-      expect(mapMarkValue({startTime: 785}))
-        .toBe(0.79);
-    });
-  });
-
-  describe('mapTTFB', () => {
-    const audit = {displayValue: 'Root document took 240 ms'};
-    expect(mapTTFB(audit)).toBe(0.24);
-  });
-});
\ No newline at end of file
diff --git a/tests/cli-options.test.js b/tests/parse-args.test.js
similarity index 50%
rename from tests/cli-options.test.js
rename to tests/parse-args.test.js
index e149ce9..d177d00 100644
--- a/tests/cli-options.test.js
+++ b/tests/parse-args.test.js
@@ -1,6 +1,6 @@
-const {check, parseArgs} = require('../lib/cli-options');
+const {check, parseArgs} = require('../lib/parse-args');

-describe('cli-options', () => {
+describe('parse-args', () => {
   const url = 'https://www.google.com';
   const mockArgv = (args = []) => ['node', 'pagespeed-score', ...args];
@@ -20,25 +20,16 @@ describe('cli-options', () => {
       .toBeTruthy()
   });

-  it('allows if local + lantern-debug', () => {
-    expect(check({_:mockArgv([url]), lanternDebug: true, local: true}))
-      .toBeTruthy()
-  });
-
-  it('allows if local + !lantern-debug', () => {
-    expect(check({_:mockArgv([url]), lanternDebug: false, local: true}))
-      .toBeTruthy()
+  it('throws if --save-assets used without --local', () => {
+    expect(() => check({_:mockArgv([url]), saveAssets: true}))
+      .toThrow('--save-assets only works with --local');
   });

-  it('allows if !local + !lantern-debug', () => {
-    expect(check({_:mockArgv([url]), lanternDebug: false, local: false}))
-      .toBeTruthy()
+  it('throws if --cpu-slowdown set without --local', () => {
+    expect(() => check({_:mockArgv([url]), cpuSlowdown: 6}))
+      .toThrow('--cpu-slowdown only works with --local');
   });

-  it('throws if !local + lantern-debug', () => {
-    expect(() => check({_:mockArgv([url]), lanternDebug: true, local: false}))
-      .toThrow('--lantern-debug only works with --local')
-  });
 });

 describe('parseArgs', () => {
diff --git a/tests/run-lighthouse.test.js b/tests/run-lighthouse.test.js
index 2704a63..f64c44d 100644
--- a/tests/run-lighthouse.test.js
+++ b/tests/run-lighthouse.test.js
@@ -1,6 +1,6 @@
 const {launch} = require('chrome-launcher');
 const lighthouse = require('lighthouse');
-const {runLighthouse} = require('../lib/run-lighthouse');
+const runLighthouse = require('../lib/run-lighthouse');

 jest.mock('chrome-launcher');
 jest.mock('lighthouse');
@@ -10,7 +10,7 @@ describe('run-lighthouse', () => {
   const url = 'https://www.google.com';
   const lighthouseResult = {};
   const lighthouseArtifacts = {};
-  const options = {modulePath: 'lighthouse', cpuSlowDown: 4};
+  const options = {cpuSlowDown: 4};
   const chrome = {port: 1234, kill: jest.fn()};

   launch.mockResolvedValue(chrome);
   lighthouse.mockResolvedValue({
diff --git a/tests/run-pagespeed.test.js b/tests/run-pagespeed.test.js
index 83b9b35..3223c00 100644
--- a/tests/run-pagespeed.test.js
+++ b/tests/run-pagespeed.test.js
@@ -1,4 +1,4 @@
-const {runPagespeed} = require('../lib/run-pagespeed');
+const runPagespeed = require('../lib/run-pagespeed');
 const wreck = require('wreck');

 jest.mock('wreck');
diff --git a/tests/runner.test.js b/tests/runner.test.js
deleted file mode 100644
index ef8503c..0000000
--- a/tests/runner.test.js
+++ /dev/null
@@ -1,46 +0,0 @@
-const {Runner} = require('../lib/runner');
-
-describe('runner', () => {
-  const result = {metric: 1};
-  const counter =
-    ({done = false, ...value} = {}) => ({next: () => ({value, done})});
-  const fetcher = { getResult: () => Promise.resolve(result)};
-  const formatter = { format: (x) => x };
-
-  it('returns this as asyncIterator', () => {
-    const runner = new Runner();
-    expect(runner[Symbol.asyncIterator]()).toBe(runner);
-  });
-
-  it('resolves with done if counter is done', () => {
-    const runner = new Runner(counter({done: true}));
-    return expect(runner.next())
-      .resolves.toEqual({done: true});
-  });
-
-  describe('resolves with result ', () => {
-    const resolvedResult = {value: [result], done: false};
-    it('that is added to samples', async () => {
-      const runner = new Runner(counter(), fetcher, formatter);
-      await expect(runner.next()).resolves.toEqual(resolvedResult);
-      expect(runner.samples).toContain(result);
-    });
-
-    it('that is not added to samples if warmup result ', async () => {
-      const runner = new Runner(counter({warmup: true}), fetcher, formatter);
-      await expect(runner.next()).resolves.toEqual(resolvedResult);
-      expect(runner.samples).not.toContain(result);
-    });
-
-    it('and no stats if last run but not enough samples (<2)', () => {
-      const runner = new Runner(counter({last: true}), fetcher, formatter);
-      return expect(runner.next()).resolves.toEqual(resolvedResult);
-    });
-
-    it('and stats if last run and enough samples', () => {
-      const runner = new Runner(counter({last: true}), fetcher, formatter);
-      runner.samples = [{metric: 2}];
-      return expect(runner.next()).resolves.toMatchSnapshot();
-    });
-  });
-});
\ No newline at end of file
diff --git a/tests/save-assets.test.js b/tests/save-assets.test.js
deleted file mode 100644
index fde90fa..0000000
--- a/tests/save-assets.test.js
+++ /dev/null
@@ -1,72 +0,0 @@
-const {resolve} = require('path');
-const {writeFileSync} = require('fs');
-const {saveAssets} = require('lighthouse/lighthouse-core/lib/asset-saver');
-const {assetSaver} = require('../lib/save-assets');
-
-jest.mock('path');
-jest.mock('fs');
-jest.mock('lighthouse/lighthouse-core/lib/asset-saver');
-
-
-describe('save-assets', () => {
-  const lighthouseModulePath = 'lighthouse';
-  const filePrefix = 'prefix-';
-  const index = 1;
-  const audits = {};
-  const result = {audits};
-  const resolvedPath = 'resolvedPath';
-
-  afterEach(() => {
-    jest.resetAllMocks();
-    delete process.env.LANTERN_DEBUG;
-  });
-
-  function expectToSaveReport() {
-    expect(resolve).toHaveBeenCalledTimes(1);
-    expect(resolve.mock.calls[0])
-      .toEqual([process.cwd(), `${filePrefix}${index}`]);
-
-    expect(writeFileSync).toHaveBeenCalledTimes(1);
-    expect(writeFileSync.mock.calls[0]).toMatchSnapshot();
-  }
-
-  it('saves report only if no artifacts', async () => {
-    resolve.mockReturnValue(resolvedPath);
-
-    await assetSaver(lighthouseModulePath, {filePrefix})(index, result);
-
-    expectToSaveReport();
-  });
-
-  it('saves report and artifacts', async () => {
-    const artifacts = {};
-    resolve.mockReturnValue(resolvedPath);
-
-    await assetSaver(lighthouseModulePath, {filePrefix})(1, result, artifacts);
-
-    expectToSaveReport();
-
-    expect(process.env.LANTERN_DEBUG).toBeUndefined();
-
-    expect(saveAssets).toHaveBeenCalledTimes(1);
-    expect(saveAssets.mock.calls[0][0]).toBe(artifacts);
-    expect(saveAssets.mock.calls[0][1]).toBe(audits);
-    expect(saveAssets.mock.calls[0][2]).toBe(resolvedPath);
-  });
-
-  it('saves report and artifacts with LANTERN_DEBUG', async () => {
-    const artifacts = {};
-    resolve.mockReturnValue(resolvedPath);
-    const output = {filePrefix, lanternDebug: true};
-    await assetSaver(lighthouseModulePath, output)(1, result, artifacts);
-
-    expectToSaveReport();
-
-    expect(process.env.LANTERN_DEBUG).toBe('true');
-
-    expect(saveAssets).toHaveBeenCalledTimes(1);
-    expect(saveAssets.mock.calls[0][0]).toBe(artifacts);
-    expect(saveAssets.mock.calls[0][1]).toBe(audits);
-    expect(saveAssets.mock.calls[0][2]).toBe(resolvedPath);
-  });
-});
\ No newline at end of file
diff --git a/tests/get-stats.test.js b/tests/stats.test.js
similarity index 71%
rename from tests/get-stats.test.js
rename to tests/stats.test.js
index e290b0a..0435ba0 100644
--- a/tests/get-stats.test.js
+++ b/tests/stats.test.js
@@ -1,8 +1,8 @@
-const {getStats} = require('../lib/get-stats');
+const stats = require('../lib/stats');

 describe('stats', () => {
   const mockSample = (data) => ({
-    type: 'result', name: 'run 1', ...data
+    name: 'run 1', ...data
   });

   const samples = [
@@ -11,7 +11,7 @@ describe('stats', () => {
   ]

   it('returns expected stats', () => {
-    expect(getStats(samples))
+    expect(stats(samples))
       .toMatchSnapshot();
   });
 });
\ No newline at end of file