From 2bc67c14b860e5be185cb985b1ab68276b31e2a0 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 00:55:22 +0100 Subject: [PATCH 01/27] add summary --- README.md | 37 ++++++++++++++++++++++++++++++++++++- 1 file changed, 36 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 3519099..362abe1 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,39 @@ -# pagespeed-score +# What's in the Google PageSpeed score? + +[Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). Lighthouse **calculates a speed score based on 5 estimated metrics** and [scores and weights](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md) them like the example below. Values are in seconds and a score of 90-100 is fast, 50-89 is average and 0-49 is slow. + +| Estimated Metric | Weight | +|------------------------------|--------| +| First Contentful Paint (FCP) | 3 | +| First Meaningful Paint (FMP) | 1 | +| Speed Index (SI) | 4 | +| First CPU Idle (FCI) | 2 | +| Time to Interactive (TTI) | 5 | + +**Other audits have no direct impact on the score** (but give hints to improve the metrics). + +**The metrics estimation (code-named [Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md)) models and simulates browser execution.** Lantern can emulate mobile network and CPU execution. To achieve this it only relies on a performance trace observed without any throttling (hence the fast execution time). + +There’s an [accuracy and variability analysis](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#) available. Lantern trades off accuracy but also mitigates certain sources variability. Metrics can be over/underestimated because of: + +* differences in the unthrottled trace vs real device/throttling +* details ignored or simplified to make the simulation workable + +Recommendations for using the score: +* Even if not 100% accurate metrics in the red highlight genuine/urgent problems +* Use the scores to look for longer term trends and bigger changes +* Reduce variability by forcing AB tests, doing multiple runs, etc +* but even reduced variability is not removing inherent inaccuracies +* Use the pagespeed-score cli (this repo/module) to reduce/identify variability and to investigate inaccuracies + + + + + + + + + [![Build Status](https://travis-ci.org/csabapalfi/pagespeed-score.svg?branch=master)](https://travis-ci.org/csabapalfi/pagespeed-score/) [![Coverage Status](https://coveralls.io/repos/github/csabapalfi/pagespeed-score/badge.svg?2)](https://coveralls.io/github/csabapalfi/pagespeed-score) From a561cea548b930263138230182e55f97057b68e1 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 01:08:21 +0100 Subject: [PATCH 02/27] explain metrics inline --- README.md | 36 ++++++++++++++++-------------------- 1 file changed, 16 insertions(+), 20 deletions(-) diff --git a/README.md b/README.md index 362abe1..02bda3a 100644 --- a/README.md +++ b/README.md @@ -1,14 +1,18 @@ # What's in the Google PageSpeed score? + + +## tl;dr + [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). 
Lighthouse **calculates a speed score based on 5 estimated metrics** and [scores and weights](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md) them like the example below. Values are in seconds and a score of 90-100 is fast, 50-89 is average and 0-49 is slow. -| Estimated Metric | Weight | -|------------------------------|--------| -| First Contentful Paint (FCP) | 3 | -| First Meaningful Paint (FMP) | 1 | -| Speed Index (SI) | 4 | -| First CPU Idle (FCI) | 2 | -| Time to Interactive (TTI) | 5 | +| Estimated Metric | Weight | Description | +|------------------------------|--------|-------------| +| First Contentful Paint (FCP) | 3 | when the first text or image content is painted | +| First Meaningful Paint (FMP) | 1 | when the primary content of a page is visible | +| Speed Index (SI) | 4 | how quickly the contents of a page are visibly populated | +| First CPU Idle (FCI) | 2 | when the main thread is quiet enough to handle user input | +| Time to Interactive (TTI) | 5 | how quickly the main thread and network quiets down for at least 5 seconds | **Other audits have no direct impact on the score** (but give hints to improve the metrics). @@ -20,25 +24,17 @@ There’s an [accuracy and variability analysis](https://docs.google.com/documen * details ignored or simplified to make the simulation workable Recommendations for using the score: -* Even if not 100% accurate metrics in the red highlight genuine/urgent problems -* Use the scores to look for longer term trends and bigger changes +* Even if not 100% accurate **metrics in the red highlight genuine/urgent problems** +* Use the scores to **look for longer term trends and bigger changes** * Reduce variability by forcing AB tests, doing multiple runs, etc * but even reduced variability is not removing inherent inaccuracies * Use the pagespeed-score cli (this repo/module) to reduce/identify variability and to investigate inaccuracies - - - - - - - - [![Build Status](https://travis-ci.org/csabapalfi/pagespeed-score.svg?branch=master)](https://travis-ci.org/csabapalfi/pagespeed-score/) [![Coverage Status](https://coveralls.io/repos/github/csabapalfi/pagespeed-score/badge.svg?2)](https://coveralls.io/github/csabapalfi/pagespeed-score) -Google PageSpeed Insights (PSI) score and metrics CLI +## Google PageSpeed Insights (PSI) score and metrics CLI ``` $ npx pagespeed-score --runs 3 https://www.google.com @@ -53,7 +49,7 @@ min 95 0.9 1.0 1.0 3.1 3.7 max 96 0.9 1.0 1.2 3.5 4.0 ``` -## Metrics +### Metrics * `score` is the PageSpeed score based on [LightHouse perfomance scoring](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md) calculated using FCP, FMP, SI, FCI and TTI. 
@@ -67,7 +63,7 @@ max 96 0.9 1.0 1.2 3.5 4.0 * `TTI` is [Time to Interactive](https://github.com/csabapalfi/awesome-web-performance-metrics#time-to-interactive-tti) -## Command Line Options +### Command Line Options ``` Runs: From 84df06a1b3a38c93cb285921897adacc985e5158 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 01:09:31 +0100 Subject: [PATCH 03/27] explain metrics inline --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 02bda3a..1947ec8 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,7 @@ | First Meaningful Paint (FMP) | 1 | when the primary content of a page is visible | | Speed Index (SI) | 4 | how quickly the contents of a page are visibly populated | | First CPU Idle (FCI) | 2 | when the main thread is quiet enough to handle user input | -| Time to Interactive (TTI) | 5 | how quickly the main thread and network quiets down for at least 5 seconds | +| Time to Interactive (TTI) | 5 | how quickly the main thread and network quiets down for at least 5s | **Other audits have no direct impact on the score** (but give hints to improve the metrics). From fea47ffe65747413e815ccd73720a9642f06e32b Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 01:11:03 +0100 Subject: [PATCH 04/27] align table data --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 1947ec8..0f6cf57 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,7 @@ [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). Lighthouse **calculates a speed score based on 5 estimated metrics** and [scores and weights](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md) them like the example below. Values are in seconds and a score of 90-100 is fast, 50-89 is average and 0-49 is slow. | Estimated Metric | Weight | Description | -|------------------------------|--------|-------------| +|:-----------------------------|:------:|:------------| | First Contentful Paint (FCP) | 3 | when the first text or image content is painted | | First Meaningful Paint (FMP) | 1 | when the primary content of a page is visible | | Speed Index (SI) | 4 | how quickly the contents of a page are visibly populated | From afd3ddce7fbb3ba3844f498a740bfd2e741c63aa Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 01:12:22 +0100 Subject: [PATCH 05/27] better TTI and FCI description --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 0f6cf57..c26c4fb 100644 --- a/README.md +++ b/README.md @@ -11,8 +11,8 @@ | First Contentful Paint (FCP) | 3 | when the first text or image content is painted | | First Meaningful Paint (FMP) | 1 | when the primary content of a page is visible | | Speed Index (SI) | 4 | how quickly the contents of a page are visibly populated | -| First CPU Idle (FCI) | 2 | when the main thread is quiet enough to handle user input | -| Time to Interactive (TTI) | 5 | how quickly the main thread and network quiets down for at least 5s | +| First CPU Idle (FCI) | 2 | when the main thread is first quiet enough to handle user input | +| Time to Interactive (TTI) | 5 | when the main thread and network quiets down for at least 5s | **Other audits have no direct impact on the score** (but give hints to improve the metrics). 
From f582304aa49e953a296ab92bef8ba4e6c6b24511 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 01:13:12 +0100 Subject: [PATCH 06/27] better TTI description --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index c26c4fb..0974d60 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,7 @@ | First Meaningful Paint (FMP) | 1 | when the primary content of a page is visible | | Speed Index (SI) | 4 | how quickly the contents of a page are visibly populated | | First CPU Idle (FCI) | 2 | when the main thread is first quiet enough to handle user input | -| Time to Interactive (TTI) | 5 | when the main thread and network quiets down for at least 5s | +| Time to Interactive (TTI) | 5 | when the main thread and network is quiet for at least 5s | **Other audits have no direct impact on the score** (but give hints to improve the metrics). From ac16b408aaa0ea2a48550b960bb7612dc50773a2 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 07:27:02 +0100 Subject: [PATCH 07/27] add 90 and 50 values --- README.md | 27 ++++++++++++++++++--------- 1 file changed, 18 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index 0974d60..9666fec 100644 --- a/README.md +++ b/README.md @@ -1,18 +1,27 @@ # What's in the Google PageSpeed score? +- [Summary](#summary) +- [Google PageSpeed Insights (PSI) score and metrics CLI](#google-pagespeed-insights-psi-score-and-metrics-cli) + * [Metrics](#metrics) + * [Command Line Options](#command-line-options) + * [Local mode](#local-mode) + * [Debugging metrics simulations (Lantern)](#debugging-metrics-simulations-lantern) -## tl;dr -[Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). Lighthouse **calculates a speed score based on 5 estimated metrics** and [scores and weights](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md) them like the example below. Values are in seconds and a score of 90-100 is fast, 50-89 is average and 0-49 is slow. +## Summary -| Estimated Metric | Weight | Description | -|:-----------------------------|:------:|:------------| -| First Contentful Paint (FCP) | 3 | when the first text or image content is painted | -| First Meaningful Paint (FMP) | 1 | when the primary content of a page is visible | -| Speed Index (SI) | 4 | how quickly the contents of a page are visibly populated | -| First CPU Idle (FCI) | 2 | when the main thread is first quiet enough to handle user input | -| Time to Interactive (TTI) | 5 | when the main thread and network is quiet for at least 5s | +The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) speed score is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). + +Lighthouse **calculates a speed score based on 5 estimated metrics**. It [scores and weights](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md) them like the table below. Values are in seconds and a score of 90-100 is fast, 50-89 is average and 0-49 is slow. 
+ +| Estimated Metric | Weight | 90 | 50 | Description | +|:----------------------------|:------:|:----:|:----:|-------------| +| First Contentful Paint (FCP)| 3 | 2.4s | 4.0s | when the first text or image content is painted | +| First Meaningful Paint (FMP)| 1 | 2.4s | 4.0s | when the primary content of a page is visible | +| Speed Index (SI) | 4 | 3.4s | 5.8s | how quickly the contents of a page are visibly populated | +| First CPU Idle (FCI) | 2 | 3.6s | 6.5s | when the main thread is first quiet enough to handle input | +| Time to Interactive (TTI) | 5 | 3.8s | 7.3s | when the main thread and network is quiet for at least 5s | **Other audits have no direct impact on the score** (but give hints to improve the metrics). From 883b4eda594c03848aab73d6fdc15428f62c5c7a Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 08:04:56 +0100 Subject: [PATCH 08/27] update summary --- README.md | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 9666fec..ff05b4f 100644 --- a/README.md +++ b/README.md @@ -11,9 +11,17 @@ ## Summary -The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) speed score is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). +### What is it? -Lighthouse **calculates a speed score based on 5 estimated metrics**. It [scores and weights](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md) them like the table below. Values are in seconds and a score of 90-100 is fast, 50-89 is average and 0-49 is slow. +The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). + +**Lighthouse calculates a speed score on the scale of 0-100 based on 5 estimated metrics.** + +The score of 90-100 is fast, 50-89 is average and 0-49 is slow. + +### What metrics affect the score and how? + +This is available in the [Lighthouse scoring documentation](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md). See a summary of metrics, their weights in the score and their maximum values to achieve the score of 90 and 50 in the table below: | Estimated Metric | Weight | 90 | 50 | Description | |:----------------------------|:------:|:----:|:----:|-------------| From 01361f67303911c20addc90ba94138b7572bf3a1 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 08:43:51 +0100 Subject: [PATCH 09/27] more headings --- README.md | 27 ++++++++++++++++----------- 1 file changed, 16 insertions(+), 11 deletions(-) diff --git a/README.md b/README.md index ff05b4f..b86cd5c 100644 --- a/README.md +++ b/README.md @@ -1,21 +1,23 @@ # What's in the Google PageSpeed score? -- [Summary](#summary) +- [Overview](#overview) + * [What is the pagespeed score?](#what-is-the-pagespeed-score) + * [What metrics affect the score and how?](#what-metrics-affect-the-score-and-how) + * [How metrics are estimated? Is that accurate?](#how-metrics-are-estimated-is-that-accurate) + * [Recommendations for using the score](#recommendations-for-using-the-score) - [Google PageSpeed Insights (PSI) score and metrics CLI](#google-pagespeed-insights-psi-score-and-metrics-cli) * [Metrics](#metrics) * [Command Line Options](#command-line-options) * [Local mode](#local-mode) * [Debugging metrics simulations (Lantern)](#debugging-metrics-simulations-lantern) +## Overview +### What is the pagespeed score? 
-## Summary +The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) score is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). -### What is it? - -The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). - -**Lighthouse calculates a speed score on the scale of 0-100 based on 5 estimated metrics.** +**Lighthouse calculates the performance score on the scale of 0-100 based on 5 estimated metrics.** The score of 90-100 is fast, 50-89 is average and 0-49 is slow. @@ -33,26 +35,29 @@ This is available in the [Lighthouse scoring documentation](https://github.com/G **Other audits have no direct impact on the score** (but give hints to improve the metrics). +### How metrics are estimated? Is that accurate? + **The metrics estimation (code-named [Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md)) models and simulates browser execution.** Lantern can emulate mobile network and CPU execution. To achieve this it only relies on a performance trace observed without any throttling (hence the fast execution time). -There’s an [accuracy and variability analysis](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#) available. Lantern trades off accuracy but also mitigates certain sources variability. Metrics can be over/underestimated because of: +There’s an [accuracy and variability analysis](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#) available. Lantern trades off accuracy but also mitigates certain sources variability. +Metrics can be over/underestimated because of: * differences in the unthrottled trace vs real device/throttling * details ignored or simplified to make the simulation workable -Recommendations for using the score: +### Recommendations for using the score + * Even if not 100% accurate **metrics in the red highlight genuine/urgent problems** * Use the scores to **look for longer term trends and bigger changes** * Reduce variability by forcing AB tests, doing multiple runs, etc * but even reduced variability is not removing inherent inaccuracies * Use the pagespeed-score cli (this repo/module) to reduce/identify variability and to investigate inaccuracies +## Google PageSpeed Insights (PSI) score and metrics CLI [![Build Status](https://travis-ci.org/csabapalfi/pagespeed-score.svg?branch=master)](https://travis-ci.org/csabapalfi/pagespeed-score/) [![Coverage Status](https://coveralls.io/repos/github/csabapalfi/pagespeed-score/badge.svg?2)](https://coveralls.io/github/csabapalfi/pagespeed-score) -## Google PageSpeed Insights (PSI) score and metrics CLI - ``` $ npx pagespeed-score --runs 3 https://www.google.com name score FCP FMP SI FCI TTI From 1ea4fe9ec23f9549ca10db639956eacaf732bbf0 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 10:07:16 +0100 Subject: [PATCH 10/27] smal fixes --- README.md | 85 +++++++++++++++++++++++++------------------------------ 1 file changed, 39 insertions(+), 46 deletions(-) diff --git a/README.md b/README.md index b86cd5c..a9fad17 100644 --- a/README.md +++ b/README.md @@ -1,19 +1,19 @@ # What's in the Google PageSpeed score? - [Overview](#overview) - * [What is the pagespeed score?](#what-is-the-pagespeed-score) - * [What metrics affect the score and how?](#what-metrics-affect-the-score-and-how) - * [How metrics are estimated? 
Is that accurate?](#how-metrics-are-estimated-is-that-accurate) + * [PageSpeed Insights score = Lighthouse](#pagespeed-insights-score--lighthouse) + * [The 5 metrics that affect the score](#the-5-metrics-that-affect-the-score) + * [Metrics estimation: Lantern](#metrics-estimation-lantern) * [Recommendations for using the score](#recommendations-for-using-the-score) -- [Google PageSpeed Insights (PSI) score and metrics CLI](#google-pagespeed-insights-psi-score-and-metrics-cli) - * [Metrics](#metrics) - * [Command Line Options](#command-line-options) +- [How metrics are estimated?](#how-metrics-are-estimated) +- [`pagespeed-score` cli](#pagespeed-score-cli) * [Local mode](#local-mode) - * [Debugging metrics simulations (Lantern)](#debugging-metrics-simulations-lantern) + * [Debugging metrics simulation locally (Lantern)](#debugging-metrics-simulation-locally-lantern) + * [All options](#all-options) ## Overview -### What is the pagespeed score? +### PageSpeed Insights score = Lighthouse The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) score is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). @@ -21,7 +21,7 @@ The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagesp The score of 90-100 is fast, 50-89 is average and 0-49 is slow. -### What metrics affect the score and how? +### The 5 metrics that affect the score This is available in the [Lighthouse scoring documentation](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md). See a summary of metrics, their weights in the score and their maximum values to achieve the score of 90 and 50 in the table below: @@ -35,11 +35,11 @@ This is available in the [Lighthouse scoring documentation](https://github.com/G **Other audits have no direct impact on the score** (but give hints to improve the metrics). -### How metrics are estimated? Is that accurate? +### Metrics estimation: Project Lantern **The metrics estimation (code-named [Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md)) models and simulates browser execution.** Lantern can emulate mobile network and CPU execution. To achieve this it only relies on a performance trace observed without any throttling (hence the fast execution time). -There’s an [accuracy and variability analysis](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#) available. Lantern trades off accuracy but also mitigates certain sources variability. +There’s an [accuracy and variability analysis](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#) available. Lantern trades off accuracy but also mitigates certain sources variability. 
Metrics can be over/underestimated because of: * differences in the unthrottled trace vs real device/throttling @@ -50,14 +50,18 @@ Metrics can be over/underestimated because of: * Even if not 100% accurate **metrics in the red highlight genuine/urgent problems** * Use the scores to **look for longer term trends and bigger changes** * Reduce variability by forcing AB tests, doing multiple runs, etc -* but even reduced variability is not removing inherent inaccuracies -* Use the pagespeed-score cli (this repo/module) to reduce/identify variability and to investigate inaccuracies +* Even reduced variability is not removing inherent inaccuracies +* Use the `pagespeed-score` cli to reduce/identify variability and to investigate inaccuracies -## Google PageSpeed Insights (PSI) score and metrics CLI +## How metrics are estimated? + +## `pagespeed-score` cli [![Build Status](https://travis-ci.org/csabapalfi/pagespeed-score.svg?branch=master)](https://travis-ci.org/csabapalfi/pagespeed-score/) [![Coverage Status](https://coveralls.io/repos/github/csabapalfi/pagespeed-score/badge.svg?2)](https://coveralls.io/github/csabapalfi/pagespeed-score) +Command line toolkit to get a speed score and metrics via the Google PageSpeed Insights API or a local Lighthouse run. + ``` $ npx pagespeed-score --runs 3 https://www.google.com name score FCP FMP SI FCI TTI @@ -71,23 +75,35 @@ min 95 0.9 1.0 1.0 3.1 3.7 max 96 0.9 1.0 1.2 3.5 4.0 ``` -### Metrics +### Local mode + +`--local` switches to running Lighthouse locally instead of calling the PSI API. This can be useful for non-public URLs (e.g. staging environment on a private network). To ensure the local results are close to the PSI API results this module: + + * uses the same version of LightHouse as PSI + * uses the [LightRider mobile config](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/config/lr-mobile-config.js) + * allows throttling of CPU with `--cpu-slowdown` (default 4x) + +Local results will still differ from the PSI API because of local hardware and network variability. -* `score` is the PageSpeed score based on [LightHouse perfomance scoring](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md) calculated using FCP, FMP, SI, FCI and TTI. +### Debugging metrics simulation locally (Lantern) -* `FCP` is [First Contentful Paint](https://github.com/csabapalfi/awesome-web-performance-metrics#first-contentful-paint-fcp) +`--lantern-debug --save-assets --local` will also save traces for metrics simulations run by Lantern -* `FMP` is [First Meaningful Paint](https://github.com/csabapalfi/awesome-web-performance-metrics#first-meaningful-paint-fmp) +``` +$ npx pagespeed-score \ +--local --lantern-debug --save-assets https://www.google.com +``` -* `SI` is [Speed Index](https://github.com/csabapalfi/awesome-web-performance-metrics#speed-index) +You can open any of these traces in the Chrome Devtools Performance tab. -* `FCI` is [First CPU Idle](https://github.com/csabapalfi/awesome-web-performance-metrics#first-cpu-idle) +See also [lighthouse#5844 Better visualization of Lantern simulation](https://github.com/GoogleChrome/lighthouse/issues/5844). 
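
For reference, the local mode described above roughly corresponds to driving Lighthouse from Node yourself. The sketch below uses the public `lighthouse` and `chrome-launcher` APIs; the LightRider config `require()` path is an assumption that can vary between Lighthouse versions, and the `pagespeed-score` cli wires these details (plus the `--cpu-slowdown` throttling) up for you.

```
// Rough sketch of a PSI-like local run (assumptions noted in comments):
// launch headless Chrome, run Lighthouse with a LightRider-style mobile config,
// then read the performance score and a metric from the Lighthouse result (LHR).
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function localScore(url) {
  const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
  try {
    // assumed path; it may differ depending on the installed Lighthouse version
    const config = require('lighthouse/lighthouse-core/config/lr-mobile-config.js');
    const {lhr} = await lighthouse(url, {port: chrome.port}, config);
    return {
      score: Math.round(lhr.categories.performance.score * 100),
      tti: lhr.audits['interactive'].displayValue, // 'interactive' is the TTI audit id
    };
  } finally {
    await chrome.kill();
  }
}

localScore('https://www.google.com').then(console.log).catch(console.error);
```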
-* `TTI` is [Time to Interactive](https://github.com/csabapalfi/awesome-web-performance-metrics#time-to-interactive-tti) -### Command Line Options +### All options ``` +pagespeed-score + Runs: --runs Number of runs [number] [default: 1] --warmup-runs Number of warmup runs [number] [default: 0] @@ -131,27 +147,4 @@ Lighthouse: * `--jsonl` outputs results (and statistics) as [JSON Lines](http://jsonlines.org/) instead of TSV -* `--save-assets` saves a report for each run - -### Local mode - -`--local` switches to running Lighthouse locally instead of calling the PSI API. This can be useful for non-public URLs (e.g. staging environment on a private network). To ensure the local results are close to the PSI API results this module: - - * uses the same version of LightHouse as PSI - * uses the [LightRider mobile config](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/config/lr-mobile-config.js) - * allows throttling of CPU with `--cpu-slowdown` (default 4x) - -Local results will still differ from the PSI API because of local hardware and network variability. - -### Debugging metrics simulations (Lantern) - -`--lantern-debug --save-assets --local` will also save traces and devtoolslogs and traces for how metrics were simulated by Lantern - -``` -$ npx pagespeed-score \ ---local --lantern-debug --save-assets https://www.google.com -``` - -You can open any of these traces in the Chrome Devtools Performance tab. - -See also [lighthouse#5844 Better visualization of Lantern simulation](https://github.com/GoogleChrome/lighthouse/issues/5844). +* `--save-assets` saves a report for each run \ No newline at end of file From 4889b503dafa196cd6e8c8fb71d80511731b68eb Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 10:11:28 +0100 Subject: [PATCH 11/27] small fixes --- README.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index a9fad17..4d84ed6 100644 --- a/README.md +++ b/README.md @@ -35,7 +35,7 @@ This is available in the [Lighthouse scoring documentation](https://github.com/G **Other audits have no direct impact on the score** (but give hints to improve the metrics). -### Metrics estimation: Project Lantern +### Metrics estimation (project Lantern) **The metrics estimation (code-named [Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md)) models and simulates browser execution.** Lantern can emulate mobile network and CPU execution. To achieve this it only relies on a performance trace observed without any throttling (hence the fast execution time). @@ -53,7 +53,9 @@ Metrics can be over/underestimated because of: * Even reduced variability is not removing inherent inaccuracies * Use the `pagespeed-score` cli to reduce/identify variability and to investigate inaccuracies -## How metrics are estimated? +## How does Lantern estimate metrics? + +TODO ## `pagespeed-score` cli @@ -85,7 +87,7 @@ max 96 0.9 1.0 1.2 3.5 4.0 Local results will still differ from the PSI API because of local hardware and network variability. 
-### Debugging metrics simulation locally (Lantern) +### Debugging metrics estimation (Lantern) locally `--lantern-debug --save-assets --local` will also save traces for metrics simulations run by Lantern From fc6bdefb0516a1eea28d26bad860c8209105cd00 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 15:54:56 +0100 Subject: [PATCH 12/27] lantern - step 1 --- README.md | 31 ++++++++++++++++++++++------- img/lantern-01-dependency-graph.svg | 1 + 2 files changed, 25 insertions(+), 7 deletions(-) create mode 100644 img/lantern-01-dependency-graph.svg diff --git a/README.md b/README.md index 4d84ed6..3299cb9 100644 --- a/README.md +++ b/README.md @@ -13,7 +13,7 @@ ## Overview -### PageSpeed Insights score = Lighthouse +### PageSpeed Insights score = Lighthouse score The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) score is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). @@ -35,9 +35,13 @@ This is available in the [Lighthouse scoring documentation](https://github.com/G **Other audits have no direct impact on the score** (but give hints to improve the metrics). -### Metrics estimation (project Lantern) +### Metrics are estimated with Lantern -**The metrics estimation (code-named [Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md)) models and simulates browser execution.** Lantern can emulate mobile network and CPU execution. To achieve this it only relies on a performance trace observed without any throttling (hence the fast execution time). +**[Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md) is the part of Lighthouse that estimates metrics.** + +* **Lantern models page activity and simulates browser execution.** +* It can also emulate mobile network and CPU execution based on only a performance trace captured without any throttling. +* (hence the fast execution time). There’s an [accuracy and variability analysis](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#) available. Lantern trades off accuracy but also mitigates certain sources variability. @@ -48,14 +52,27 @@ Metrics can be over/underestimated because of: ### Recommendations for using the score * Even if not 100% accurate **metrics in the red highlight genuine/urgent problems** -* Use the scores to **look for longer term trends and bigger changes** -* Reduce variability by forcing AB tests, doing multiple runs, etc -* Even reduced variability is not removing inherent inaccuracies +* Use the scores to **look for longer term trends and identify big changes** +* Reduce variability by forcing AB test variants, doing multiple runs, etc +* Keep in mind that even with reduced variability some inherent inaccuracies remain * Use the `pagespeed-score` cli to reduce/identify variability and to investigate inaccuracies ## How does Lantern estimate metrics? -TODO +Lantern is an ongoing effort to reduce the run time of Lighthouse and improve audit quality by modeling page activity and simulating browser execution. Metrics are estimated based on: + +* capturing an unthrottled network and CPU trace (usually referred to as observed trace) +* simulating browser execution (with emulated mobile conditions) using relevant parts of the trace + +See detailed breakdown of steps below. + +### 1. 
Create a page dependency graph from the observed (unthrottled) trace +* Lighthouse loads the page without any throttling +* A dependency graph is built based on the network records and the CPU trace +* Any CPU tasks and network requests related to each other are linked up +* See [lighthouse-core/computed/page-dependency-graph.js](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/computed/page-dependency-graph.js) + +![lantern - step 1 - dependency graph](img/lantern-01-dependency-graph.svg) ## `pagespeed-score` cli diff --git a/img/lantern-01-dependency-graph.svg b/img/lantern-01-dependency-graph.svg new file mode 100644 index 0000000..4c71e2b --- /dev/null +++ b/img/lantern-01-dependency-graph.svg @@ -0,0 +1 @@ + \ No newline at end of file From 64d57e03de2b703c17c2f6a9197ef1e9bb5c78ec Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 16:35:15 +0100 Subject: [PATCH 13/27] lantern - step 1 --- README.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 3299cb9..4f66b7e 100644 --- a/README.md +++ b/README.md @@ -72,7 +72,9 @@ See detailed breakdown of steps below. * Any CPU tasks and network requests related to each other are linked up * See [lighthouse-core/computed/page-dependency-graph.js](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/computed/page-dependency-graph.js) -![lantern - step 1 - dependency graph](img/lantern-01-dependency-graph.svg) +> ![lantern - step 1 - dependency graph](img/lantern-01-dependency-graph.svg) + +(via [Project Lantern Overview - slide 7](patrickhulce) by @patrickhulce) ## `pagespeed-score` cli From 565522be8e72a120ac6565c28046172529ec52e7 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 16:45:38 +0100 Subject: [PATCH 14/27] fix link to lantern slide --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 4f66b7e..1163ac1 100644 --- a/README.md +++ b/README.md @@ -74,7 +74,7 @@ See detailed breakdown of steps below. > ![lantern - step 1 - dependency graph](img/lantern-01-dependency-graph.svg) -(via [Project Lantern Overview - slide 7](patrickhulce) by @patrickhulce) +(via [Project Lantern Overview - slide 7](https://docs.google.com/presentation/d/1EsuNICCm6uhrR2PLNaI5hNkJ-q-8Mv592kwHmnf4c6U/edit?zx=ksqkx77n311n#slide=id.g2ab7b9a053_0_467) by [@patrickhulce](https://github.com/patrickhulce)) ## `pagespeed-score` cli From d38e9123696a80de1f3d910066772666adf73eb3 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 16:57:30 +0100 Subject: [PATCH 15/27] re-order + ToC refresh --- README.md | 50 ++++++++++++++++++++++++++------------------------ 1 file changed, 26 insertions(+), 24 deletions(-) diff --git a/README.md b/README.md index 1163ac1..1a8f6f6 100644 --- a/README.md +++ b/README.md @@ -1,15 +1,16 @@ # What's in the Google PageSpeed score? 
- [Overview](#overview) - * [PageSpeed Insights score = Lighthouse](#pagespeed-insights-score--lighthouse) + * [PageSpeed Insights score = Lighthouse score](#pagespeed-insights-score--lighthouse-score) * [The 5 metrics that affect the score](#the-5-metrics-that-affect-the-score) - * [Metrics estimation: Lantern](#metrics-estimation-lantern) + * [Metrics are estimated with Lantern](#metrics-are-estimated-with-lantern) * [Recommendations for using the score](#recommendations-for-using-the-score) -- [How metrics are estimated?](#how-metrics-are-estimated) - [`pagespeed-score` cli](#pagespeed-score-cli) * [Local mode](#local-mode) - * [Debugging metrics simulation locally (Lantern)](#debugging-metrics-simulation-locally-lantern) + * [Debugging metrics estimation (Lantern) locally](#debugging-metrics-estimation-lantern-locally) * [All options](#all-options) +- [How does Lantern estimate metrics?](#how-does-lantern-estimate-metrics) + * [1. Create a page dependency graph from the observed (unthrottled) trace](#1-create-a-page-dependency-graph-from-the-observed-unthrottled-trace) ## Overview @@ -57,25 +58,6 @@ Metrics can be over/underestimated because of: * Keep in mind that even with reduced variability some inherent inaccuracies remain * Use the `pagespeed-score` cli to reduce/identify variability and to investigate inaccuracies -## How does Lantern estimate metrics? - -Lantern is an ongoing effort to reduce the run time of Lighthouse and improve audit quality by modeling page activity and simulating browser execution. Metrics are estimated based on: - -* capturing an unthrottled network and CPU trace (usually referred to as observed trace) -* simulating browser execution (with emulated mobile conditions) using relevant parts of the trace - -See detailed breakdown of steps below. - -### 1. Create a page dependency graph from the observed (unthrottled) trace -* Lighthouse loads the page without any throttling -* A dependency graph is built based on the network records and the CPU trace -* Any CPU tasks and network requests related to each other are linked up -* See [lighthouse-core/computed/page-dependency-graph.js](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/computed/page-dependency-graph.js) - -> ![lantern - step 1 - dependency graph](img/lantern-01-dependency-graph.svg) - -(via [Project Lantern Overview - slide 7](https://docs.google.com/presentation/d/1EsuNICCm6uhrR2PLNaI5hNkJ-q-8Mv592kwHmnf4c6U/edit?zx=ksqkx77n311n#slide=id.g2ab7b9a053_0_467) by [@patrickhulce](https://github.com/patrickhulce)) - ## `pagespeed-score` cli [![Build Status](https://travis-ci.org/csabapalfi/pagespeed-score.svg?branch=master)](https://travis-ci.org/csabapalfi/pagespeed-score/) @@ -168,4 +150,24 @@ Lighthouse: * `--jsonl` outputs results (and statistics) as [JSON Lines](http://jsonlines.org/) instead of TSV -* `--save-assets` saves a report for each run \ No newline at end of file +* `--save-assets` saves a report for each run + +## How does Lantern estimate metrics? + +Lantern is an ongoing effort to reduce the run time of Lighthouse and improve audit quality by modeling page activity and simulating browser execution. Metrics are estimated based on: + +* capturing an unthrottled network and CPU trace (usually referred to as observed trace) +* simulating browser execution (with emulated mobile conditions) using relevant parts of the trace + +See detailed breakdown of steps below. + +### 1. 
Create a page dependency graph + +* Lighthouse loads the page without any throttling +* A dependency graph is built based on the network records and the CPU trace +* Any CPU tasks and network requests related to each other are linked up +* See [lighthouse-core/computed/page-dependency-graph.js](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/computed/page-dependency-graph.js) + +> ![lantern - step 1 - dependency graph](img/lantern-01-dependency-graph.svg) + +(via [Project Lantern Overview - slide 7](https://docs.google.com/presentation/d/1EsuNICCm6uhrR2PLNaI5hNkJ-q-8Mv592kwHmnf4c6U/edit?zx=ksqkx77n311n#slide=id.g2ab7b9a053_0_467) by [@patrickhulce](https://github.com/patrickhulce)) \ No newline at end of file From 7ba8b7e46fad9bc1148bce7fa50e9f21f9097366 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 17:17:55 +0100 Subject: [PATCH 16/27] formatting --- README.md | 12 ++---------- 1 file changed, 2 insertions(+), 10 deletions(-) diff --git a/README.md b/README.md index 1a8f6f6..980155a 100644 --- a/README.md +++ b/README.md @@ -16,11 +16,7 @@ ### PageSpeed Insights score = Lighthouse score -The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) score is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). - -**Lighthouse calculates the performance score on the scale of 0-100 based on 5 estimated metrics.** - -The score of 90-100 is fast, 50-89 is average and 0-49 is slow. +The [Google PageSpeed Insights (PSI)](https://developers.google.com/speed/pagespeed/insights/) score is based on [Google Lighthouse (LH)](https://developers.google.com/web/tools/lighthouse/). **Lighthouse calculates the performance score on the scale of 0-100 based on 5 estimated metrics.** The score of 90-100 is fast, 50-89 is average and 0-49 is slow. ### The 5 metrics that affect the score @@ -38,11 +34,7 @@ This is available in the [Lighthouse scoring documentation](https://github.com/G ### Metrics are estimated with Lantern -**[Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md) is the part of Lighthouse that estimates metrics.** - -* **Lantern models page activity and simulates browser execution.** -* It can also emulate mobile network and CPU execution based on only a performance trace captured without any throttling. -* (hence the fast execution time). +**[Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md) is the part of Lighthouse that estimates metrics. Lantern models page activity and simulates browser execution.** It can also emulate mobile network and CPU execution based on only a performance trace captured without any throttling (hence the fast execution time). There’s an [accuracy and variability analysis](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#) available. Lantern trades off accuracy but also mitigates certain sources variability. 
From f712f23decfdfec13a3749a7daca8b55753d4442 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 18:39:31 +0100 Subject: [PATCH 17/27] more content --- README.md | 111 +++++++++++++++++++++++++++++++++--------------------- 1 file changed, 69 insertions(+), 42 deletions(-) diff --git a/README.md b/README.md index 980155a..24a44ec 100644 --- a/README.md +++ b/README.md @@ -8,9 +8,20 @@ - [`pagespeed-score` cli](#pagespeed-score-cli) * [Local mode](#local-mode) * [Debugging metrics estimation (Lantern) locally](#debugging-metrics-estimation-lantern-locally) - * [All options](#all-options) +- [Identifying inaccuracies](#identifying-inaccuracies) + * [Debug metrics estimation locally](#debug-metrics-estimation-locally) +- [Reducing variability](#reducing-variability) + * [Multiple runs](#multiple-runs) + * [Force AB tests variants](#force-ab-tests-variants) + * [Feature flags to turn off e.g. third party scripts](#feature-flags-to-turn-off-eg-third-party-scripts) +- [Identifying sources of variability](#identifying-sources-of-variability) + * [Benchmark Index](#benchmark-index) + * [Time to First Byte](#time-to-first-byte) + * [User Timing marks and measures](#user-timing-marks-and-measures) - [How does Lantern estimate metrics?](#how-does-lantern-estimate-metrics) - * [1. Create a page dependency graph from the observed (unthrottled) trace](#1-create-a-page-dependency-graph-from-the-observed-unthrottled-trace) + * [1. Create a page dependency graph](#1-create-a-page-dependency-graph) + * [2. Create subgraph for each metric](#2-create-subgraph-for-each-metric) + * [3. Simulate subgraph with emulated mobile conditions](#3-simulate-subgraph-with-emulated-mobile-conditions) ## Overview @@ -70,6 +81,16 @@ min 95 0.9 1.0 1.0 3.1 3.7 max 96 0.9 1.0 1.2 3.5 4.0 ``` +* `--help` see the list of all options + +* `--runs ` overrides the number of runs (default: 1). For more than 1 runs stats will be calculated. + +* `--warmup-runs ` add warmup runs that are excluded from stats (e.g. to allow CDN or other caches to warm up) + +* `--jsonl` outputs results (and statistics) as [JSON Lines](http://jsonlines.org/) instead of TSV + +* `--save-assets` saves a report for each run + ### Local mode `--local` switches to running Lighthouse locally instead of calling the PSI API. This can be useful for non-public URLs (e.g. staging environment on a private network). To ensure the local results are close to the PSI API results this module: @@ -93,56 +114,58 @@ You can open any of these traces in the Chrome Devtools Performance tab. See also [lighthouse#5844 Better visualization of Lantern simulation](https://github.com/GoogleChrome/lighthouse/issues/5844). +## Identifying inaccuracies -### All options - +### Debug metrics estimation locally +See lighthouse#5844. 
In short run lighthouse cli with the following options: +```sh +LANTERN_DEBUG=true npx lighthouse ``` -pagespeed-score - -Runs: - --runs Number of runs [number] [default: 1] - --warmup-runs Number of warmup runs [number] [default: 0] - -Additional metrics: - --usertiming-marks, User Timing marks - --metrics.userTimingMarks [default: {}] - --ttfb, --metrics.ttfb TTFB [boolean] [default: false] - --benchmark, --metrics.benchmark Benchmark index - [boolean] [default: false] - -Output: - --jsonl, --output.jsonl Output as JSON Lines - [boolean] [default: false] - --save-assets, --output.saveAssets Save reports and traces - [boolean] [default: false] - --file-prefix, --output.filePrefix Saved asset file prefix - [string] [default: ""] - --lantern-debug, --output.lanternDebug Save Lantern traces - [boolean] [default: false] - -Lighthouse: - --local, --lighthouse.enabled Switch to local Lighthouse - [boolean] [default: false] - --lighthouse-path, Lighthouse module path - --lighthouse.modulePath [string] [default: "lighthouse"] - --cpu-slowdown, --lighthouse.cpuSlowDown CPU slowdown multiplier - [number] [default: 4] +You can also use the `pagespeed-score` node module to ensure you’re inline with PSI: +* Lighthouse version (5.0.0 as of 9 May 2019) +* Lighthouse config (lr-mobile-config.js) +* same Chrome version (75 as of 9 May 2019) by specifying CHROME_PATH + +```sh +CHROME_PATH="/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary" \ +npx pagespeed-score --local --save-assets --lantern-debug "" ``` -* `--runs ` overrides the number of runs (default: 1). For more than 1 runs stats will be calculated. +The pagespeed-score module also allows debugging a custom Lighthouse version using the --lighthouse-path option (i.e. to test/debug Lantern code changes or upcoming versions). -* `--warmup-runs ` add warmup runs that are excluded from stats (e.g. to allow CDN or other caches to warm up) +## Reducing variability -* `--usertiming-marks.=` adds any User Timing mark named to your metrics with the name `alias` (e.g. `--usertiming-marks.DPA=datepicker.active`) +### Multiple runs -* `--ttfb` adds [Time to First Byte](https://developers.google.com/web/tools/lighthouse/audits/ttfb) to your metrics - can help identifying if a run was affected by your server response time variability +Test multiple times and take the median (or more/better statistics) of the score to reduce the impact of outliers (independent of what’s causing this variability). Use the pagespeed-score cli: +`npx pagespeed-score --runs 9 ""` -* `--benchmark` adds the Lighthouse CPU/memory power [benchmarkIndex](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/lib/page-functions.js#L128-L154) to your metrics - can help identifying if a run was affected by Google server-side variability or resource contention +### Force AB tests variants -* `--jsonl` outputs results (and statistics) as [JSON Lines](http://jsonlines.org/) instead of TSV +By making sure we always test the same variants of any AB tests running on the page we can ensure they don’t introduce Page Nondeterminism. -* `--save-assets` saves a report for each run +### Feature flags to turn off e.g. third party scripts + +Sometimes variability is introduced by third party scripts or certain features on the page. As a last resort adding a flag to turn these off can help getting a more stable score. 
Ensure not to exclusively rely on the score and metrics captured like this as real users will still experience your page with all of these ‘features’ on. + + +## Identifying sources of variability + +The pagespeed-score cli has a number of options to output additional data not directly taken into account for score calculation but can help in identifying various sources of variability. E.g. +`npx pagespeed-score --benchmark --ttfb --usertiming-mark.= ""` + +### Benchmark Index + +Lighthouse computes a memory/CPU performance benchmark index to determine rough device class. Variability in this can help identifying Client Hardware Variability or Client Resource Contention. These are less likely to occur with PSI that uses a highly controlled lab environment but can affect local Lighthouse runs more. + +### Time to First Byte + +Time to First Byte (TTFB) has a very limited impact on the score but can be useful indicator of Web Server Variability. Please note that TTFB is not estimated by Lantern but based on the observed/fast trace. + +### User Timing marks and measures + +We use a number of User Timing marks and high variability in these can mean you have Page Nondeterminism or other sources variability. Please note these are not estimated by Lantern but based on the observed/fast trace. ## How does Lantern estimate metrics? @@ -162,4 +185,8 @@ See detailed breakdown of steps below. > ![lantern - step 1 - dependency graph](img/lantern-01-dependency-graph.svg) -(via [Project Lantern Overview - slide 7](https://docs.google.com/presentation/d/1EsuNICCm6uhrR2PLNaI5hNkJ-q-8Mv592kwHmnf4c6U/edit?zx=ksqkx77n311n#slide=id.g2ab7b9a053_0_467) by [@patrickhulce](https://github.com/patrickhulce)) \ No newline at end of file +(via [Project Lantern Overview - slide 7](https://docs.google.com/presentation/d/1EsuNICCm6uhrR2PLNaI5hNkJ-q-8Mv592kwHmnf4c6U/edit?zx=ksqkx77n311n#slide=id.g2ab7b9a053_0_467) by [@patrickhulce](https://github.com/patrickhulce)) + +### 2. Create subgraph for each metric + +### 3. Simulate subgraph with emulated mobile conditions \ No newline at end of file From 21c20daa0104476d4b16b02efa6b26e200099d23 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 22:04:16 +0100 Subject: [PATCH 18/27] document all lantern steps --- README.md | 23 ++++++++++++++++--- ...aph.svg => lantern-1-dependency-graph.svg} | 0 img/lantern-2-create-subgraphs.svg | 1 + img/lantern-3-simulate-subgraphs.svg | 1 + 4 files changed, 22 insertions(+), 3 deletions(-) rename img/{lantern-01-dependency-graph.svg => lantern-1-dependency-graph.svg} (100%) create mode 100644 img/lantern-2-create-subgraphs.svg create mode 100644 img/lantern-3-simulate-subgraphs.svg diff --git a/README.md b/README.md index 24a44ec..3ab699a 100644 --- a/README.md +++ b/README.md @@ -21,7 +21,7 @@ - [How does Lantern estimate metrics?](#how-does-lantern-estimate-metrics) * [1. Create a page dependency graph](#1-create-a-page-dependency-graph) * [2. Create subgraph for each metric](#2-create-subgraph-for-each-metric) - * [3. Simulate subgraph with emulated mobile conditions](#3-simulate-subgraph-with-emulated-mobile-conditions) + * [3. Simulate subgraphs with emulated mobile conditions](#3-simulate-subgraph-with-emulated-mobile-conditions) ## Overview @@ -183,10 +183,27 @@ See detailed breakdown of steps below. 
* Any CPU tasks and network requests related to each other are linked up * See [lighthouse-core/computed/page-dependency-graph.js](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/computed/page-dependency-graph.js) -> ![lantern - step 1 - dependency graph](img/lantern-01-dependency-graph.svg) +> ![lantern - step 1 - dependency graph](img/lantern-1-dependency-graph.svg) (via [Project Lantern Overview - slide 7](https://docs.google.com/presentation/d/1EsuNICCm6uhrR2PLNaI5hNkJ-q-8Mv592kwHmnf4c6U/edit?zx=ksqkx77n311n#slide=id.g2ab7b9a053_0_467) by [@patrickhulce](https://github.com/patrickhulce)) ### 2. Create subgraph for each metric -### 3. Simulate subgraph with emulated mobile conditions \ No newline at end of file +* CPU and network nodes are filtered to create a subgraph with only the nodes contributing to the delay of a specific metric +* e.g. based on the comparing node end timestamps with observed (unthrottled) metric timestamps +* See [lighthouse-core/computed/metrics/lantern-*](https://github.com/GoogleChrome/lighthouse/tree/master/lighthouse-core/computed/metrics) + +> ![lantern - step 2 - create subgraphs](img/lantern-2-create-subgraphs.svg) + +(via [Project Lantern Overview - slide 8](https://docs.google.com/presentation/d/1EsuNICCm6uhrR2PLNaI5hNkJ-q-8Mv592kwHmnf4c6U/edit?zx=ksqkx77n311n#slide=id.g2ab7b9a053_0_503) by [@patrickhulce](https://github.com/patrickhulce)) + +### 3. Simulate subgraph with emulated mobile conditions + +* Simulate browser execution for each metric subgraph +* DNS caching, TCP slow start, Connection pooling, and lots more implemented... +* See [lighthouse-core/lib/dependency-graph/simulator/simulator.js](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/lib/dependency-graph/simulator/simulator.js) + +> ![lantern - step 3 - simulate subgraphs](img/lantern-3-simulate-subgraphs.svg) + +(via [Project Lantern Overview - slide 9](https://docs.google.com/presentation/d/1EsuNICCm6uhrR2PLNaI5hNkJ-q-8Mv592kwHmnf4c6U/edit?zx=ksqkx77n311n#slide=id.g2ab7b9a053_0_845) by [@patrickhulce](https://github.com/patrickhulce)) + diff --git a/img/lantern-01-dependency-graph.svg b/img/lantern-1-dependency-graph.svg similarity index 100% rename from img/lantern-01-dependency-graph.svg rename to img/lantern-1-dependency-graph.svg diff --git a/img/lantern-2-create-subgraphs.svg b/img/lantern-2-create-subgraphs.svg new file mode 100644 index 0000000..c31fa60 --- /dev/null +++ b/img/lantern-2-create-subgraphs.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/img/lantern-3-simulate-subgraphs.svg b/img/lantern-3-simulate-subgraphs.svg new file mode 100644 index 0000000..8a662fd --- /dev/null +++ b/img/lantern-3-simulate-subgraphs.svg @@ -0,0 +1 @@ + \ No newline at end of file From 00600a0fe8df43b8761c5d025ca3ad97d3940025 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 22:12:07 +0100 Subject: [PATCH 19/27] formatting --- README.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/README.md b/README.md index 3ab699a..6a6babf 100644 --- a/README.md +++ b/README.md @@ -47,9 +47,7 @@ This is available in the [Lighthouse scoring documentation](https://github.com/G **[Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md) is the part of Lighthouse that estimates metrics. 
Lantern models page activity and simulates browser execution.** It can also emulate mobile network and CPU execution based on only a performance trace captured without any throttling (hence the fast execution time). -There’s an [accuracy and variability analysis](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#) available. Lantern trades off accuracy but also mitigates certain sources variability. - -Metrics can be over/underestimated because of: +There’s an [accuracy and variability analysis](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#) available. Lantern trades off accuracy but also mitigates certain sources variability. Metrics can be over/underestimated because of: * differences in the unthrottled trace vs real device/throttling * details ignored or simplified to make the simulation workable From cf691387ae9937913871386204b352de0c4478d6 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 22:20:47 +0100 Subject: [PATCH 20/27] add links for metrics --- README.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index 6a6babf..1d86c6d 100644 --- a/README.md +++ b/README.md @@ -35,11 +35,11 @@ This is available in the [Lighthouse scoring documentation](https://github.com/G | Estimated Metric | Weight | 90 | 50 | Description | |:----------------------------|:------:|:----:|:----:|-------------| -| First Contentful Paint (FCP)| 3 | 2.4s | 4.0s | when the first text or image content is painted | -| First Meaningful Paint (FMP)| 1 | 2.4s | 4.0s | when the primary content of a page is visible | -| Speed Index (SI) | 4 | 3.4s | 5.8s | how quickly the contents of a page are visibly populated | -| First CPU Idle (FCI) | 2 | 3.6s | 6.5s | when the main thread is first quiet enough to handle input | -| Time to Interactive (TTI) | 5 | 3.8s | 7.3s | when the main thread and network is quiet for at least 5s | +| [First Contentful Paint (FCP)](https://github.com/csabapalfi/awesome-web-performance-metrics#first-contentful-paint-fcp) | 3 | 2.4s | 4.0s | when the first text or image content is painted | +| [First Meaningful Paint (FMP)](https://github.com/csabapalfi/awesome-web-performance-metrics#first-meaningful-paint-fmp) | 1 | 2.4s | 4.0s | when the primary content of a page is visible | +| [Speed Index (SI)](https://github.com/csabapalfi/awesome-web-performance-metrics#speed-index) | 4 | 3.4s | 5.8s | how quickly the contents of a page are visibly populated | +| [First CPU Idle (FCI)](https://github.com/csabapalfi/awesome-web-performance-metrics#first-cpu-idle) | 2 | 3.6s | 6.5s | when the main thread is first quiet enough to handle input | +| [Time to Interactive (TTI)](https://github.com/csabapalfi/awesome-web-performance-metrics#time-to-interactive-tti) | 5 | 3.8s | 7.3s | when the main thread and network is quiet for at least 5s | **Other audits have no direct impact on the score** (but give hints to improve the metrics). 
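
As a rough mental model of the three Lantern steps documented earlier (and only that, not Lantern's actual simulator), a metric estimate can be read as the finish time of the last node in that metric's subgraph when the dependency graph is replayed under assumed mobile conditions. Every node name, byte size and throttling number below is made up for illustration.

```
// Toy Lantern-style estimation (illustration only):
// each request waits for its dependencies, then costs one round trip plus
// transfer time under assumed mobile conditions; a metric estimate is the
// latest finish time among the nodes in that metric's subgraph.
const MOBILE = {rttMs: 150, bytesPerMs: 200}; // made-up throttling values

function simulate(graph) { // graph: id -> {deps: [ids], bytes}
  const finish = new Map();
  const endTime = (id) => {
    if (finish.has(id)) return finish.get(id);
    const node = graph[id];
    const start = Math.max(0, ...node.deps.map(endTime));
    const end = start + MOBILE.rttMs + node.bytes / MOBILE.bytesPerMs;
    finish.set(id, end);
    return end;
  };
  Object.keys(graph).forEach((id) => endTime(id));
  return finish;
}

// hypothetical page: HTML, then CSS and JS, then a hero image
const graph = {
  html: {deps: [], bytes: 30e3},
  css:  {deps: ['html'], bytes: 50e3},
  js:   {deps: ['html'], bytes: 300e3},
  hero: {deps: ['css'], bytes: 200e3},
};

const finish = simulate(graph);
const estimate = (ids) => Math.max(...ids.map((id) => finish.get(id)));
console.log('FCP-ish estimate (ms):', estimate(['html', 'css'])); // render-blocking subgraph
console.log('TTI-ish estimate (ms):', estimate(Object.keys(graph))); // whole graph
```

The real simulator is far more detailed: it models DNS caching, TCP slow start, connection pooling and CPU tasks, as noted in the step 3 description above.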
From dd730f81f646d750322c17d1748ba9e44fd0d622 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 22:22:35 +0100 Subject: [PATCH 21/27] rewording --- README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 1d86c6d..2e12904 100644 --- a/README.md +++ b/README.md @@ -45,9 +45,10 @@ This is available in the [Lighthouse scoring documentation](https://github.com/G ### Metrics are estimated with Lantern -**[Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md) is the part of Lighthouse that estimates metrics. Lantern models page activity and simulates browser execution.** It can also emulate mobile network and CPU execution based on only a performance trace captured without any throttling (hence the fast execution time). +**[Lantern](https://github.com/GoogleChrome/lighthouse/blob/master/docs/lantern.md) is the part of Lighthouse that estimates metrics. Lantern models page activity and simulates browser execution.** It can also emulate mobile network and CPU execution. The input data for the simulation is a performance trace captured without any throttling (hence the fast execution time). There’s an [accuracy and variability analysis](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#) available. Lantern trades off accuracy but also mitigates certain sources variability. Metrics can be over/underestimated because of: + * differences in the unthrottled trace vs real device/throttling * details ignored or simplified to make the simulation workable From 99b10b5602aff4f234a2ea1dc00739eb91276a61 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 23:41:21 +0100 Subject: [PATCH 22/27] add back identifying inaccuracies --- README.md | 141 +++++++++++++++++++++++++++++++----------------------- 1 file changed, 80 insertions(+), 61 deletions(-) diff --git a/README.md b/README.md index 2e12904..763ce3a 100644 --- a/README.md +++ b/README.md @@ -7,9 +7,6 @@ * [Recommendations for using the score](#recommendations-for-using-the-score) - [`pagespeed-score` cli](#pagespeed-score-cli) * [Local mode](#local-mode) - * [Debugging metrics estimation (Lantern) locally](#debugging-metrics-estimation-lantern-locally) -- [Identifying inaccuracies](#identifying-inaccuracies) - * [Debug metrics estimation locally](#debug-metrics-estimation-locally) - [Reducing variability](#reducing-variability) * [Multiple runs](#multiple-runs) * [Force AB tests variants](#force-ab-tests-variants) @@ -18,10 +15,12 @@ * [Benchmark Index](#benchmark-index) * [Time to First Byte](#time-to-first-byte) * [User Timing marks and measures](#user-timing-marks-and-measures) +- [Identifying inaccuracies](#identifying-inaccuracies) + * [Debug Lantern metrics estimation locally](#debug-lantern-metrics-estimation-locally) - [How does Lantern estimate metrics?](#how-does-lantern-estimate-metrics) * [1. Create a page dependency graph](#1-create-a-page-dependency-graph) * [2. Create subgraph for each metric](#2-create-subgraph-for-each-metric) - * [3. Simulate subgraphs with emulated mobile conditions](#3-simulate-subgraph-with-emulated-mobile-conditions) + * [3. 
Simulate subgraph with emulated mobile conditions](#3-simulate-subgraph-with-emulated-mobile-conditions) ## Overview @@ -62,11 +61,47 @@ There’s an [accuracy and variability analysis](https://docs.google.com/documen ## `pagespeed-score` cli -[![Build Status](https://travis-ci.org/csabapalfi/pagespeed-score.svg?branch=master)](https://travis-ci.org/csabapalfi/pagespeed-score/) -[![Coverage Status](https://coveralls.io/repos/github/csabapalfi/pagespeed-score/badge.svg?2)](https://coveralls.io/github/csabapalfi/pagespeed-score) - Command line toolkit to get a speed score and metrics via the Google PageSpeed Insights API or a local Lighthouse run. +``` +$ npx pagespeed-score https://www.google.com +name score FCP FMP SI FCI TTI +run 1 96 1.2 1.2 1.2 3.3 3.7 +``` + +Use `--help` see the list of all options: + +```shell +$ npx pagespeed-score --help +# soo many options it won't fit here +``` + +### Local mode + +`--local` switches to running Lighthouse locally instead of calling the PSI API. This can be useful for non-public URLs (e.g. staging environment on a private network). To ensure the local results are close to the PSI API results this module: + + * uses the same version of LightHouse as PSI (5.0.0 as of 9 May 2019) + * uses the [LightRider mobile config](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/config/lr-mobile-config.js) + * allows throttling of CPU with `--cpu-slowdown` (default 4x). Please note that PSI infrastructure already runs on a slower CPU (that's like a mobile device) hence the need to slow our laptops CPU down for local runs. + * you can also use the same Chrome version as PSI (75 as of 9 May 2019) by specifying CHROME_PATH + +```sh +CHROME_PATH="/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary" \ +npx pagespeed-score --local "" +``` + +Local results will still differ from the PSI API because of local hardware and network variability. + +## Reducing variability + +### Multiple runs + +Test multiple times and take the median (or more/better statistics) of the score to reduce the impact of outliers (independent of what’s causing this variability). + +You can use the `pagespeed-score` cli: + +* `--runs ` overrides the number of runs (default: 1). For more than 1 runs stats will be calculated. + ``` $ npx pagespeed-score --runs 3 https://www.google.com name score FCP FMP SI FCI TTI @@ -80,91 +115,75 @@ min 95 0.9 1.0 1.0 3.1 3.7 max 96 0.9 1.0 1.2 3.5 4.0 ``` -* `--help` see the list of all options - -* `--runs ` overrides the number of runs (default: 1). For more than 1 runs stats will be calculated. - * `--warmup-runs ` add warmup runs that are excluded from stats (e.g. to allow CDN or other caches to warm up) -* `--jsonl` outputs results (and statistics) as [JSON Lines](http://jsonlines.org/) instead of TSV +### Force AB tests variants -* `--save-assets` saves a report for each run +By making sure we always test the same variants of any AB tests running on the page we can ensure they don’t introduce [Page Nondeterminism](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.js7k0ib0mzzv). -### Local mode +### Feature flags to turn off e.g. third party scripts -`--local` switches to running Lighthouse locally instead of calling the PSI API. This can be useful for non-public URLs (e.g. staging environment on a private network). 
To ensure the local results are close to the PSI API results this module: +Sometimes variability is introduced by third party scripts or certain features on the page. As a last resort adding a flag to turn these off can help getting a more stable score. Ensure not to exclusively rely on the score and metrics captured like this as real users will still experience your page with all of these ‘features’ on. + + +## Identifying sources of variability - * uses the same version of LightHouse as PSI - * uses the [LightRider mobile config](https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/config/lr-mobile-config.js) - * allows throttling of CPU with `--cpu-slowdown` (default 4x) +You can look at additional datapoints not directly taken into account for score calculation that can help in identifying sources of variability. -Local results will still differ from the PSI API because of local hardware and network variability. +### Benchmark Index -### Debugging metrics estimation (Lantern) locally +Lighthouse computes a memory/CPU performance [benchmark index]((https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/lib/page-functions.js#L128-L154)) to determine rough device class. -`--lantern-debug --save-assets --local` will also save traces for metrics simulations run by Lantern +Variability in this can help identifying [Client Hardware Variability](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.km3f9ebrlnmi) or [Client Resource Contention](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.9gqujdsfrbou). -``` -$ npx pagespeed-score \ ---local --lantern-debug --save-assets https://www.google.com -``` +These are less likely to occur with PSI that uses a highly controlled lab environment and can affect local Lighthouse runs more. -You can open any of these traces in the Chrome Devtools Performance tab. +You can use the `pagespeed-score` cli to monitor this: -See also [lighthouse#5844 Better visualization of Lantern simulation](https://github.com/GoogleChrome/lighthouse/issues/5844). +* ` --benchmark` adds the benchmark index as a metric for each test run -## Identifying inaccuracies +### Time to First Byte -### Debug metrics estimation locally -See lighthouse#5844. In short run lighthouse cli with the following options: -```sh -LANTERN_DEBUG=true npx lighthouse -``` +Time to First Byte (TTFB) has a very limited impact on the score but can be useful indicator of [Web Server Variability](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.6rnl1clafpqn). -You can also use the `pagespeed-score` node module to ensure you’re inline with PSI: -* Lighthouse version (5.0.0 as of 9 May 2019) -* Lighthouse config (lr-mobile-config.js) -* same Chrome version (75 as of 9 May 2019) by specifying CHROME_PATH +Please note that TTFB is not estimated by Lantern but based on the observed/fast trace. -```sh -CHROME_PATH="/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary" \ -npx pagespeed-score --local --save-assets --lantern-debug "" -``` +You can use the `pagespeed-score` cli to monitor this: -The pagespeed-score module also allows debugging a custom Lighthouse version using the --lighthouse-path option (i.e. to test/debug Lantern code changes or upcoming versions). 
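If you want to poke at the raw Lighthouse result rather than go through a cli (for example while testing a custom Lighthouse build), the same setup can be approximated with Lighthouse's node API. A minimal sketch, assuming `lighthouse` and `chrome-launcher` are installed and that the LightRider mobile config can be required from the package at the path linked above (an assumption, not something verified here):

```js
// Minimal sketch: run Lighthouse from node with the LightRider mobile config
// that PSI uses. The config require path below is an assumption based on the
// file linked earlier in this document.
const chromeLauncher = require('chrome-launcher');
const lighthouse = require('lighthouse');
const lrMobileConfig = require('lighthouse/lighthouse-core/config/lr-mobile-config.js');

(async () => {
  const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
  try {
    const {lhr} = await lighthouse('https://www.google.com', {port: chrome.port}, lrMobileConfig);
    // lhr.categories.performance.score is 0-1, e.g. 0.96 for a score of 96.
    console.log('performance score:', Math.round(lhr.categories.performance.score * 100));
  } finally {
    await chrome.kill();
  }
})();
```

This is roughly what `--local` automates (plus the CPU slowdown and pinning the Chrome binary via CHROME_PATH), so the cli is usually the easier option.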
+* ` --ttfb` adds TTFB as a metric for each test run -## Reducing variability +### User Timing marks and measures -### Multiple runs +We use a number of User Timing marks and high variability in these can mean you have [Page Nondeterminism](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.js7k0ib0mzzv) or other sources variability. -Test multiple times and take the median (or more/better statistics) of the score to reduce the impact of outliers (independent of what’s causing this variability). Use the pagespeed-score cli: -`npx pagespeed-score --runs 9 ""` +Please note user timing marks are not estimated by Lantern but based on the observed/fast trace. -### Force AB tests variants +You can use the `pagespeed-score` cli to monitor them: -By making sure we always test the same variants of any AB tests running on the page we can ensure they don’t introduce Page Nondeterminism. +* `--usertiming-marks.=` adds any User Timing mark named to your metrics with the name `alias` (e.g. `--usertiming-marks.DPA=datepicker.active`) -### Feature flags to turn off e.g. third party scripts +## Identifying inaccuracies -Sometimes variability is introduced by third party scripts or certain features on the page. As a last resort adding a flag to turn these off can help getting a more stable score. Ensure not to exclusively rely on the score and metrics captured like this as real users will still experience your page with all of these ‘features’ on. - - -## Identifying sources of variability +### Debug Lantern metrics estimation locally -The pagespeed-score cli has a number of options to output additional data not directly taken into account for score calculation but can help in identifying various sources of variability. E.g. -`npx pagespeed-score --benchmark --ttfb --usertiming-mark.= ""` +Read [How does Lantern estimate metrics?](#how-does-lantern-estimate-metrics) first to have a better understanding of the details. -### Benchmark Index +In case you want to understand why Lantern estimated a metric the way it did you can make Lighthouse save the traces resulting from the simulations: -Lighthouse computes a memory/CPU performance benchmark index to determine rough device class. Variability in this can help identifying Client Hardware Variability or Client Resource Contention. These are less likely to occur with PSI that uses a highly controlled lab environment but can affect local Lighthouse runs more. +```sh +LANTERN_DEBUG=true npx lighthouse --save-assets +``` -### Time to First Byte +Use the Chrome Devtools Performance tab to open the traces. -Time to First Byte (TTFB) has a very limited impact on the score but can be useful indicator of Web Server Variability. Please note that TTFB is not estimated by Lantern but based on the observed/fast trace. +Subscribe to [lighthouse#5844](https://github.com/GoogleChrome/lighthouse/issues/5844) for future updates on this. -### User Timing marks and measures +You can also use the `pagespeed-score` cli in [local mode](#local-mode) that has builtin support for this and also ensures that your lighthouse setup is as close to PSI as possible: -We use a number of User Timing marks and high variability in these can mean you have Page Nondeterminism or other sources variability. Please note these are not estimated by Lantern but based on the observed/fast trace. 
+```sh +CHROME_PATH="/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary" \ +npx pagespeed-score --local --save-assets --lantern-debug "" +``` ## How does Lantern estimate metrics? From 02b425615c677e1111db85fb6acac809c5b07f51 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 23:45:28 +0100 Subject: [PATCH 23/27] formatting --- README.md | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index 763ce3a..78b20f9 100644 --- a/README.md +++ b/README.md @@ -61,7 +61,7 @@ There’s an [accuracy and variability analysis](https://docs.google.com/documen ## `pagespeed-score` cli -Command line toolkit to get a speed score and metrics via the Google PageSpeed Insights API or a local Lighthouse run. +`pagespeed-score` is a module contained in this repe. It's a command line toolkit to get a speed score and metrics via the Google PageSpeed Insights API or a local Lighthouse run. ``` $ npx pagespeed-score https://www.google.com @@ -132,11 +132,7 @@ You can look at additional datapoints not directly taken into account for score ### Benchmark Index -Lighthouse computes a memory/CPU performance [benchmark index]((https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/lib/page-functions.js#L128-L154)) to determine rough device class. - -Variability in this can help identifying [Client Hardware Variability](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.km3f9ebrlnmi) or [Client Resource Contention](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.9gqujdsfrbou). - -These are less likely to occur with PSI that uses a highly controlled lab environment and can affect local Lighthouse runs more. +Lighthouse computes a memory/CPU performance [benchmark index]((https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/lib/page-functions.js#L128-L154)) to determine rough device class. Variability in this can help identifying [Client Hardware Variability](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.km3f9ebrlnmi) or [Client Resource Contention](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.9gqujdsfrbou). These are less likely to occur with PSI that uses a highly controlled lab environment and can affect local Lighthouse runs more. You can use the `pagespeed-score` cli to monitor this: From 317d8ae44bcd1e31aaf3dd566cc3c1ca46a2ca6f Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 23:47:29 +0100 Subject: [PATCH 24/27] formatting --- README.md | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index 78b20f9..6398a55 100644 --- a/README.md +++ b/README.md @@ -140,9 +140,7 @@ You can use the `pagespeed-score` cli to monitor this: ### Time to First Byte -Time to First Byte (TTFB) has a very limited impact on the score but can be useful indicator of [Web Server Variability](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.6rnl1clafpqn). - -Please note that TTFB is not estimated by Lantern but based on the observed/fast trace. +Time to First Byte (TTFB) has a very limited impact on the score but can be useful indicator of [Web Server Variability](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.6rnl1clafpqn). 
Please note that TTFB is not estimated by Lantern but based on the observed/fast trace. You can use the `pagespeed-score` cli to monitor this: @@ -150,9 +148,7 @@ You can use the `pagespeed-score` cli to monitor this: ### User Timing marks and measures -We use a number of User Timing marks and high variability in these can mean you have [Page Nondeterminism](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.js7k0ib0mzzv) or other sources variability. - -Please note user timing marks are not estimated by Lantern but based on the observed/fast trace. +We use a number of User Timing marks and high variability in these can mean you have [Page Nondeterminism](https://docs.google.com/document/d/1BqtL-nG53rxWOI5RO0pItSRPowZVnYJ_gBEQCJ5EeUE/edit#heading=h.js7k0ib0mzzv) or other sources variability. Please note user timing marks are not estimated by Lantern but based on the observed/fast trace. You can use the `pagespeed-score` cli to monitor them: @@ -162,9 +158,7 @@ You can use the `pagespeed-score` cli to monitor them: ### Debug Lantern metrics estimation locally -Read [How does Lantern estimate metrics?](#how-does-lantern-estimate-metrics) first to have a better understanding of the details. - -In case you want to understand why Lantern estimated a metric the way it did you can make Lighthouse save the traces resulting from the simulations: +Read [how does Lantern estimate metrics](#how-does-lantern-estimate-metrics) first to have a better understanding of the high level approach. In case you want to understand why Lantern estimated a metric the way it did you can make Lighthouse save the traces resulting from the simulations: ```sh LANTERN_DEBUG=true npx lighthouse --save-assets From e47973a44c529926c51348e921502f5f940be184 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 23:48:40 +0100 Subject: [PATCH 25/27] formatting --- README.md | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 6398a55..1335144 100644 --- a/README.md +++ b/README.md @@ -164,11 +164,9 @@ Read [how does Lantern estimate metrics](#how-does-lantern-estimate-metrics) fir LANTERN_DEBUG=true npx lighthouse --save-assets ``` -Use the Chrome Devtools Performance tab to open the traces. +Use the Chrome Devtools Performance tab to open the traces. Subscribe to [lighthouse#5844](https://github.com/GoogleChrome/lighthouse/issues/5844) for future updates on this. -Subscribe to [lighthouse#5844](https://github.com/GoogleChrome/lighthouse/issues/5844) for future updates on this. - -You can also use the `pagespeed-score` cli in [local mode](#local-mode) that has builtin support for this and also ensures that your lighthouse setup is as close to PSI as possible: +You can also use `pagespeed-score` in [local mode](#local-mode) that has builtin support for this and also ensures that your lighthouse setup is as close to PSI as possible: ```sh CHROME_PATH="/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary" \ From 4e88830da7dc979cb10fd889f6e6b3e3a3142dbc Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 23:52:27 +0100 Subject: [PATCH 26/27] teaser --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 1335144..662b753 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,7 @@ # What's in the Google PageSpeed score? +Ever wondered how your Google PageSpeed score is calculated and how to use it? This document (and node module) tries to answer that. 
+ - [Overview](#overview) * [PageSpeed Insights score = Lighthouse score](#pagespeed-insights-score--lighthouse-score) * [The 5 metrics that affect the score](#the-5-metrics-that-affect-the-score) From 05b9e74f6387863c041ba31fa48464ace0cf4e43 Mon Sep 17 00:00:00 2001 From: Csaba Palfi Date: Thu, 6 Jun 2019 23:54:33 +0100 Subject: [PATCH 27/27] formatting --- README.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 662b753..8d32c8a 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,8 @@ # What's in the Google PageSpeed score? -Ever wondered how your Google PageSpeed score is calculated and how to use it? This document (and node module) tries to answer that. +Ever wondered how your Google PageSpeed score is calculated and how to use it? + +This document (and node module) tries to answer that. - [Overview](#overview) * [PageSpeed Insights score = Lighthouse score](#pagespeed-insights-score--lighthouse-score) @@ -61,9 +63,9 @@ There’s an [accuracy and variability analysis](https://docs.google.com/documen * Keep in mind that even with reduced variability some inherent inaccuracies remain * Use the `pagespeed-score` cli to reduce/identify variability and to investigate inaccuracies -## `pagespeed-score` cli +## The `pagespeed-score` module -`pagespeed-score` is a module contained in this repe. It's a command line toolkit to get a speed score and metrics via the Google PageSpeed Insights API or a local Lighthouse run. +`pagespeed-score` is a node module published from this repo. It's a command line toolkit to get a speed score and metrics via the Google PageSpeed Insights API or a local Lighthouse run. ``` $ npx pagespeed-score https://www.google.com