Investigate potential discrepancies in Web Vitals Metrics #950
Here are the stats for the Web Vitals metrics comparison between Google Lighthouse, Grafana Faro and k6 Browser (c959b57):

Site: grafana.com

(*) In contrast to Lighthouse and k6 Browser, the Grafana Faro CLS metric corresponds to values aggregated over a time interval, measured from the instrumented application code itself: it is not based on a single sample, and it reflects real user interaction. I therefore think this discrepancy in CLS makes sense, as real user interaction produces more layout changes on the page than an automated test that only navigates to it.
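Since Faro and k6 Browser both rely on the same web-vitals JS library, the CLS gap largely comes down to how layout-shift samples are aggregated over time. The sketch below is illustrative, not the actual web-vitals source; it follows the published CLS definition, where shifts less than 1s apart (within a 5s window) are grouped into a "session window" and CLS is the largest window total. A long real-user session can accumulate a higher CLS than a single automated navigation:

```javascript
// Illustrative sketch of CLS session-window aggregation (not the actual
// web-vitals implementation). shifts: [{ ts: ms, value: number }], sorted.
function computeCLS(shifts) {
  let cls = 0;          // max session-window value seen so far
  let windowValue = 0;  // running total of the current session window
  let windowStart = 0;  // timestamp of the current window's first shift
  let prevTs = -Infinity;
  for (const { ts, value } of shifts) {
    // Same window if < 1s since the previous shift and < 5s since window start.
    const sameWindow = ts - prevTs < 1000 && ts - windowStart < 5000;
    if (sameWindow) {
      windowValue += value;
    } else {
      windowValue = value;
      windowStart = ts;
    }
    prevTs = ts;
    cls = Math.max(cls, windowValue);
  }
  return cls;
}

// Single automated navigation: one early shift.
console.log(computeCLS([{ ts: 100, value: 0.05 }])); // 0.05

// Real user session: later interactions trigger more shifts, which form a
// new, larger session window.
console.log(computeCLS([
  { ts: 100, value: 0.05 },
  { ts: 3000, value: 0.25 },
  { ts: 3500, value: 0.25 },
])); // 0.5
```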
Site: test.k6.io

Grafana Faro was not used in this test, as it requires instrumentation in the web application code. Nevertheless, since it uses the same JS library as k6 Browser to measure Web Vitals metrics, we can assume that for a site as static as test.k6.io the metrics would have matched. See the conclusions below for a fuller explanation.

Conclusions:

What we can observe is that on very static sites (e.g. test.k6.io) the Web Vitals metrics measured by the three tools are pretty much the same. Discrepancies appear when testing more complex and dynamic sites (e.g. grafana.com). In those cases, the values reported by Faro and Lighthouse match each other more closely than the ones reported by k6 Browser.

If we compare k6 Browser's current main HEAD (c959b57), which includes #949 and #943, with v0.10.0: the v0.10.0 version does not report the LCP metric consistently, whereas c959b57 does (probably due to using …). Therefore my understanding is that we are still missing Web Vitals metrics, probably due to a race condition between the metrics reporting/parsing/pushing and the iteration end.

ANNEX

These are the tests executed for k6, which consist of a single page navigation to the site under test:

c959b57 version

import { browser } from 'k6/x/browser';
export const options = {
scenarios: {
ui: {
executor: 'shared-iterations',
options: {
browser: {
type: 'chromium',
},
},
},
},
};
export default async function () {
const page = browser.newPage();
try {
    await page.goto('https://site.under.test.example.com');
} finally {
page.close();
}
}

v0.10.0 version

import { chromium } from 'k6/experimental/browser';
export default async function () {
const browser = chromium.launch();
const page = browser.newPage();
try {
    await page.goto('https://site.under.test.example.com');
} finally {
page.close();
browser.close();
}
}
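The race condition suspected in the conclusions above can be illustrated with a standalone Node.js sketch (this is not k6 code, and all names are made up for illustration): Web Vitals observers deliver their values asynchronously, so if the iteration tears the page down before pending reports are flushed, those samples are silently lost.

```javascript
// Toy model of asynchronous metric delivery racing the end of an iteration.
// All names here are illustrative, not k6 or web-vitals APIs.
function makeCollector() {
  const collected = [];
  const pending = [];
  return {
    // Simulates the browser delivering a metric some time after navigation.
    report(name, value, delayMs) {
      pending.push(new Promise((resolve) => setTimeout(() => {
        collected.push({ name, value });
        resolve();
      }, delayMs)));
    },
    // Simulates waiting for all in-flight reports before ending the iteration.
    flush() { return Promise.all(pending); },
    collected,
  };
}

async function iteration({ waitForFlush }) {
  const vitals = makeCollector();
  vitals.report('FCP', 120, 0);   // delivered on the next tick
  vitals.report('LCP', 480, 50);  // delivered 50ms later
  if (waitForFlush) await vitals.flush();
  // The iteration ends here; anything not yet delivered is dropped.
  return vitals.collected.map((m) => m.name);
}

(async () => {
  console.log(await iteration({ waitForFlush: false })); // []
  console.log(await iteration({ waitForFlush: true }));  // [ 'FCP', 'LCP' ]
})();
```

Without the flush, the iteration returns before either timer fires, so both metrics are missing; a fix along these lines would need the iteration to wait for (or be notified of) pending metric reports before closing the page.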
Closing, as the investigation is already done and the associated issues (#960) have been created.
In our recent pull requests #943 and #949 we resolved issue #914, which concerned the inconsistent reporting of Web Vitals metrics. With those fixes in place, our focus now is to examine whether there are any disparities between the measurements our tool reports and those from other tools.
To ensure the robustness and accuracy of our metrics, we aim to conduct a thorough investigation. This will help us to identify and implement any additional changes or fixes, should they be necessary. We welcome any insights or suggestions on this matter to aid in our investigation.
Possible tools to compare against include Lighthouse and Faro.
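Once each tool's numbers are collected, a small helper like the following could flag which metric/tool pairs actually diverge beyond a tolerance. This is a hypothetical helper, and the readings below are placeholder numbers, not real measurements:

```javascript
// Hypothetical comparison helper: given one metric's value as reported by
// several tools, flag pairs that diverge by more than a relative tolerance.
function findDiscrepancies(readings, tolerance = 0.1) {
  const names = Object.keys(readings);
  const flagged = [];
  for (let i = 0; i < names.length; i++) {
    for (let j = i + 1; j < names.length; j++) {
      const a = readings[names[i]];
      const b = readings[names[j]];
      const relDiff = Math.abs(a - b) / Math.max(a, b);
      if (relDiff > tolerance) {
        flagged.push({ pair: [names[i], names[j]], relDiff });
      }
    }
  }
  return flagged;
}

// Placeholder LCP readings (ms) from three tools, for illustration only:
const lcp = { lighthouse: 1200, faro: 1250, k6browser: 1800 };
for (const { pair, relDiff } of findDiscrepancies(lcp)) {
  console.log(`${pair.join(' vs ')}: ${(relDiff * 100).toFixed(1)}% apart`);
}
```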