
Timeout for CLS & LCP monitoring #100

Closed
smhmic opened this issue Nov 23, 2020 · 5 comments

smhmic commented Nov 23, 2020

Why not track both CLS & LCP (once) shortly after page load?  Sure, you may not capture post-load changes, but when do content changes (that are not initiated by user input) ever occur long after page load?  You could report the metric and stop listening after some reasonable period (somewhere in the range of 10-30 secs) after page load or DOM ready, as sketched below.  Then the inconsistencies around lifecycle state would become a secondary concern.
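
Something like this rough sketch (assuming the getCLS/getLCP callbacks with the report-all-changes flag; the 15-second delay and sendToAnalytics are just placeholders, not part of the library):

```js
import {getCLS, getLCP} from 'web-vitals';

// Hypothetical timeout-based reporting: keep the latest values and flush
// them a fixed delay after the load event.
const REPORT_DELAY_MS = 15000; // somewhere in the 10-30s range
const latest = {};

// Pass `true` so the callback fires on every change, not only when the
// page is hidden.
getCLS((metric) => { latest.CLS = metric.value; }, true);
getLCP((metric) => { latest.LCP = metric.value; }, true);

addEventListener('load', () => {
  // sendToAnalytics is a placeholder for whatever reporting you use.
  setTimeout(() => sendToAnalytics(latest), REPORT_DELAY_MS);
});
```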

For CLS, I see why monitoring for the entire lifecycle of the page is useful, but it's so much more useful when focused on layout shifts during page load.  This library exposes the hadRecentInput flag, which can already be leveraged in the tracking/reporting logic to isolate these, but perhaps this warrants its own metric?
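
For example, a minimal sketch of a load-focused CLS using the Layout Instability API directly (the 5-second cutoff and the loadOnlyCLS name are illustrative only):

```js
// Accumulate layout shifts not caused by recent user input.
let loadOnlyCLS = 0;
const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) loadOnlyCLS += entry.value;
  }
});
po.observe({type: 'layout-shift', buffered: true});

// Stop accumulating shortly after load and treat the total as the
// "during page load" shift score.
addEventListener('load', () => {
  setTimeout(() => po.disconnect(), 5000);
});
```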

For LCP, is tracking for the entire lifecycle of the page useful at all?  I'm not familiar with any UI paradigms requiring optimization that would benefit from this, and it can cause heavily inflated metrics for infinite scrollers.

A change like this would also address reporting issues caused by metrics tracked long after page load (e.g. in GA).

@philipwalton (Member) commented

> Why not track both CLS & LCP (once) shortly after page load? Sure, you may not capture post-load changes, but when do content changes (that are not initiated by user input) ever occur long after page load?

Lots of sites wait until the load event to load additional content, as a way to game performance tools that just measure load times. To account for this, LCP continues monitoring for larger elements until user interaction or until the tab is backgrounded or unloaded. While you probably could wait until 10-30 seconds after load, you'd still have to account for user interaction as well as the page being backgrounded or unloaded, so you don't really gain anything from that timeout (other than possibly knowing the final value sooner).
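
To be concrete, the underlying pattern is roughly this (a simplified sketch of observing LCP with PerformanceObserver, not this library's actual implementation):

```js
// Keep the latest LCP candidate, then stop observing (and report once)
// on the first input or when the page is hidden.
let lcpValue;
let reported = false;

const po = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  lcpValue = entries[entries.length - 1].startTime;
});
po.observe({type: 'largest-contentful-paint', buffered: true});

function finalize() {
  if (reported) return;
  reported = true;
  po.disconnect();
  // Report lcpValue to your analytics here.
}

['keydown', 'click'].forEach((type) =>
  addEventListener(type, finalize, {once: true, capture: true}));
addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') finalize();
}, {capture: true});
```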

> For CLS, I see why monitoring for the entire lifecycle of the page is useful, but it's so much more useful when focused on layout shifts during page load.

Yes and no. I see a ton of developers complaining that PSI reports their CLS as being high in the field, but 0 according to lab data (which only considers the initial load). Also, our research shows that a significant portion of layout shifts do occur post-load, so it's a point we don't want to ignore.

Also, as the Web Vitals program evolves, our plan is to extend more and more metrics to cover the entire page lifecycle and not just the load experience. CLS was our first step in that direction.

> For LCP, is tracking for the entire lifecycle of the page useful at all? I'm not familiar with any UI paradigms requiring optimization that would benefit from this, and it can cause heavily inflated metrics for infinite scrollers.

I'm not sure what you mean? Currently LCP is only tracked until the first user interaction or until the page is backgrounded or unloaded. The only exception to this is in the case of bfcache, where LCP is tracked again after a bfcache restore.

> A change like this would also address reporting issues caused by metrics tracked long after page load (e.g. in GA).

Yeah, this is just an issue with how GA manages sessions, and I think it's limited to classic GA and Universal Analytics. I don't think it's an issue with GA4, as I know they handle session measurement differently there.

@Tiggerito commented

I'm trying to match my CLS score with what the tools produce, especially Google Search Console, which uses CrUX data, as does PageSpeed Insights. My reports will be compared with them, and most importantly they're meant to guide website owners to pass Google's page experience test.

The default CLS mode tends to be a lot higher than CrUX data. I can scroll down on a page and turn it from Good to Poor. Reporting based on that caused many pages to fail the test when they show up fine in the tools. Not helpful for my users.

I switched to verbose mode and found that using the first CLS report was also misleading. Two initial CLS reports in a row can go from 0.003 to 0.03.

At the moment my solution is to have a 2-second timeout after the first web-vitals report and go with the CLS score at that time.
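
Roughly what I'm doing (a sketch; reportCLS is my own helper and the 2-second delay is arbitrary):

```js
import {getCLS} from 'web-vitals';

let latestCLS = 0;
let timerStarted = false;

getCLS((metric) => {
  latestCLS = metric.value;
  if (!timerStarted) {
    timerStarted = true;
    // 2 seconds after the first report, send whatever the value is then.
    setTimeout(() => reportCLS(latestCLS), 2000);
  }
}, true); // true = report all changes ("verbose mode")
```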

It would be good to know how the tools decide the cut off moment.

I'd also be interested in the same for LCP.

@philipwalton (Member) commented

> The default CLS mode tends to be a lot higher than CrUX data.

This should not happen. These tools both use the same APIs, but CrUX (unlike the web APIs) can also report layout shifts in sub-frames, which means, if anything, the CrUX data should be higher.

Can you provide any examples to help us determine why the data reported from this library is higher?

> I can scroll down on a page and turn it from Good to Poor. Reporting based on that caused many pages to fail the test when they show up fine in the tools. Not helpful for my users.

This is also true for CrUX data.

It sounds to me like you're most likely looking at lab data (the lab data report in PageSpeed Insights), which uses Lighthouse, only measures up until page load, and doesn't scroll the page.

@Tiggerito commented

You're right, I think I was looking at lab data, not the CrUX RUM data.

I've now decided to go with the final CLS value when the page is first hidden. That makes it simpler and closer to what CrUX would report.
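
For reference, something like this, relying on the default behaviour of only reporting when the page's visibility changes to hidden ('/analytics' is just a placeholder endpoint):

```js
import {getCLS} from 'web-vitals';

// Default (non-verbose) behaviour: the callback fires when the page is
// hidden, and sendBeacon keeps the report reliable at that point.
getCLS((metric) => {
  navigator.sendBeacon('/analytics', JSON.stringify({cls: metric.value}));
});
```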

@tunetheweb (Member) commented

Similar to #320; as per the discussion there and above, our aim is to measure the full page lifecycle as much as possible, so I'm closing this issue.
