core(driver): wait for LCP event #11371
Conversation
@@ -54,6 +54,7 @@ const throttling = {
const defaultSettings = {
  output: 'json',
  maxWaitForFcp: 30 * 1000,
  maxWaitForLcp: 40 * 1000,
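The intent of the new setting can be sketched as a standalone guard that races the LCP wait against the cap, mirroring how `maxWaitForFcp` bounds the FCP wait. The `withTimeout` helper below is purely illustrative and is not Lighthouse's actual implementation; only the settings keys come from the diff above.

```javascript
// Illustrative sketch only: bound a wait with the maxWaitForLcp cap.
// Everything except the settings keys is an assumption for illustration.
const defaultSettings = {
  maxWaitForFcp: 30 * 1000,
  maxWaitForLcp: 40 * 1000,
};

function withTimeout(promise, ms, label) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(label)), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```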
This timing uses #11180 (comment) as a guideline but beyond that is pretty arbitrary. Is there a more definitive way to choose this timing?
ah sorry @adamraine, my comment was misleading. I don't think we actually need this to be a separate param compared to maxWaitForLoad, but now that I'm looking at all the pieces, I kinda like the fine-grained control and explicitness it offers, so this WFM :)
What do you think of the exact time value? 40s was basically a guess; would making the default the same as maxWaitForLoad make more sense?
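The fallback being discussed here could look like the sketch below. The helper name `resolveLcpWait` is an assumption for illustration, not Lighthouse code: default the LCP wait to `maxWaitForLoad` when no explicit `maxWaitForLcp` is provided.

```javascript
// Hedged sketch of the discussed default: fall back to maxWaitForLoad
// when maxWaitForLcp isn't set explicitly. Helper name is hypothetical.
function resolveLcpWait(settings) {
  return settings.maxWaitForLcp !== undefined ?
    settings.maxWaitForLcp :
    settings.maxWaitForLoad;
}
```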
});

describe('._waitForLcp', () => {
  it('should not resolve until LCP fires', async () => {
These test cases are pretty much a copy/paste of the _waitForFcp cases above. Would it make more sense to combine the two sections into one for _waitForLifecycleEvent and add test cases to differentiate FCP and LCP?
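The suggested consolidation amounts to one waiter parameterized by lifecycle event name, so both sets of tests can exercise the same function. The sketch below is an assumed shape (event surface, function name, and event payload are all illustrative, not the PR's actual code).

```javascript
// Hedged sketch: a single lifecycle-event waiter that both the FCP and LCP
// waits could share. Emitter/event names are assumptions for illustration.
function waitForLifecycleEvent(emitter, eventName) {
  return new Promise(resolve => {
    const listener = event => {
      if (event.name !== eventName) return; // ignore other lifecycle events
      emitter.off('lifecycleEvent', listener);
      resolve(event);
    };
    emitter.on('lifecycleEvent', listener);
  });
}
```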
Main concern: does this still fail the entire run given no LCP? I think Patrick was suggesting it should only impact the LCP metric/audits, not the entire run.
@connorjclark Doesn't the current behavior only fail audits dependent on LCP?
assigning myself to remind me to take a look :)
thanks @adamraine, awesome job navigating this complicated area of LH! 🎉 :) Taking a step back though, I think we need to adjust our approach a bit.
I'm having trouble seeing this lifecycle event being fired by the latest Chrome; are we sure the candidates are fired over the protocol like this? These are all the lifecycle events I saw on a navigation to example.com.
Once we get that sorted out, we'll also need to make sure that invalidations are fired as well, because IIRC the root situation in the issues we're trying to solve is when LCP is invalidated and we need to wait for a new one (otherwise waiting for FCP should be enough). That unfortunately means we won't be able to share the same logic with FCP, since we'll need to start waiting again if we receive an invalidation event, and will probably need something more like lcpQuietPeriod rather than a pauseAfterLcpMs in the end.
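The lcpQuietPeriod idea can be sketched as a waiter that restarts its quiet timer on each new candidate and goes back to waiting when a candidate is invalidated. Everything here is an assumed shape for illustration (event names, function name, and payload are not Lighthouse's actual API); the point is the restart-on-invalidation behavior that plain FCP waiting doesn't need.

```javascript
// Hedged sketch of a quiet-period LCP waiter. Event names ('lcp-candidate',
// 'lcp-invalidate') and the function shape are assumptions for illustration.
function waitForLcpQuietPeriod(emitter, lcpQuietPeriodMs, maxWaitMs) {
  return new Promise((resolve, reject) => {
    let quietTimer;
    let lastCandidate;

    // Hard cap so an invalidated-and-never-replaced LCP can't hang forever.
    const maxTimer = setTimeout(() => {
      cleanup();
      reject(new Error('NO_LCP'));
    }, maxWaitMs);

    function onCandidate(candidate) {
      lastCandidate = candidate;
      // Each new candidate restarts the quiet window.
      clearTimeout(quietTimer);
      quietTimer = setTimeout(() => {
        cleanup();
        resolve(lastCandidate);
      }, lcpQuietPeriodMs);
    }

    function onInvalidate() {
      // The candidate is no longer valid: drop it and keep waiting.
      lastCandidate = undefined;
      clearTimeout(quietTimer);
    }

    function cleanup() {
      clearTimeout(quietTimer);
      clearTimeout(maxTimer);
      emitter.off('lcp-candidate', onCandidate);
      emitter.off('lcp-invalidate', onInvalidate);
    }

    emitter.on('lcp-candidate', onCandidate);
    emitter.on('lcp-invalidate', onInvalidate);
  });
}
```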
I know it's hairy but it's unlikely that we'll be getting a "this LCP is final" event from Chromium because the proposal for that event seems like it's going to be based on user input rather than any quiet heuristics :/
The best next step I think would be adding a smoke test that sets lcpQuietWindow to something high like 10s. Then the page invalidates LCP and only adds more content after 5s or so, to make sure we wait for it. This hopefully helps ensure the impl is working as expected as you iterate on it.
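The timing relationship the smoke test relies on can be shown with a small simulation (the function and values are illustrative assumptions, not test code from the PR): a waiter settles on the last candidate whose gap from the previous one fits inside the quiet window, so a 10s window captures content added ~5s in, while a short window would resolve on the stale early candidate.

```javascript
// Hedged sketch: simulate which candidate a quiet-window waiter settles on,
// given candidate arrival times in ms. Purely illustrative.
function settleOnCandidate(candidateTimesMs, quietWindowMs) {
  let settled = candidateTimesMs[0];
  for (let i = 1; i < candidateTimesMs.length; i++) {
    if (candidateTimesMs[i] - candidateTimesMs[i - 1] <= quietWindowMs) {
      settled = candidateTimesMs[i]; // arrived before the window elapsed
    } else {
      break; // window elapsed first; the waiter already resolved
    }
  }
  return settled;
}
```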
* @return {Promise<{finalUrl: string, timedOut: boolean}>}
*/
async gotoURL(url, options = {}) {
const waitForFcp = options.waitForFcp || false;
const waitForLcp = options.waitForLcp || false;
I'm having trouble finding where we set this to true. Were there changes to gather-runner to enable this that I'm missing? :)
Thanks for catching this; it actually explains a lot. I thought this PR was working even though there was no LCP lifecycle event, but it turns out waitForLcp just wasn't enabled 😬
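The gap identified in this thread is in the option plumbing: the new wait only runs when the caller opts in. A sketch of the opt-in shape, following the diff above (the return value and the simplified body are assumptions; the real gotoURL does navigation and waiting that is elided here):

```javascript
// Hedged sketch of the opt-in plumbing. gather-runner would need to pass
// waitForLcp: true for the LCP wait to run at all. Body is illustrative.
async function gotoURL(url, options = {}) {
  const waitForFcp = options.waitForFcp || false;
  const waitForLcp = options.waitForLcp || false;
  // ...navigation and lifecycle waiting elided...
  return {finalUrl: url, waitForFcp, waitForLcp};
}
```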
@patrickhulce thanks for the explanation. I made the mistake of equating
I'm not aware of any, but we can always ask the DevTools team if it could be considered (or do it ourselves; I happen to know a guy who nailed some awesome animation compositor reason Chromium contributions 😉). I think the fact that there's no "final LCP" is tricky, but we are already emitting FMP candidates there, so it's at least consistent.
Converting this to a draft for now. Once a lifecycle event or other surface can be used, we can reopen. The implementation will likely look different from its current state when that happens.
Let's close until we have a way forward. The PR # is still accessible from the open issue.
Addresses the first action item of #11180