
core(estimated-input-latency): use a 5s rolling window #4989

Merged: 5 commits into master from rolling_eil, Apr 20, 2018

Conversation

@patrickhulce (Collaborator)

Chose a 5s window because that's what TTI uses, and it gave the right "feel" on several test sites: CNN gets ~1s and is punished harshly, pwa.rocks gets ~17ms. 1s was a bit too small and harsh; 10s is forgiving of 1s-long tasks, which is just too lenient.

@patrickhulce patrickhulce changed the title core(EIL): convert estimated-input-latency to rolling window core(estimated-input-latency): use a 5s rolling window Apr 18, 2018
@paulirish (Member)

Is what we're computing here functionally equivalent to

the largest 90pctl EQT value of 5s windows from FMP to the end of the trace

@patrickhulce (Collaborator Author)

Is what we're computing here functionally equivalent to
the largest 90pctl EQT value of 5s windows from FMP to the end of the trace

yes
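
(For reference, a minimal sketch of that equivalence, assuming the helper shapes visible in the diff below: getMainThreadTopLevelEvents returns top-level events with start/duration, and getRiskToResponsiveness returns {percentile, time} pairs. The wrapper function and its name are illustrative, not the PR's exact code.)

const TracingProcessor = require('../lib/traces/tracing-processor'); // path assumed; the audit already imports TracingProcessor
const ROLLING_WINDOW_SIZE = 5000; // ms, the 5s window discussed above

// Sketch only: the worst (largest) 90th-percentile expected queueing time over
// 5s windows that start at candidate tasks between FMP and the end of the trace.
function worstWindowedEQT(tabTrace, startTime) {
  const events = TracingProcessor.getMainThreadTopLevelEvents(tabTrace, startTime)
    .filter(evt => evt.duration >= 1);
  const candidateStartEvts = events.filter(evt => evt.duration >= 10);

  let worst = 0;
  for (const startEvt of candidateStartEvts) {
    const latencyPercentiles = TracingProcessor.getRiskToResponsiveness(
      events,
      startEvt.start,
      startEvt.start + ROLLING_WINDOW_SIZE
    );
    const ninetieth = latencyPercentiles.find(result => result.percentile === 0.9);
    worst = Math.max(worst, ninetieth.time);
  }

  return worst; // the audit's rawValue is derived from this
}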

@brendankenny (Member) left a comment

What's performance like? We could add an iterative version of getMainThreadTopLevelEventDurations() if it's slow, since it's doing a lot of redundant work per window.
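
(A possible shape for the iterative version being floated here; purely illustrative, not an existing Lighthouse API. It assumes top-level main-thread events are non-overlapping and sorted by start time, and that candidate window starts are ascending.)

// Sketch only: slide a [lo, hi) range over the time-sorted top-level events so each
// event is visited O(1) times across all windows, instead of re-filtering the whole
// event array for every candidate 5s window.
function forEachWindow(events, candidateStarts, windowSize, callback) {
  let lo = 0;
  let hi = 0;
  for (const windowStart of candidateStarts) {
    const windowEnd = windowStart + windowSize;
    while (lo < events.length && events[lo].end <= windowStart) lo++;
    while (hi < events.length && events[hi].start < windowEnd) hi++;
    // events[lo..hi) are the top-level events overlapping [windowStart, windowEnd)
    callback(events.slice(lo, hi), windowStart, windowEnd);
  }
}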

'of latency, or less. 10% of the time a user can expect additional latency. If your ' +
'latency is higher than 50 ms, users may perceive your app as laggy. ' +
'[Learn more](https://developers.google.com/web/tools/lighthouse/audits/estimated-input-latency).',
helpText:
Member

possible to revert these? don't appear to have changed :)

Collaborator Author

done

const latencyPercentiles = TracingProcessor.getRiskToResponsiveness(
events,
startEvt.start,
startEvt.start + ROLLING_WINDOW_SIZE
Member

if there's a last argument of [0.9], then there's no need for the find() below (since we're dropping the extendedInfo)... though if we keep EIL, there's arguably some usefulness in keeping the other percentiles :)
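
(i.e. something like the following, assuming getRiskToResponsiveness accepts a percentiles array as its final parameter and returns results in the same {percentile, time} shape the current diff looks up with find():)

const latencyPercentiles = TracingProcessor.getRiskToResponsiveness(
  events,
  startEvt.start,
  startEvt.start + ROLLING_WINDOW_SIZE,
  [0.9] // request only the 90th percentile, so no find() is needed afterwards
);
const ninetieth = latencyPercentiles[0]; // {percentile: 0.9, time: <ms>}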

Collaborator Author

done

'input, in milliseconds. There is a 90% probability that a user encounters this amount ' +
'of latency, or less. 10% of the time a user can expect additional latency. If your ' +
'latency is higher than 50 ms, users may perceive your app as laggy. ' +
'[Learn more](https://developers.google.com/web/tools/lighthouse/audits/estimated-input-latency).',
Member

we might want to tweak this, though

Collaborator Author

tweaked!

@@ -75,8 +89,9 @@ class EstimatedInputLatency extends Audit {
static audit(artifacts, context) {
const trace = artifacts.traces[this.DEFAULT_PASS];

return artifacts.requestTraceOfTab(trace)
.then(traceOfTab => EstimatedInputLatency.calculate(traceOfTab, context));
return artifacts
Member

revert? :D

Collaborator Author

done

@patrickhulce (Collaborator Author) commented Apr 19, 2018

What's performance like? We could add an iterative version of getMainThreadTopLevelEventDurations() if it's slow, since it's doing a lot of redundant work per window.

Yeah, it could definitely be better (~200ms for a CNN trace), but it's marginally better than our old TTI, and the filter on possible start points helps a lot. I was planning on tackling this when adding shared eFID/EIL utils, but now that that's potentially DOA, maybe we just do it as a separate follow-up.

@paulirish (Member) left a comment

Is what we're computing here functionally equivalent to

the largest 90pctl EQT value of 5s windows from FMP to the end of the trace

yes

Tight. Let's add that as a comment somewhere, so it's clear how we compare to the standard metric.

const ninetieth = latencyPercentiles.find(result => result.percentile === 0.9);
const rawValue = parseFloat(ninetieth.time.toFixed(1));
const events = TracingProcessor.getMainThreadTopLevelEvents(tabTrace, startTime)
.filter(evt => evt.duration >= 1);
Member

since a blank trace gets a value of 16, can't we safely remove any events with a duration < 16?

Collaborator Author

Well, a trace with constant 16ms events would technically get an EQT of 24ms, since we arbitrarily add 16ms to whatever value we compute. I'm all for doing more filtering here, though; who cares about your small events.
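
(In other words, the reported number sits a fixed ~16ms above whatever the window computation produces. A tiny sketch of that relationship; the constant name here is illustrative, not necessarily what the codebase uses.)

const BASE_LATENCY = 16; // ms; the floor implied by "a blank trace gets a value of 16"

// Sketch: an idle window reports ~16ms, and busier windows report the computed
// 90th-percentile queueing estimate on top of that same floor.
function reportedLatency(computedQueueingTimeMs) {
  return computedQueueingTimeMs + BASE_LATENCY;
}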

const events = TracingProcessor.getMainThreadTopLevelEvents(tabTrace, startTime)
.filter(evt => evt.duration >= 1);

const candidateStartEvts = events.filter(evt => evt.duration >= 10);
Member

similarly, can we raise 10 to something like 50? Seems like we'd lose precision in the "you're good" area of scores, but that doesn't seem too bad.

Collaborator Author

yeah sgtm 👍 much faster too
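
(The change being agreed to here would just raise the candidate-start threshold, e.g. the filter shown above becoming something like the line below. 50ms is the proposal from this exchange; as noted later in the thread, the thresholds were ultimately left unchanged in this PR.)

// Sketch of the proposal: only tasks of at least ~50ms seed a window, so far fewer
// 5s windows get evaluated; shorter tasks still contribute to EQT inside whichever
// windows they fall in.
const candidateStartEvts = events.filter(evt => evt.duration >= 50);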

Member

doesn't that make the max score the PODR?

Member

oh, only filtering the start event times

@brendankenny (Member)

if we're only concerned with speed there are easier ways of making this faster than sprinkling filters everywhere :P

@patrickhulce (Collaborator Author) commented Apr 20, 2018

if we're only concerned with speed there are easier ways of making this faster than sprinkling filters everywhere :P

Speed is not a major concern right now. In the unit test, which has 20s of CPU time split between 1000 start times and 10000 small tasks, it takes ~40-50ms. Of the ~200ms for CNN, ~180ms are spent in trace-of-tab, not here. We're not filtering because we absolutely have to; it's more a matter of not doing useless work. I don't really care either way: 1ms and 10ms don't seem to be a bottleneck speed-wise or have any meaningful impact accuracy-wise.

@patrickhulce (Collaborator Author) commented Apr 20, 2018

friendly ping for re-review! I took no action on filter thresholds since it seemed to be contentious...

@brendankenny (Member) left a comment

LGTM! 🎢 🪟

@paulirish (Member) left a comment

🎢

@patrickhulce merged commit 2eaafa0 into master on Apr 20, 2018
@patrickhulce deleted the rolling_eil branch on April 20, 2018 at 19:59
kdzwinel pushed a commit to kdzwinel/lighthouse that referenced this pull request on Aug 16, 2018