feat: add firstInteractive #2013
Conversation
@@ -88,7 +88,10 @@ class TraceOfTab extends ComputedArtifact {
      firstMeaningfulPaint = lastCandidate;
    }

-   const onLoad = keyEvents.find(e => e.name === 'MarkLoad' && e.ts > navigationStart.ts);
+   const onLoad = frameEvents.find(e => e.name === 'loadEventStart' && e.ts > navigationStart.ts);
@paulirish MarkLoad was not onload in most cases I saw during my investigations; what's supposed to be the difference between it and loadEventStart?
MarkLoad should actually line up with loadEventEnd. If it's not, then it must be selecting the wrong frame. Happy to have it change.
oh so it is fired per frame after all then 👍
this is so great. excited.
I want to make sure we understand how much this changes things, especially for the PWA test. Can you point to some data to clarify what the delta in TTIvAlpha and TTFIvBeta is?
also, curious if we should validate that this FI matches the results from https://codereview.chromium.org/2610183003 ?
@@ -131,39 +131,15 @@ class Metrics {
    }
  },
  {
-   name: 'Time to Interactive (vAlpha)',
-   id: 'tti',
+   name: 'First Interactive (vAlpha)',
vBeta 😎
done
   * @return {!Promise}
   */
-  request(artifact) {
+  request(artifact, artifacts) {
Pretty interesting, but I like it; leads to cleaner implementations. Can you rename to allArtifacts to clarify, since we have 3 different sorts in here?
The problem is that it breaks the cache and requiredArtifacts dependency checks, since the computed artifact could theoretically access any artifact it wanted at this point. In this case first-interactive.js is only accessing other computed artifacts, so it's fine, but we can't guarantee it (here or in future computed artifacts).
I have a proposal I've been thinking about for a while (it also makes typing nicer for the artifacts object). I'll throw it together quickly and put it in a PR.
Does this proposal allow for passing more than one artifact? I wanted to make consistently interactive a computed artifact too, but it needs trace + networkRecords
Does this proposal allow for passing more than one artifact?

No, but we definitely need that too. I can add it in a followup PR (or you can if you want). Basically computedArtifact's _cache needs to support a tuple as a key instead of just objects directly. There's no really efficient way to do that in JS today, but luckily these caches all have like one item in them :)
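Not part of this PR, just a sketch of the "tuple as a key" cache idea (all names hypothetical): nest one Map level per argument so a lookup for (trace, networkRecords) can never collide with one for (trace) alone.

class TupleKeyedCache {
  constructor() {
    this._root = new Map();
    this._valueKey = Symbol('value');
  }

  get(keys) {
    let node = this._root;
    for (const key of keys) {
      if (!node.has(key)) return undefined;
      node = node.get(key);
    }
    return node.get(this._valueKey);
  }

  set(keys, value) {
    let node = this._root;
    for (const key of keys) {
      if (!node.has(key)) node.set(key, new Map());
      node = node.get(key);
    }
    node.set(this._valueKey, value);
  }
}

// e.g. cache.set([trace, networkRecords], computedValue);
//      cache.get([trace, networkRecords]); // -> computedValue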
// Parameters (in ms) for log-normal CDF scoring. To see the curve:
// https://www.desmos.com/calculator/uti67afozh
const SCORING_POINT_OF_DIMINISHING_RETURNS = 1700;
presuming these are spitballed?
yeah I did it based on my hypothetical grading rubric from A+ -> F: each letter gets a 20-point range, with the median (C) being "passing" but not good. By this scale, A+ is basically the point of diminishing returns and A- extends to around 4 seconds.
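For illustration only, here's roughly how a 0-100 score can fall out of a complementary log-normal CDF anchored at those constants. The shape value below is an assumed spread for the sketch, not the one derived from the point of diminishing returns, and the normal-CDF approximation is the standard Abramowitz-Stegun polynomial:

const SCORING_MEDIAN = 10000; // ms, scores 50

// Standard normal CDF via the Abramowitz-Stegun polynomial approximation.
function normalCdf(z) {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989422804014327 * Math.exp(-z * z / 2);
  const p = d * t * (0.31938153 + t * (-0.356563782 +
      t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
  return z > 0 ? 1 - p : p;
}

function scoreForFirstInteractive(timeInMs, shape = 0.97 /* assumed spread */) {
  // Complementary log-normal CDF: ~100 near 0ms, 50 at the median, -> 0 as time grows.
  const z = (Math.log(timeInMs) - Math.log(SCORING_MEDIAN)) / shape;
  return Math.round(100 * (1 - normalCdf(z)));
}

// scoreForFirstInteractive(1700)  -> ~97 (around the point of diminishing returns)
// scoreForFirstInteractive(10000) -> 50
// scoreForFirstInteractive(30000) -> ~13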
  const extendedInfo = {
    formatter: Formatter.SUPPORTED_FORMATS.NULL,
-   value: {areLatenciesAll3G, allRequestLatencies, isFast, timeToInteractive}
+   value: {areLatenciesAll3G, allRequestLatencies, isFast, ttfiValue}
let's call it the full timeToFirstInteractive
done
const MAX_QUIET_WINDOW_SIZE = 5000;
const TRACE_BUSY_MSG = 'trace was busy the entire time';

class FirstInteractive extends ComputedArtifact {
Let's add a fileoverview or other description that links to this doc and briefly describes wtf this does: https://docs.google.com/document/d/1GGiI9-7KeY3TPqS3YT271upUVimo-XiL5mwWorDUD4c/edit#
👍 moved it up from compute_ and added some detail
- const onLoad = keyEvents.find(e => e.name === 'MarkLoad' && e.ts > navigationStart.ts);
+ const onLoad = frameEvents.find(e => e.name === 'loadEventStart' && e.ts > navigationStart.ts);
  const domContentLoaded = frameEvents.find(
    e => e.name === 'domContentLoadedEventStart' && e.ts > navigationStart.ts
for both this and loadEvent, usually the End is used, but in this case it actually makes more sense for us to focus on the browser's POV of these events... so 👍 on using Start
Actually......
The doc uses dclEnd rather than dclStart. Since it's used as fi = Math.max(DCL, FI), the End value seems much more accurate. (Handlers are set up during DCL, and the main thread is unavailable during this time.)
yeah, originally I was using DCL as the start point for the search, where using start made some sense, but now that it's just the actual firstInteractive point, using end is much better
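In code form, the clamp being discussed is just this (a sketch; the variable names are made up, and both values are in ms relative to navigationStart):

// firstInteractive can never precede the end of DOMContentLoaded, since handlers
// are still being installed and the main thread is busy until then.
const firstInteractive = Math.max(domContentLoadedEnd, quietWindowCandidate);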
FYI I just finished a branch that moves networkRecords into a computed artifact. Shouldn't matter much for this though.

I'm not sure how this would work, so interested in seeing it :)

rebased PTAL :)
Mostly looking great. A few small adjustments..
  }

  const longTasks = TracingProcessor.getMainThreadTopLevelEvents(traceModel, trace, FMP)
      .filter(evt => evt.end - evt.start >= 50 && evt.end > FMP);
extract 50 to LONG_TASKS_THRESHOLD const
done
    throw new Error('trace not at least 5 seconds longer than FMP');
  }

  const longTasks = TracingProcessor.getMainThreadTopLevelEvents(traceModel, trace, FMP)
rename to longTasksAfterFMP
done
  const startEvent = trace.traceEvents.find(event => event.name === 'TracingStartedInPage');
  const mainThread = TraceProcessor._findMainThreadFromIds(model, startEvent.pid, startEvent.tid);

  // TODO(bckenny): filter for top level slices ourselves?
can we fix our longstanding #489 while you're here?
// See https://github.com/GoogleChrome/lighthouse/issues/489
const schedulableTaskTitle = 'TaskQueueManager::ProcessTaskFromWorkQueue';
const events = mainThread.sliceGroup.slices.filter(slice => {
  return slice.title === schedulableTaskTitle &&
      slice.end > startTime &&
      slice.start < endTime;
});
Just tested this guy out and he's good. This also matches what we see in https://codereview.chromium.org/2610183003/patch/140001/150003
done
@@ -26,7 +26,7 @@ const dbwResults = require('../../fixtures/dbw_tester-perf-results.json');
  /* eslint-env mocha */
  describe('metrics events class', () => {
    it('exposes metric definitions', () => {
-     assert.equal(Metrics.metricsDefinitions.length, 12, '12 metrics not exposed');
+     assert.equal(Metrics.metricsDefinitions.length, 10, '10 metrics not exposed');
feel free to change this test. looks like it's getting annoying for you. :)
done, yes it was :)
@@ -131,39 +131,15 @@ class Metrics {
    }
  },
  {
-   name: 'Time to Interactive (vAlpha)',
let's keep TTI in here for now.
i'd like them to live side-by-side for a while for folks to get comfortable.
then we can drop.
done
   * @return {number} The last index of the lonely task envelope,
   *     -1 if not a lonely task
   */
  static getLastLonelyTaskIndex(longTasks, i, FMP, traceEnd) {
Over in #2023 you're using a candidate + while loop pattern that I find considerably easier to follow the logic of. Is it possible to use the same pattern here?
Take the sorted list of tasks, lastLonelyCandidate is the first, while (lastLonelyCandidate.start < windowEnd)... kind of a thing.
For just getLastLonelyTaskIndex or the whole thing? The trickier piece here that makes everything annoying is the complicated base case differentiation. I planned to fix that by treating the entire span before FMP as one giant long task, which will make the base case the same as considering the first long task, but I'm not sure if I should do that in this PR. If it's too hard to grok right now I can do that switch now instead of in a follow-up, wdyt?
FWIW, I totally agree it's not nearly as easy to follow, but at least a tiny bit of that is because lonely tasks are way harder to explain than simple quiet windows :)
This is difficult to follow right now. "getLastLonelyTaskIndex" is kind of inscrutable... is it looking for the last one in the same envelope as the one at index i?

I planned to fix that by treating the entire span before FMP as one giant long task which will make the base case the same as considering the first long task

from this description, at least, this sounds like a generalization that will make the code harder to follow, but the proof is in the implementation :) If there are three(?) different zones (pre-FMP, FMP+5s, post-FMP+5s), what about explicitly handling them individually?
yeah I'll work on rewriting it
const SCORING_POINT_OF_DIMINISHING_RETURNS = 1700;
const SCORING_MEDIAN = 10000;
// This aligns with the external TTI targets in https://goo.gl/yXqxpL
const SCORING_TARGET = 5000;
do we want to keep this at all? Its meaning is even less clear with the new curve :)
+1 I'll remove
  const trace = artifacts.traces[Audit.DEFAULT_PASS];
  return artifacts.requestFirstInteractive(trace).then(firstInteractive => {
    const timeToFirstInteractive = firstInteractive.timeInMs;
    const isFast = timeToFirstInteractive < MAXIMUM_TTI;
rename MAXIMUM_TTI (and other references to TTI in this file)?
done
 * @see https://docs.google.com/document/d/1GGiI9-7KeY3TPqS3YT271upUVimo-XiL5mwWorDUD4c/edit#
 *
 * First Interactive marks the first moment when a website is minimally interactive:
 * > Enough (but maybe not all) UI components shown on the screen are interactive
is including this without a disclaimer misleading since this is the goal of the heuristic but is not actually tested here? If I were reading this I would be scanning the code for where event listeners are instrumented.
added disclaimer
 * long tasks that are not "lonely".
 *
 * > t = time in seconds since FMP
 * > N = 4 * e^(-0.045 * t) + 1
they're in the doc, but example values (and/or desmos curve) would be good here
done
 * > t = time in seconds since FMP
 * > N = 4 * e^(-0.045 * t) + 1
 * > a "lonely" task is an envelope of 250ms that contains a set of long tasks that have at least
 *   1 second of padding before and after the envelope that contain no long tasks.
should include the FMP + 5s def?
done
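For reference, a tiny sketch of that window-size formula with example values (t is seconds since FMP; the 5s value at t = 0 is where the FMP + 5s minimum comes from):

// Required quiet-window size, in seconds, as a function of time since FMP.
// requiredQuietWindowSize(0)  === 5
// requiredQuietWindowSize(15) ~== 3.0
// requiredQuietWindowSize(60) ~== 1.3
function requiredQuietWindowSize(secondsSinceFMP) {
  return 4 * Math.exp(-0.045 * secondsSinceFMP) + 1;
}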
 * @param {{traceEvents: !Array<!Object>}} trace
 * @param {number=} startTime Optional start time (in ms) of range of interest. Defaults to trace start.
 * @param {number=} endTime Optional end time (in ms) of range of interest. Defaults to trace end.
 * @return {!Array<{start: number, end: number}>}
slice.duration is used in getMainThreadTopLevelEventDurations, not sure if there's anything else needed on these objects
added duration, or I could switch it to !Object if you prefer :)
    return;
  }

  slices.forEach(slice => {
you're harshing EQT perf here :S
oh puh-lease if we cared about perf here it wouldn't have taken months to realize the trace model was being constructed multiple times let alone that old TTI runs this forEach, filter, and sort about 100 times more than necessary :P
oh puh-lease if we cared about perf here it wouldn't have taken months to realize the trace model was being constructed multiple times let alone that TTI runs this forEach, filter, and sort about 100 times more than necessary :P
I didn't say anything about TTI :)
touché
    return slice.end > startTime && slice.start < endTime;
  });

  events.sort((a, b) => a.start - b.start);
maybe at least sort in the caller so getMainThreadTopLevelEventDurations doesn't sort twice?
 * @param {number=} endTime Optional end time (in ms) of range of interest. Defaults to trace end.
 * @return {!Array<{start: number, end: number}>}
 */
static getMainThreadTopLevelEvents(model, trace, startTime = 0, endTime = Infinity) {
can also use model.bounds.min and model.bounds.max for defaults here for slightly more self-evident values
hm, that actually feels a little less self-evident to me as someone who doesn't know the shape and meaning of the model properties well; we also wouldn't be able to specify them in the function signature anymore
hm, that actually feels a little less self-evident to me as someone who doesn't know the shape and meaning of the model properties well

-Infinity for startTime, then?

we also wouldn't be able to specify them in the function signature anymore

that would actually work. You can refer to earlier params in default parameters.
-Infinity for startTime, then?

👍

that would actually work.

TIL

function noway({x, y} = {x: 3, y: 3}, woah = x) {
  console.log('boom', woah);
}

> noway()
boom 3
 * > N = 4 * e^(-0.045 * t) + 1
 * > a "lonely" task is an envelope of 250ms that contains a set of long tasks that have at least
 *   1 second of padding before and after the envelope that contain no long tasks.
 */
also document the DOMContentLoaded bound?
done
Force-pushed from ab7e6fb to 7729fb1.
PTAL :)
yeah this is sooooo much improved. thanks!
some naming bikeshedding but i'm otherwise happy
  let lastEndTime = -Infinity;
  let currentGroup = null;

  // consider all tasks that could possibly be part of a group starting before windowEnd
is the group starting before windowEnd or the tasks?
also here it's called considerationWindowEnd
We need to consider all tasks before considerationWindowEnd, but we only want clusters that start before windowEnd. Consider the case where window end is 15s: there's a 100ms task from 14.9-15s and a 500ms task from 15.5-16s. We need that later task to be clustered with the first so we can properly identify that the main thread isn't quiet, but if that 100ms task weren't there we'd want to ignore it. Make sense? Not sure how to condense that explanation into a better comment though, suggestions welcome :)
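Condensed into code, the clustering idea described here looks roughly like this (a sketch based on the comments and diff in this thread, not the exact final implementation; tasks are {start, end} objects in ms, sorted by start time, and the constant name mirrors the ones discussed below):

const MIN_TASK_CLUSTER_PADDING = 1000; // required quiet time between clusters, in ms

function getTaskClustersInWindow(tasks, startIndex, windowEnd) {
  const clusters = [];
  // Consider every task that could belong to a cluster that starts before windowEnd.
  const clusteringWindowEnd = windowEnd + MIN_TASK_CLUSTER_PADDING;

  let currentCluster = null;
  let previousTaskEndTime = -Infinity;
  for (let i = startIndex; i < tasks.length; i++) {
    const task = tasks[i];
    if (task.start >= clusteringWindowEnd) break;

    // Enough quiet time since the previous task: start a new cluster.
    if (task.start - previousTaskEndTime > MIN_TASK_CLUSTER_PADDING) {
      currentCluster = [];
      clusters.push(currentCluster);
    }

    currentCluster.push(task);
    previousTaskEndTime = task.end;
  }

  return clusters
      .map(clusterTasks => {
        const start = clusterTasks[0].start;
        const end = clusterTasks[clusterTasks.length - 1].end;
        return {start, end, duration: end - start};
      })
      // Drop clusters that only exist because of the extended clustering window.
      .filter(cluster => cluster.start < windowEnd);
}

// With windowEnd = 15000 and tasks at 14900-15000 and 15500-16000, the two tasks form
// one cluster ending at 16000, so the window is not quiet; with only the 15500-16000
// task present, its cluster starts after windowEnd and is ignored.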
aside: would you consider considerationWindowEnd to be a clusteringWindowEnd?
honestly i think you should just use the two sentences you just said ^ as the comment. :)
haha alright sg
  // consider all tasks that could possibly be part of a group starting before windowEnd
  const considerationWindowEnd = windowEnd + MIN_TASK_GROUP_PADDING;
  const couldTaskBeInGroup = task => task && task.start < considerationWindowEnd;
couldTaskBelongToGroup? unless tasks can be in multiple groups
done
   * @param {number} windowEnd
   * @return {!Array<{start: number, end: number, duration: number}>}
   */
  static getTaskGroupsInWindow(tasks, startIndex, windowEnd) {
another naming bikeshed: wdyt about Cluster/TaskCluster rather than group? Implies the proximity that's part of this.
yeah I like cluster more, 👍
  for (let i = startIndex; couldTaskBeInGroup(tasks[i]); i++) {
    const task = tasks[i];

    // check if enough time has passed to start a new group
"if enough time has elapsed, we'll create a new group" (or cluster)
done
  const isTooCloseToFMP = group => group.start < FMP + MIN_TASK_GROUP_FMP_DISTANCE;
  const isTooLong = group => group.duration > MAX_TASK_GROUP_DURATION;
  const isBadTaskGroup = group => isTooCloseToFMP(group) || isTooLong(group);
you could de-extract this guy. seems better to inline him in the .find() below.
I agree it's a bit much, and I started with inlining when my names were shorter, but inlining now creates a linebreak; it seems marginally better to keep the two conditions on the same line and keep each item a one-liner. Still prefer a two-line inlined version over this?
eh. not a huge deal to me.
  static getTaskGroupsInWindow(tasks, startIndex, windowEnd) {
    const groups = [];

    let lastEndTime = -Infinity;
previousTaskEndTime
done
alright @brendankenny all you :)

@brendankenny 🐜 ⛰ 🐒 🐈 🐩
Nice! Lots of comments from me, but all relatively superficial. This is looking just about ready to go
  score,
  rawValue: firstInteractive.timeInMs,
  displayValue: `${displayValue.toLocaleString()}ms`,
  optimalValue: this.meta.optimalValue,
delete
done
  // https://developers.google.com/web/progressive-web-apps/checklist
- const MAXIMUM_TTI = 10 * 1000;
+ const MAXIMUM_TTFI = 10 * 1000;
I thought we were using TTCI here? Using FI as a threshold in offscreen-images.js is probably fine, but the banner isPWA check probably should use CI
I'll revert
nope I'm keeping again :)
nope I'm keeping again :)
I think @paulirish took the AI to get the message out about this for you :)
      slice.start < endTime;
  });

  events.sort((a, b) => a.start - b.start);
how do you feel about dropping sort and making callers do it?
not a fan, but I can live
- i would prefer to do the sort as early as possible, as we don't want audits to have to know things are unsorted
- i'm decently confident that the sort is done by tracingmodel already, so TBH this line can probably just be nuked.
I'm adding tests with actual traces today now that it's in good shape so I'll find out and nuke the sort in caller if necessary
I'm adding tests with actual traces today now that it's in good shape so I'll find out and nuke the sort in caller if necessary
was this 👍 ?
yep pre-sorted by model!
@@ -0,0 +1,73 @@
/**
 * @license
 * Copyright 2017 Google Inc. All rights reserved.
you have confusing choices in license headers :)
it's whatever it was copied from :)
 *
 * If this timestamp is earlier than DOMContentLoaded, use DOMContentLoaded as firstInteractive.
 */
class FirstInteractive extends ComputedArtifact {
will you add to https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/closure/typedefs/ComputedArtifacts.js with /** @type {function(!Trace): !Promise<{timeInMs: number, timestamp: number}>} */
(I believe)
done
  // task from 15.5-16s, we need that later task to be clustered with the first so we can properly
  // identify that main thread isn't quiet.
  const clusteringWindowEnd = windowEnd + MIN_TASK_CLUSTER_PADDING;
  const couldTaskBelongToCluster = task => task && task.start < clusteringWindowEnd;
nit: maybe isInClusteringWindow? "could" is a low bar for descriptiveness :)
I've done zero performance testing, so I don't have a good basis for worrying, but if calculating FirstInteractive is (relatively) slow for some sites, doing out-of-bounds array accessing really isn't going to help things. What about something like an explicit regular i < tasks.length here and then do the if (!couldTaskBelongToCluster) break; (dropping the falsey task check) inside the loop?
done
    // if enough time has elapsed, we'll create a new cluster
    if (task.start - previousTaskEndTime > MIN_TASK_CLUSTER_PADDING) {
      if (currentCluster) {
can you instead push upon creation of the cluster array (since it's known there's at least one task for the new cluster at this point) or is that going to miss one the other way?
done
      return {start, end, duration, tasks};
    })
    // filter out clusters that started after the window because of our clusteringWindowEnd
    .filter(cluster => cluster.start < windowEnd);
ah, nice 🎖
    const task = tasks[i];

    // if enough time has elapsed, we'll create a new cluster
    if (task.start - previousTaskEndTime > MIN_TASK_CLUSTER_PADDING) {
I'm very curious why you went this way instead of clustering by 250ms and then doing a isNextClusterTooClose or whatever with 1000ms. Was there something easier this way or just felt more natural to you or...?
Not sure what you mean by clustering by 250ms since in this new way of viewing things that's more a condition on a task cluster rather than how a cluster is defined, what you're saying sounds mostly to me like why did I change it away from what I had before :)
Not sure what you mean by clustering by 250ms since in this new way of viewing things that's more a condition on a task cluster rather than how a cluster is defined, what you're saying sounds mostly to me like why did I change it away from what I had before :)
yeah, I was curious about your reason for switching
oh this just seemed more straightforward after talking it out with paul, easier to explain exactly what's happening without looking at code
  const start = tasks[0].start;
  const end = tasks[tasks.length - 1].end;
  const duration = end - start;
  return {start, end, duration, tasks};
any reason to keep tasks around? (not on the return type and looks like not used, so OK to drop?)
was just for debugging, can drop
I think there is one case this is missing: once a task is selected as the start of a window, the next long task is never checked against that window start. So, e.g. (in test form):

mainThreadEvents = [
  {start: 5500, end: 5600},
  {start: 6000, end: 6100},
];
const result = firstInteractive.computeWithArtifacts({}, {}, {
  timings: {
    firstMeaningfulPaint: 1000,
    domContentLoaded: 800,
    traceEnd: 24000,
  },
  timestamps: {
    navigationStart: 0,
  },
});

says that FirstInteractive is at 5600, even though that's less than 1000ms away from the next long task, so it should be 6100 (correct me if the example is wrong; it's late :)

I believe the fix could be as simple as adding

if (i + 1 < longTasks.length &&
    longTasks[i + 1].start - windowStart <= MIN_TASK_CLUSTER_PADDING) {
  continue;
}

right after the task is selected as the start of the window.
yup good call, didn't have a test for this before and it got missed in the refactor

alright, I believe all feedback is addressed; added a couple tests for the called-out cases and the ones @brendankenny pointed out, and added a new trace with 5s of quiet on pwa.rocks to assert TTFI is correct with an actual trace
This is a lovely implementation.
Three tiny (and optional) nits, but otherwise looks great to me
  }

  if (!FMP || !DCL) {
    throw new Error(`No ${FMP ? 'domContentLoaded' : 'firstMeaningfulPaint'} event in trace`);
should we throw in traceOfTab just to ruin all dependent audits' days?
I'm down
I only moved FMP since DCL is specific to TTFI but everything needs FMP basically
well, never mind, we explicitly test that trace-of-tab can handle those traces without paints... reverting for now and we can do it in a followup PR if necessary
reverting for now and we can do in a followup PR if necessary
I guess having #1981 open will have to suffice for now :(
  // task from 15.5-16s, we need that later task to be clustered with the first so we can properly
  // identify that main thread isn't quiet.
  const clusteringWindowEnd = windowEnd + MIN_TASK_CLUSTER_PADDING;
  const isInClusteringWindow = task => task && task.start < clusteringWindowEnd;
nit: task will always be defined here now
done
  });

  it('throws on short traces', () => {
    return firstInteractive.compute_({traceEvents: tooShortTrace}, computedArtifacts).then(() => {
nit: for these, can this call computedArtifacts.requestFirstInteractive({traceEvents: tooShortTrace}) so it doesn't have to call the underlying compute_?
done
oh, sorry for the conflict :S

geez even after approval you're givin' me a hard time ;)
Force-pushed from 032bc61 to 008692d.
⏱🖌🏁⏱
👏 Great work @patrickhulce && @brendankenny |
Adds firstInteractive as defined here. Also changes all references from TTI to first-interactive. Removing TTI and replacing it with visually ready is in a different pending PR (currently a blocking bug in speedline I'm investigating)