
new_audit: add responsiveness metric for timespans #13917

Merged: 14 commits, Apr 27, 2022

Conversation

brendankenny (Member):

Part of #13916

Adds an experimental version of https://web.dev/responsiveness/ for timespans.

Also adds a new timespan test trace from a page with interaction events coming from both the main frame and a child iframe.
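Per https://web.dev/responsiveness/, the metric boils down to reporting something like the worst interaction latency observed while the timespan is recording. A minimal sketch of that idea follows; the event shape and names are illustrative assumptions, not this PR's actual trace handling:

// Sketch only: assumes interaction events have already been extracted from
// the timespan trace as plain {interactionId, duration} objects (durations in ms).
/**
 * @param {Array<{interactionId: number, duration: number}>} interactionEvents
 * @return {{timing: number}|null} The worst interaction duration, or null if
 *     no interactions occurred during the timespan.
 */
function computeWorstInteraction(interactionEvents) {
  if (interactionEvents.length === 0) return null;
  const worst = interactionEvents.reduce((max, evt) =>
    evt.duration > max.duration ? evt : max);
  return {timing: worst.duration};
}

// Interactions from the main frame and a child iframe are pooled together,
// which is what the new test trace in this PR exercises.
const exampleEvents = [
  {interactionId: 1, duration: 120}, // main-frame click
  {interactionId: 2, duration: 368}, // iframe keypress
];
console.log(computeWorstInteraction(exampleEvents)); // {timing: 368}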

static async audit(artifacts, context) {
  const gatherContext = artifacts.GatherContext;
  const {settings} = context;
  // TODO: only timespan currently supported.
brendankenny (Member Author):

a lot of these TODOs are items in #13916

 */
static get meta() {
  return {
    id: 'interaction-to-next-paint',
brendankenny (Member Author):

@paulirish do you have feelings about what we should call the audit if we want the name to stand the test of time?

Reviewer (Member):

On the CrUX side we're using an experimental_ prefix in both BQ and the API. We've never done that here, but it would be consistent and fairly reasonable.

brendankenny (Member Author):

experimental-interaction-to-next-paint it is :)
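With that settled, the meta block from the snippet above presumably ends up along these lines (a sketch; only the id comes from this thread, the comment and elided fields are assumptions):

static get meta() {
  return {
    // Prefixed per the CrUX experimental_ convention discussed above, so the
    // stable name stays available for later.
    id: 'experimental-interaction-to-next-paint',
    // ...remaining meta fields (title, description, etc.) not shown in the diff.
  };
}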

];

/** @type {Record<string, LH.Config.AuditRefJson[]>} */
const frCategoryAuditRefExtensions = {
  'performance': [
    {id: 'uses-responsive-images-snapshot', weight: 0},
    {id: 'interaction-to-next-paint', weight: 30, group: 'metrics', acronym: 'INP'},
brendankenny (Member Author):

This is the same weight as TBT, so it's 50/50, but maybe we don't care because timespan reports don't show the weighted score?

Reviewer (Member):

If this is going to be a diagnostic in navigation mode, it might be helpful to make the score 0, but someone reading the config wouldn't know that it's still important for timespans.

brendankenny (Member Author):

> If this is going to be a diagnostic in navigation mode, it might be helpful to make the score 0, but someone reading the config wouldn't know that it's still important for timespans.

Yeah, makes sense, done. The scoring is confusing at the JSON level in timespans, but scoring is confusing in timespans generally, so that's nothing new :)
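For reference, the resolved config entry would then read roughly like this (a sketch of the snippet above with the weight zeroed out and the id presumably renamed to match the audit):

/** @type {Record<string, LH.Config.AuditRefJson[]>} */
const frCategoryAuditRefExtensions = {
  'performance': [
    {id: 'uses-responsive-images-snapshot', weight: 0},
    // Weight 0 so it reads as a diagnostic and doesn't affect the score;
    // timespan reports don't surface the weighted score anyway.
    {id: 'experimental-interaction-to-next-paint', weight: 0, group: 'metrics', acronym: 'INP'},
  ],
};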

// TODO: only timespan currently supported.
if (gatherContext.gatherMode !== 'timespan' ||
    // TODO: simulated timespan isn't supported by lantern.
    settings.throttlingMethod === 'simulate') {
Reviewer (Member):

Could add a supportedModes: ['timespan'] to the meta for now.

Also, throttlingMethod should always be overridden in timespan mode:

function overrideSettingsForGatherMode(settings, context) {
  if (context.gatherMode === 'timespan') {
    if (settings.throttlingMethod === 'simulate') {
      settings.throttlingMethod = 'devtools';
    }
  }
}

brendankenny (Member Author):

Added supportedModes: ['timespan'], but kept the explicit simulate check. I'd like to keep the throttlingMethod support independent of the gatherMode until we figure out how good the responsiveness simulation can be.
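So the resulting guard looks roughly like this (a sketch combining the snippet above with the supportedModes suggestion; the not-applicable return shape is an assumption, not shown in this thread):

static get meta() {
  return {
    id: 'experimental-interaction-to-next-paint',
    // Only timespan is supported for now, per the suggestion above.
    supportedModes: ['timespan'],
    // ...
  };
}

static async audit(artifacts, context) {
  const gatherContext = artifacts.GatherContext;
  const {settings} = context;
  // Keep the explicit throttling check independent of gatherMode until the
  // quality of simulated responsiveness is better understood.
  if (gatherContext.gatherMode !== 'timespan' ||
      settings.throttlingMethod === 'simulate') {
    return {score: null, notApplicable: true};
  }
  // ...compute the responsiveness value from the trace (not shown).
}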

brendankenny (Member Author):

hmm:

Summary of all failing tests
FAIL lighthouse-core/test/fraggle-rock/scenarios/api-test-pptr.js (44.918 s)
  ● Fraggle Rock API › startTimespan › should compute results from timespan after page load

    expect(received).toMatchInlineSnapshot(snapshot)

    Snapshot name: `Fraggle Rock API startTimespan should compute results from timespan after page load 2`

    Snapshot: 18
    Received: 19

      142 |       expect(auditResults.length).toMatchInlineSnapshot(`45`);
      143 |
    > 144 |       expect(notApplicableAudits.length).toMatchInlineSnapshot(`18`);
          |                                          ^
      145 |       expect(notApplicableAudits.map(audit => audit.id)).not.toContain('total-blocking-time');
      146 |
      147 |       expect(erroredAudits).toHaveLength(0);

      at Object.<anonymous> (lighthouse-core/test/fraggle-rock/scenarios/api-test-pptr.js:144:42)

ok, so update:

-      expect(notApplicableAudits.length).toMatchInlineSnapshot(`18`);
+      expect(notApplicableAudits.length).toMatchInlineSnapshot(`19`);

and now

FAIL lighthouse-core/test/fraggle-rock/scenarios/api-test-pptr.js (45.002 s)
  ● Fraggle Rock API › startTimespan › should compute results from timespan after page load

    expect(received).toMatchInlineSnapshot(snapshot)

    Snapshot name: `Fraggle Rock API startTimespan should compute results from timespan after page load 2`

    Snapshot: 19
    Received: 18

      142 |       expect(auditResults.length).toMatchInlineSnapshot(`45`);
      143 |
    > 144 |       expect(notApplicableAudits.length).toMatchInlineSnapshot(`19`);
          |                                          ^
      145 |       expect(notApplicableAudits.map(audit => audit.id)).not.toContain('total-blocking-time');
      146 |
      147 |       expect(erroredAudits).toHaveLength(0);

      at Object.<anonymous> (lighthouse-core/test/fraggle-rock/scenarios/api-test-pptr.js:144:42)

@adamraine any thoughts?


const ExperimentalInteractionToNextPaint =
  require('../../../audits/metrics/experimental-interaction-to-next-paint.js');
const interactionTrace = require('../../fixtures/traces/timespan-responsiveness.trace.json');
Reviewer (Member):

Could you add an m10X to this filename?

brendankenny (Member Author):

> Could you add an m10X to this filename?

done
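For context, a test in that file would exercise the audit against the fixture roughly like this, reusing the two requires from the snippet above (a sketch; the artifact shape and assertions here are assumptions, not the PR's actual test):

it('computes a responsiveness value from the timespan trace', async () => {
  const artifacts = {
    GatherContext: {gatherMode: 'timespan'},
    traces: {[ExperimentalInteractionToNextPaint.DEFAULT_PASS]: interactionTrace},
  };
  const context = {settings: {throttlingMethod: 'devtools'}, computedCache: new Map()};

  const result = await ExperimentalInteractionToNextPaint.audit(artifacts, context);
  expect(result.numericValue).toBeGreaterThan(0);
  expect(result.score).toBeGreaterThanOrEqual(0);
});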

adamraine (Member):

> @adamraine any thoughts?

I'll investigate

adamraine (Member) left a comment:

Overall LGTM, excited to see this in action

lighthouse-core/computed/metrics/responsiveness.js (outdated review thread, resolved)
Co-authored-by: Adam Raine <6752989+adamraine@users.noreply.github.com>