core(metrics): move FMP to computed artifact #4951

Merged: 7 commits, Apr 12, 2018

Conversation

patrickhulce (Collaborator):

progress on #4872

moves FMP to a computed artifact, and also moves all the junk that used to be in the FMP audit into a metrics.js audit that will collect all metrics
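
In concrete terms, a consuming audit stops computing FMP itself and requests it as a computed artifact instead. A minimal sketch, assuming the request helpers and metricComputationData shape quoted later in this thread (the wrapper function and result shape are illustrative, not the actual diff):

// Minimal sketch (not the actual diff): how an audit obtains FMP once it lives in a
// computed artifact. requestFirstMeaningfulPaint and the metricComputationData shape
// are taken from snippets quoted later in this review; everything else is assumed.
async function auditWithComputedFmp(artifacts, context) {
  const trace = artifacts.traces[Audit.DEFAULT_PASS];
  const devtoolsLog = artifacts.devtoolsLogs[Audit.DEFAULT_PASS];
  const metricComputationData = {trace, devtoolsLog, settings: context.settings};

  const fmp = await artifacts.requestFirstMeaningfulPaint(metricComputationData);
  // fmp.timing is the metric value in ms; fmp.timestamp is the absolute trace timestamp
  return {rawValue: fmp.timing};
}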

@paulirish (Member) left a comment:

mostly good! just a few ideas.


for (const [traceEventName, timing] of Object.entries(traceOfTab.timings)) {
const uppercased = traceEventName.slice(0, 1).toUpperCase() + traceEventName.slice(1);
const metricName = `raw${uppercased}`;
Member:

raw => trace

patrickhulce (Author):

done

metrics.push(Object.assign({metricName}, values));
}
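
For reference, the renamed loop presumably ends up along these lines (a sketch; only the raw-to-trace prefix change is confirmed above, the pushed payload is assumed):

// Include all timings from traceOfTab, using a `trace` prefix per the review above (sketch)
for (const [traceEventName, timing] of Object.entries(traceOfTab.timings)) {
  const uppercased = traceEventName.slice(0, 1).toUpperCase() + traceEventName.slice(1);
  const metricName = `trace${uppercased}`;
  metrics.push({metricName, timing}); // assumed payload; only the prefix rename is confirmed
}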

for (const [traceEventName, timing] of Object.entries(traceOfTab.timings)) {
Member:

// Include all timings from traceOftab

patrickhulce (Author):

done

const metricsMap = {firstContentfulPaint, firstMeaningfulPaint, timeToInteractive};

const metrics = [];
for (const [metricName, values] of Object.entries(metricsMap)) {
Member:

// Include these specific metrics

patrickhulce (Author):

done

const firstContentfulPaint = await artifacts.requestFirstContentfulPaint(metricComputationData);
const firstMeaningfulPaint = await artifacts.requestFirstMeaningfulPaint(metricComputationData);
const timeToInteractive = await artifacts.requestConsistentlyInteractive(metricComputationData);
const metricsMap = {firstContentfulPaint, firstMeaningfulPaint, timeToInteractive};
Member:

move this metricsMap line to right above the for loop. move the const metrics line up higher.

patrickhulce (Author):

done
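
The requested ordering would look roughly like this, assembled from the snippets above (a sketch, not the exact final diff):

// Sketch of the reordering asked for above: declare `metrics` early, build the
// individual metrics via the computed-artifact helpers, and define `metricsMap`
// right above the loop that consumes it.
const metrics = [];

const firstContentfulPaint = await artifacts.requestFirstContentfulPaint(metricComputationData);
const firstMeaningfulPaint = await artifacts.requestFirstMeaningfulPaint(metricComputationData);
const timeToInteractive = await artifacts.requestConsistentlyInteractive(metricComputationData);

// Include these specific metrics
const metricsMap = {firstContentfulPaint, firstMeaningfulPaint, timeToInteractive};
for (const [metricName, values] of Object.entries(metricsMap)) {
  metrics.push(Object.assign({metricName}, values));
}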

@@ -24,6 +24,16 @@ function safeGet(object, path) {
return object;
}

function findValueInMetricsAuditFn(metricName, valueName) {
Member:

valueName => timingOrTimestamp

patrickhulce (Author):

done
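
Only the signature rename appears in the diff above; a hypothetical sketch of the helper's intent (looking up a named metric in the metrics audit's details items, whose {metricName, timing, timestamp} shape appears elsewhere in this thread) might read:

// Hypothetical sketch: the rename is confirmed above, the lookup logic is a guess
// at what "find a value in the metrics audit" means here.
function findValueInMetricsAuditFn(metricName, timingOrTimestamp) {
  return auditResults => {
    const metricsAudit = auditResults.metrics;
    if (!metricsAudit || !metricsAudit.details || !metricsAudit.details.items) return;
    const metric = metricsAudit.details.items.find(item => item.metricName === metricName);
    return metric && metric[timingOrTimestamp];
  };
}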

patrickhulce (Author) commented Apr 10, 2018:

😷 🗡 💨(that's supposed to be a ninja that's addressing feedback)

@paulirish (Member) left a comment:

wfm

@brendankenny (Member) left a comment:

quuuuuestions for you

"items": [
{
"timing": 493,
"timestamp": 185603812639.80002,
"timing": 479,
Member:

why the changes here? Since regenerating from the same source, shouldn't they stay the same?

patrickhulce (Author):

oh good catch!! this was actually a bug introduced back with the TTCI changes: TTCI fails when scoring params are not given in our hacky TTCI.audit(artifacts) approach

@@ -8,8 +8,8 @@
const Metrics = require('../../../lib/traces/pwmetrics-events');
const assert = require('assert');

const dbwTrace = require('../../fixtures/traces/dbw_tester.json');
const dbwResults = require('../../fixtures/dbw_tester-perf-results.json');
const dbwTrace = require('../../results/artifacts/defaultPass.trace.json');
Member:

rename these variables (and in asset-saver) to exampleTrace and sampleV2Results or something like that?

patrickhulce (Author):

hm, why? they're still on the dbw tester so the names are still relevant :)

Member:

ha, somehow I never noticed that's where they were from. Never mind :)


// Include all timestamps of interest from trace of tab
for (const [traceEventName, timing] of Object.entries(traceOfTab.timings)) {
const uppercased = traceEventName.slice(0, 1).toUpperCase() + traceEventName.slice(1);
Member:

lol no hope of inferring type

patrickhulce (Author):

sure there is! :D

string -> string -> string + string -> string => string

Member:

ha, I meant of metrics :P

patrickhulce (Author):

oh lol, yes :)

assert.equal(result.displayValue, '530\xa0ms');
assert.equal(result.rawValue, 529.9);
Member:

can we preserve these tests (put them in metrics-test.js, maybe?) Many of them were hard-won results from problematic traces, and having them in our tests makes any future deviations from our current trace interpretations more obviously deliberate

patrickhulce (Author):

by "these tests" do you mean the assertions on the correct timestamp/navstart/fcp rather than just the FMP? AFAICT I didn't regress coverage of any test scenarios

Member:

> do you mean the assertions on the correct timestamp/navstart/fcp rather than just the FMP?

yes

> AFAICT I didn't regress coverage of any test scenarios

is there elsewhere we test for navstart/fCP/fmpFellBack for these problematic traces (lateTracingStartedTrace, badNavStartTrace, preactTrace, etc) specifically?

patrickhulce (Author):

Member:

ha, well that's perfect then

}, computedArtifacts);
}

/* eslint-env mocha */
describe('Performance: first-meaningful-paint audit', () => {
const context = {options, settings: {throttlingMethod: 'provided'}};
Member:

should these tests include simulated runs or should most of these be kicked up to trace-of-tab tests?

patrickhulce (Author):

I'm keeping all of the simulated/provided/devtools test distinctions inside the computed artifact versions and keeping all the audit tests on provided to keep things simpler there. does that sound good to you?

Member:

I mean, it's not terrible, and it reduces churn. I was just thinking that if we wrote them today, this unit test would probably just verify that it can get results for the various throttling methods, while the nuts and bolts would be relegated to the computed artifact/lantern computed artifact unit tests.

WDYT? This can also just be some legacy we live with, and some day we can maybe clean up our unit test organization

patrickhulce (Author):

done, might as well have a real fresh start this time around :D

TTFI.audit(artifacts).catch(() => ({rawValue: 0})),
TTCI.audit(artifacts).catch(() => ({rawValue: 0})),
TTFI.audit(artifacts, context).catch(() => ({rawValue: 0})),
TTCI.audit(artifacts, context).catch(() => ({rawValue: 0})),
patrickhulce (Author), Apr 12, 2018:

this was the bug you caught @brendankenny

I'll do a followup PR to make these use computed artifacts instead 👍

🤔 I wonder if we should consider the sample LHR to be a golden file whose changes must all be approved in review, to catch these sooner

Member:

> I wonder if we should consider the sample LHR to be a golden file whose changes must all be approved in review, to catch these sooner

that's a good idea. I wonder if there's some way to enforce that? Maybe have Travis regenerate sample_v2.json and diff with the checked in version? cc @paulirish

patrickhulce (Author), Apr 12, 2018:

yeah that was my thought

yarn update:sample-json
git diff path/to/sample_v2.json --exit-code

Member:

peek at #4956

patrickhulce (Author):

just as an explicit notice for the purposes of #4934, I'm currently waiting on a response to review comments :)

@@ -138,7 +138,8 @@ class TraceOfTab extends ComputedArtifact {
firstPaintEvt: firstPaint,
firstContentfulPaintEvt: firstContentfulPaint,
firstMeaningfulPaintEvt: firstMeaningfulPaint,
onLoadEvt: onLoad,
loadEvt: load,
Member:

will you add the new metrics audit and onLoadEvt -> loadEvt to #4333? We should probably have a list, and we haven't been adding "breaking" to our PR titles (oh well)

patrickhulce (Author):

done

patrickhulce (Author):

alright @brendankenny I think all feedback has been addressed

@brendankenny (Member) left a comment:

One last request *ducks*

But otherwise LGTM!

traces: {[Audit.DEFAULT_PASS]: {traceEvents}},
devtoolsLogs: {[Audit.DEFAULT_PASS]: []},
}, computedArtifacts);
const context = {options, settings: {throttlingMethod: 'provided'}};
Member:

maybe also a simulated test (and devtools? Even though it's currently the same execution path as this) so it checks that it can also successfully call through?

patrickhulce (Author):

done
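
The added coverage presumably looks something like this (a sketch: FMPAudit and the 'simulate' settings value are assumptions; the artifacts setup mirrors the snippet quoted above):

// Sketch of the extra simulated-throttling test requested above; it only checks
// that the audit can call through and produce a finite result.
it('computes FMP with simulated throttling', async () => {
  const artifacts = Object.assign({
    traces: {[Audit.DEFAULT_PASS]: {traceEvents}},
    devtoolsLogs: {[Audit.DEFAULT_PASS]: []},
  }, computedArtifacts);
  const context = {options, settings: {throttlingMethod: 'simulate'}};
  const result = await FMPAudit.audit(artifacts, context); // FMPAudit name is assumed
  assert.ok(Number.isFinite(result.rawValue));
});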

@brendankenny (Member):

💥 💥
