
core(fr): compute timespan saving with observed throughput #13478

Merged: 5 commits into master from fr-observed-throughput on Dec 15, 2021

Conversation

adamraine (Member)

Follow-up to #13323

This will use the observed (real) throughput to calculate savings in timespan mode with the desktop config (or any config with a throughput of 0). Theoretically, any variance in the observed throughput would imply variance in performance for these cases.
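A minimal sketch of the fallback described above, assuming hypothetical names (`computeWastedMs` and its options object are illustrative, not Lighthouse's actual API): when the configured throughput is 0, fall back to the observed throughput to convert wasted bytes into wasted milliseconds.

```javascript
// Hypothetical sketch; names are illustrative, not Lighthouse's actual API.
function computeWastedMs(wastedBytes, options) {
  // Prefer the configured (simulated) throughput; if the config specifies
  // 0 (e.g. desktop / no throttling), fall back to what was observed.
  const throughput = options.throughput || options.observedThroughput;
  if (!throughput) return 0; // nothing to base an estimate on
  // bytes ÷ (bytes per second) → seconds, then × 1000 → milliseconds
  return (wastedBytes / throughput) * 1000;
}
```

As the description notes, an estimate derived from observed throughput varies run-to-run as much as the observed network conditions do.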

@adamraine adamraine requested a review from a team as a code owner December 9, 2021 20:30
@adamraine adamraine requested review from connorjclark and removed request for a team December 9, 2021 20:30
/**
 * @param {number} wastedBytes
 */
simulateTimespan(wastedBytes) {
adamraine (Member, Author)


I wanted to move the logic here for two reasons:

  • Eventually we may want a true timespan simulator to calculate more detailed savings.
  • It was easier to pipe the observedThroughput here.

I won't die on this hill, though; we can also consider @brendankenny's design: #13323 (comment)

Member


don't love this name since 'timespan' is generally something else in a lighthouse context.

i still like the computeWastedMsWithWastedBytes name tbh. how bout dat

adamraine (Member, Author)


Ya SGTM

Member


> I wanted to move the logic here for two reasons:
>
>   • Eventually we may want a true timespan simulator to calculate more detailed savings.
>   • It was easier to pipe the observedThroughput here.
>
> we can also consider @brendankenny's design: #13323 (comment)

Yeah the difference is mostly just where it's defined. There's always the danger of YAGNI, but in addition to it being a simple solution:

  • Simulation.Options is where the other network analysis values are already passed in
  • we only have one place where we construct a simulator; it's been like that for a long time, and there are no plans to change it

so it works for me.

If we switch devtools throttling to use the observed throughput (as discussed in #13323) it may make more sense to do this overriding of throughput in load-simulator and only have the single throughput option, but we can think about that when/if that change comes up.
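A sketch of the alternative floated above (illustrative only; the `settings` shape and function name are assumptions, not Lighthouse's actual code): resolve the effective throughput once where the simulator's options are built, so downstream code only ever sees a single `throughput` value.

```javascript
// Illustrative sketch: resolve throughput once when constructing the
// simulator options, instead of piping observedThroughput downstream.
// `settings` mirrors a throttling config; its shape is assumed here.
function createSimulatorOptions(settings, observedThroughput) {
  // Convert the configured Kbps to bytes per second; 0 means "unthrottled".
  const configured = (settings.throttling.throughputKbps * 1024) / 8;
  return {
    rtt: settings.throttling.rttMs,
    // Fall back to observed throughput when the config specifies none.
    throughput: configured || observedThroughput,
  };
}
```

The trade-off is the one discussed above: this keeps a single throughput option, but moves the override into simulator construction rather than the audit-side computation.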

@@ -153,6 +153,7 @@ declare module Gatherer {
interface Options {
rtt?: number;
throughput?: number;
observedThroughput: number;
adamraine (Member, Author)


Made this required so we don't have to set a default value.

@paulirish (Member)

non-reviewer lgtm!

@adamraine adamraine merged commit 4afafae into master Dec 15, 2021
@adamraine adamraine deleted the fr-observed-throughput branch December 15, 2021 20:11