core(fr): compute timespan saving with observed throughput #13478
Conversation
/**
 * @param {number} wastedBytes
 */
simulateTimespan(wastedBytes) {
I wanted to move the logic here for two reasons:
- Eventually we may want a true timespan simulator to calculate more detailed savings.
- It was easier to pipe the `observedThroughput` here.

I won't die on this hill though, we can also consider @brendankenny's design: #13323 (comment)
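To make the discussion concrete, here is a minimal sketch of what a method like `simulateTimespan` could do. This is an illustration, not Lighthouse's actual implementation: it assumes throughput is in bits per second and that wasted milliseconds are derived by dividing wasted bits by the observed throughput (in the real code, the throughput would come from the simulator's options rather than a parameter).

```javascript
// Hypothetical sketch: convert wasted bytes into wasted milliseconds using
// the throughput observed during the run. Units assumed: bytes in,
// bits/second for throughput, milliseconds out.
function simulateTimespan(wastedBytes, observedThroughput) {
  // Guard against an unknown/zero throughput to avoid dividing by zero.
  if (!observedThroughput) return 0;
  const wastedBits = wastedBytes * 8;
  const wastedSeconds = wastedBits / observedThroughput;
  return wastedSeconds * 1000;
}

// e.g. 125000 wasted bytes over an observed 1 Mbps connection = 1000 ms
```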
don't love this name since 'timespan' is generally something else in a lighthouse context. i still like the `computeWastedMsWithWastedBytes` name tbh. how bout dat
Ya SGTM
> I wanted to move the logic here for two reasons:
> - Eventually we may want a true timespan simulator to calculate more detailed savings.
> - It was easier to pipe the `observedThroughput` here.
>
> we can also consider @brendankenny's design: #13323 (comment)
Yeah the difference is mostly just where it's defined. There's always the danger of YAGNI, but in addition to it being a simple solution:
- `Simulation.Options` is where the other network analysis values are already passed in
- we only have one place where we construct a simulator and it's been like that for a long time and no plans to change it

so it works for me.

If we switch devtools throttling to use the observed throughput (as discussed in #13323) it may make more sense to do this overriding of throughput in load-simulator and only have the single `throughput` option, but we can think about that when/if that change comes up.
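The "single throughput option" alternative described above could look something like the following sketch, where the override happens once when simulator options are resolved. Names and shapes here are illustrative assumptions, not the actual load-simulator API.

```javascript
// Hypothetical sketch of resolving a single effective throughput inside a
// load-simulator factory: a configured throughput of 0 means "unthrottled",
// in which case we fall back to the throughput observed during the run.
function resolveThroughput(options) {
  if (options.throughput && options.throughput > 0) {
    return options.throughput; // explicit throttling wins
  }
  return options.observedThroughput; // unthrottled: use what we measured
}
```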
@@ -153,6 +153,7 @@ declare module Gatherer {
  interface Options {
    rtt?: number;
    throughput?: number;
    observedThroughput: number;
Made this required so we don't have to set a default value.
non-reviewer lgtm!
Followup to #13323
This will use the observed (real) throughput to calculate savings in timespan mode with the desktop config (or any config with 0 throughput). Theoretically, any variance in the observed throughput would imply variance in performance for these cases.
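A worked example of the behavior this PR describes, under the same assumptions as above (bits-per-second throughput, a hypothetical helper name): with a config whose throttling throughput is 0, savings are derived from the observed throughput, so two runs that observe different throughputs will report different savings for the same wasted bytes — which is the variance noted above.

```javascript
// Hypothetical helper: wasted ms from observed throughput (bits/second).
function wastedMsFromObserved(wastedBytes, observedThroughput) {
  if (!(observedThroughput > 0)) return 0;
  return (wastedBytes * 8 * 1000) / observedThroughput;
}

const wastedBytes = 500000; // 500 KB of avoidable transfer
const fastRun = wastedMsFromObserved(wastedBytes, 20e6); // 200 ms at 20 Mbps
const slowRun = wastedMsFromObserved(wastedBytes, 5e6);  // 800 ms at 5 Mbps
```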