
Operation Yaquina Bay #2146

Closed
paulirish opened this issue May 4, 2017 · 6 comments

@paulirish (Member) commented May 4, 2017

Named after the Yaquina Bay lighthouse which, like a fast webpage whose load you can watch from start to finish, is short enough that it doesn't take long to see the whole thing:

[image: the Yaquina Bay lighthouse]


OYB is about making Lighthouse itself fast. I mean.. a performance tool should probably be performant. 😉

Goals:

  1. Make the default 4-category Lighthouse run faster
  2. Improve the run duration for users who want to run a single category

The full start-to-finish LH run of cnn.com takes 3 minutes. Oh my. Waiting for the first pass's page load takes 25 seconds, so that's the absolute fastest the entire LH run can be right now.

While cutting the time to 1/6th is a stretch, we should at least cut it in half.


While profiling LH runtimes, we uncovered a few issues:

  1. The cache being disabled for the entire run (Disk cache is disabled for the whole run #2089) slows down passes 3 and 4.
  2. The Styles gatherer generally takes >10s on heavy publisher sites.
  3. Speedline also takes >10s on long page loads.
  4. Retrieving the trace can easily take ~10s.
  5. The TTI (beta) / LoadFastEnough audits are costly (~5s) on larger traces.
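Timings like these can be surfaced with a plain wall-clock wrapper around each phase. This is a generic sketch, not Lighthouse's actual instrumentation; `timePhase` and its logging shape are made up for illustration:

```javascript
// Illustrative timing wrapper (not Lighthouse code): runs an async phase,
// records its wall-clock duration in ms, and passes the result through,
// so slow gatherers and audits stand out in the log.
async function timePhase(name, fn, log = {}) {
  const start = process.hrtime.bigint();
  const result = await fn();
  log[name] = Number(process.hrtime.bigint() - start) / 1e6; // ms
  return result;
}
```

Wrapping each gatherer and audit call this way is enough to produce the per-phase breakdown described above.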

I think we can do a few selective fixes here to drive our total run duration down pretty easily.


➡️ View all OYB tickets: https://github.com/GoogleChrome/lighthouse/issues?utf8=%E2%9C%93&q=label%3AOYB

@patrickhulce (Collaborator) commented May 4, 2017

Update on "TTI(beta) / LoadFastEnough audits are costly (~5s) on larger traces":

tl;dr: removing the old TTI (alpha) code before I/O will get us a few seconds; nothing can be done about TTFI in the short term, since it's only ~20ms of computation after getting a trace model.

For firstInteractive, getting the trace model accounts for roughly 99.5% of the time spent (3110ms spent getting the trace model, 17ms spent filtering to long tasks and computing the result), so if we do away with that and compute the duration slices ourselves, we can likely get a pretty big win. However, we'd really just shift the cost to EIL, since it also uses the trace model. (EIL doesn't need anything more than what TTFI needs.) This seems an unlikely dependency to remove before I/O, and one likely to introduce bugs if we maintain it ourselves, but I might investigate further if time permits.
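Computing the duration slices ourselves could look something like the sketch below. The event shape (`ph`, `ts`, `dur` in microseconds, pre-filtered to top-level main-thread events) is an assumption about the Trace Event format, not Lighthouse's actual code:

```javascript
// Sketch (not the Lighthouse implementation): derive top-level task duration
// slices directly from complete ('ph': 'X') trace events, skipping the full
// trace model. Events are assumed to carry ts/dur in microseconds.
function topLevelDurations(events, windowStart, windowEnd) {
  return events
    .filter(e => e.ph === 'X' && e.ts + e.dur > windowStart && e.ts < windowEnd)
    .map(e => {
      // Clip the first/last task to the window boundaries.
      const start = Math.max(e.ts, windowStart);
      const end = Math.min(e.ts + e.dur, windowEnd);
      return (end - start) / 1000; // ms
    });
}
```

Filtering the resulting durations to long tasks is then the cheap ~17ms step mentioned above.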

For old TTI, getting the trace model/running Speedline only accounts for ~66% of the time spent (4738ms to get the trace model/Speedline, since the trace model should be cached by then; 2422ms computing), so there's likely more room for improvement, but killing it entirely should save us that time anyway.

@patrickhulce (Collaborator)

> Retrieving the trace can take like 10s easily.

FWIW, 5s of that wait time is mandatory per the definitions of the metrics. Most of my CNN runs have been ~6-8s, so it's not as big a candidate for improvement as we might like.

@brendankenny (Member)

> we'd really just shift the cost to EIL since it also uses trace model

EIL only needs durations of the top-level tasks (plus start/end of first/last for clipping), so if we extract them ourselves it won't need a trace model either.

@patrickhulce (Collaborator)

> EIL only needs durations of the top-level tasks (plus start/end of first/last for clipping), so if we extract them ourselves it won't need a trace model either.

👍 Yeah, just updated. I thought it used something else off the trace model, but realized the percentile computation doesn't actually use anything but the durations.
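To make that point concrete: a percentile over a list of durations needs nothing but the numbers themselves. This is an illustrative simplification, not Lighthouse's exact EIL formula (which estimates expected input queueing time):

```javascript
// Illustrative percentile over task durations (not the real EIL math):
// sort the durations and pick the value at the requested fraction.
function percentile(durations, p) {
  const sorted = durations.slice().sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor(p * sorted.length));
  return sorted[idx];
}
```

Since this operates on a bare array of numbers, the durations extracted from raw trace events suffice; no trace model is required.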

@paulirish (Member, Author) commented May 11, 2017

Holy crap, we've made some huge progress here.

Full run for cnn.com

  • 10 days ago: 169s
  • Today: 50s (238% faster)

Full run for theverge.com

  • 10 days ago: 92s
  • Today: 53s (73% faster)

Full run for paulirish.com

  • 10 days ago: 30s
  • Today: 26s (15% faster)

Full run for example.com

  • 10 days ago: 13s
  • Today: 12s (8% 😛 )

"Today's run" includes #2220; "10 days ago" was commit 2d64961.
Measured from the CLI: time lighthouse <url>
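For reference, the "% faster" figures above come from the speedup ratio: old time over new time, minus one, scaled to a percentage. A tiny helper (made up here, not from the repo) shows the arithmetic:

```javascript
// "% faster" as used above: (old / new - 1) * 100.
// e.g. 169s -> 50s is a 3.38x speedup, i.e. 238% faster.
function pctFaster(oldSecs, newSecs) {
  return (oldSecs / newSecs - 1) * 100;
}
```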

Nice work, everyone. @patrickhulce, you made quick work of the slowness!

cc @pavelfeldman

@paulirish (Member, Author)

We'll call this a win.

🎉 🎉 🎉 🎉
