Operation Yaquina Bay #2146
Update on "TTI (beta) / LoadFastEnough audits are costly (~5s) on larger traces":

tl;dr: removing the old TTI (alpha) code before I/O will get us a few seconds; nothing can be done about TTFI in the short term, since it's only ~20ms of computation after getting a trace model.

For firstInteractive, getting the trace model accounts for roughly 99.5% of the time spent (3110ms getting the trace model vs. 17ms filtering to long tasks and computing the result), so if we do away with that and compute duration slices ourselves, we can likely get a pretty big win.

For old TTI, getting the trace model/running speedline only accounts for ~66% of the time spent (4738ms for the trace model/speedline, since the trace model should be cached by then, plus 2422ms of computation), so there's likely more room for improvement, but killing that entirely should save us that time anyway.
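Computing duration slices directly from the raw trace, as suggested above, could look something like this minimal sketch. The event name and the 50ms long-task threshold here are assumptions for illustration, not Lighthouse's actual implementation; Chrome trace events do carry `ts`/`dur` fields in microseconds.

```javascript
// Hypothetical helper: derive long-task durations straight from raw trace
// events, skipping the full trace-model construction that dominates the
// audit's runtime. `ts` and `dur` are in microseconds in Chrome traces.
function getLongTaskDurations(traceEvents, thresholdMs = 50) {
  return traceEvents
    // Assumed name for top-level main-thread task events.
    .filter(e => e.name === 'TaskQueueManager::ProcessTaskFromWorkQueue' && e.dur)
    .map(e => e.dur / 1000) // µs → ms
    .filter(durationMs => durationMs > thresholdMs);
}
```

Filtering raw events this way is O(n) over the event array, versus the full graph construction a trace model performs.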
FWIW, 5s of wait time is mandatory based on the definitions of the metrics. Most of my CNN runs have been about 6-8s, so it's not as big a candidate for improvement as we might like.
EIL only needs durations of the top-level tasks (plus start/end of first/last for clipping), so if we extract them ourselves it won't need a trace model either. |
👍 Yeah, just updated. I thought it used something else off the trace model, but realized the percentile stuff doesn't actually use anything but the durations.
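Since the percentile step only needs the durations, it can be sketched as a generic nearest-rank percentile over a bare array. This is an illustrative helper, not EIL's actual weighting scheme:

```javascript
// Nearest-rank percentile over task durations alone; a hypothetical sketch
// showing that no trace model is needed once durations are extracted.
function percentile(durations, p) {
  if (durations.length === 0) return 0;
  const sorted = durations.slice().sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.floor(p * sorted.length));
  return sorted[index];
}
```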
Holy crap, we've made some huge progress here. Full run for cnn.com
Full run for theverge.com
Full run for paulirish.com
Full run for example.com
"Today's run" includes #2220. 10 days ago was 2d64961. Nice work, everyone. @patrickhulce, you made quick work of the slowness!
We'll call this a win. 🎉 🎉 🎉 🎉
Named after the Yaquina Bay lighthouse which, like a fast webpage whose load you can watch from start to finish, is short enough that it doesn't take long to see the whole thing:
![image](https://cloud.githubusercontent.com/assets/39191/25690902/b8770a26-304a-11e7-9c4e-9a5c6fb20ec4.png)
OYB is about making Lighthouse itself fast. I mean... a performance tool should probably be performant. 😉
Goals:
The full start-to-finish LH run of cnn.com takes 3 minutes. Oh my. Waiting for load of the first pass takes 25 seconds, so that's the absolute fastest the entire LH run can be right now.
While cutting the time to 1/6th is a stretch, we should at least cut it in half.
While profiling LH runtimes, we uncovered a few issues:
I think we can do a few selective fixes here to drive our total run duration down pretty easily.
➡️ View all OYB tickets: https://github.com/GoogleChrome/lighthouse/issues?utf8=%E2%9C%93&q=label%3AOYB