
Improving page load: Putting the # of requests vs page size debate to bed #856

paulirish opened this issue Dec 18, 2013 · 6 comments


@paulirish commented Dec 18, 2013

Just a ticket to brainstorm this article idea first…


3 years ago I tried to get into this question: how many concatenated bytes were equivalent to a new network request?

Even that one is tough.

I want to see how we can educate people about latency and RTT overhead in the vocabulary they use to make development/build decisions.

One idea I think may be effective is taking a pretty hefty site and seeing its waterfall on cable/3g/4g to visualize the effect latency has. Then dramatically reduce the request count and repeat.
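To make that experiment concrete, here's a toy model (all numbers hypothetical, chosen only for illustration): total load time for the same byte payload split across many vs. few requests, at cable-like and 3g-like RTTs, assuming 6 parallel connections and ignoring TCP slow start and server think time.

```python
# Toy model of the waterfall experiment above. Hypothetical numbers:
# RTTs, bandwidth, and request counts are illustrative, not measured.
def load_time_ms(n_requests, total_kb, rtt_ms, bandwidth_kbps, parallel=6):
    """Very rough page load time: setup RTTs plus raw transfer time."""
    transfer_ms = total_kb * 8 / bandwidth_kbps * 1000
    # Each "wave" of up to `parallel` requests costs one RTT of setup.
    waves = -(-n_requests // parallel)  # ceiling division
    return waves * rtt_ms + transfer_ms

for rtt_ms, name in [(28, "cable"), (300, "3g")]:
    many = load_time_ms(120, 1500, rtt_ms, 5000)
    few = load_time_ms(20, 1500, rtt_ms, 5000)
    print(f"{name}: 120 requests -> {many:.0f} ms, 20 requests -> {few:.0f} ms")
```

Even this crude model shows the point of the experiment: on cable the request reduction barely matters, while at 3g RTTs it dominates the load time.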

Other ideas?

@paulirish commented Dec 18, 2013

@brianleroux wrote this awesome app to explain:

@paulirish commented Dec 19, 2013

Ideas from Tony Gentilcore:

We could run our top million site analysis under those simulated connections and then plot the # of bytes loaded to PLT/SpeedIndex as well as the # of requests to PLT/SpeedIndex.

I suspect we'd see strong correlations in both graphs, but that the # of requests is steeper than # of bytes.
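The analysis Tony proposes could be sketched roughly as follows — per-site measurements in, two correlation coefficients out. The data here is synthetic (a hypothetical stand-in for the top-million crawl under simulated connections), with the latency-bound assumption baked into the toy model:

```python
# Sketch of the proposed correlation analysis. The per-site data is
# synthetic; a real run would use WebPagetest results under simulated
# connection profiles.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
sites = []
for _ in range(1000):
    requests = random.randint(10, 200)
    kbytes = random.randint(100, 3000)
    # Toy assumption: on latency-bound pages, request count dominates PLT.
    plt_ms = 50 * requests + 0.5 * kbytes + random.gauss(0, 500)
    sites.append((requests, kbytes, plt_ms))

r_requests = pearson([s[0] for s in sites], [s[2] for s in sites])
r_bytes = pearson([s[1] for s in sites], [s[2] for s in sites])
print(f"corr(requests, PLT) = {r_requests:.2f}")
print(f"corr(bytes, PLT)    = {r_bytes:.2f}")
```

With real data the interesting question is whether the request-count correlation really is steeper than the byte-count one, and on which connection profiles.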

BTW, the most engaging and friendly explanation of bandwidth vs latency that I've seen is John Rauser's old velocity talk:

@icoloma commented Dec 19, 2013

For requests to the same server, performance should vary wildly depending on whether SPDY/HTTP 2.0 is supported or not.

In theory ¬¬

@paulirish commented Jan 9, 2014

More from @igrigorik

I suspect we'd see strong correlations in both graphs, but that the # of requests is steeper than # of bytes.

I'm certain we would, but I don't think that would actually help us advance the conversation much - we'd just perpetuate the same confusion. Allow me to use (and overstretch) an analogy for "number of request vs. page size"... I want to be in good health. How many servings vs. pounds of food should I eat to optimize my diet? I heard that snacking is good, but what is the optimal snacking frequency/weight ratio?

I hope that sounds absurd, but that's exactly what the requests-vs-bytes discussion is focused on:

  • not all requests are made equal: some are blocking / critical, some are more expensive depending on when they're made
  • not all nutrients are made equal: some release energy faster, some slower, some are better for you than others
  • not all bytes are made equal: the first 14KB are extremely important, bytes of different content-types have different impact on performance, etc.
  • ...

Measuring performance by the KB is like measuring the effectiveness of your diet by the pound. Measuring performance by number of requests is like measuring your diet by the number of things you ate - in both cases, who cares about what you actually ate, right? Yes, if you run a large-scale analysis on these variables you'll find correlations... but making sweeping performance generalizations based on either one will inevitably miss the actual point and the insights that matter.

(Now that I thoroughly destroyed that metaphor...)

If we want to advance the conversation, we need to talk about the critical path: what resources are needed to render the page, what are the bottlenecks, how to optimize, etc. Which is to say:

and a reply from Steve Souders:

HTTP Archive shows the factors that have the highest correlation to…

  • onLoad: Speed Index, Total Requests, Image Xfer Size, Image Requests, Total Xfer Size
  • StartRender : Speed Index, CSS Requests, Max Requests on 1 Domain, CSS Xfer Size, JS Xfer Size

I hadn't looked at these in a while. They've changed and make sense:

  • Yes, I agree that correlating any time measurement with Speed Index might be redundant, but it also feels like a confirmation that Speed Index is a good metric.
  • Ilya's point about the importance of critical resources is confirmed. Most people agree that StartRender is more important than onLoad. We see CSS & JS correlated with StartRender, whereas images and total size are more correlated with onLoad.

(Note that these tests are all done with IE9 via WebPagetest.)
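On the "first 14KB" figure in Ilya's list above: the back-of-the-envelope behind it is TCP's initial congestion window. With the RFC 6928 initial window of 10 segments and a typical ~1460-byte MSS, roughly 14KB can be sent in the very first round trip, before any ACKs come back — which is why those bytes are so much cheaper than the rest:

```python
# Back-of-the-envelope for the "first 14KB" rule of thumb.
INITCWND_SEGMENTS = 10  # RFC 6928 initial congestion window
MSS_BYTES = 1460        # typical MSS on Ethernet-sized MTUs

first_rtt_bytes = INITCWND_SEGMENTS * MSS_BYTES
print(f"~{first_rtt_bytes / 1024:.1f} KB deliverable in the first round trip")
```

(Older stacks shipped an initial window of 4 segments or fewer, so the exact figure depends on the server; 14KB assumes the modern default.)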

@mathiasbynens commented Jan 9, 2014

My old post, _Inline <script> and <style> vs. external .js and .css — what’s the size threshold?_, has some insights from Zoompf and @getify. It also says this:

Guy Podjarny wrote _Why inlining everything is not always the answer_, in which he concludes that “[t]he HTTP overhead of a request & response is often ~1 KB, so files smaller than that should definitely be inlined” and also that “testing shows you should almost never inline files bigger than 4 KB”.
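Podjarny's two thresholds could be folded into a build-time heuristic along these lines — a hypothetical helper, not anything from his post, with the 1KB/4KB cut-offs taken from the quotes above:

```python
# Hypothetical decision rule applying Guy Podjarny's rules of thumb:
# inline below ~1KB (request/response overhead dominates), keep files
# above ~4KB external; the range in between needs measurement.
INLINE_ALWAYS_BYTES = 1 * 1024
INLINE_NEVER_BYTES = 4 * 1024

def inline_recommendation(size_bytes):
    if size_bytes < INLINE_ALWAYS_BYTES:
        return "inline"
    if size_bytes > INLINE_NEVER_BYTES:
        return "external"
    return "measure"  # grey zone: depends on caching and cross-page reuse

print(inline_recommendation(600))    # tiny snippet
print(inline_recommendation(2048))   # grey zone
print(inline_recommendation(10240))  # large file
```

The grey zone matters: an externally cached file that is reused across many pages can beat inlining even below 4KB, so the thresholds are a starting point, not a rule.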

@getify commented Jan 9, 2014

For posterity's sake: the research I did into this 2-3 years ago, using a variety of sites I built/managed as small-scale test beds, found that the "magic threshold" for loading concatenated JS was around 125kb (it was quite a bit higher on mobile, 300kb+, IIRC).

Meaning: if a single concatenated JS file (regardless of how many small constituent files were concatenated into it!) loaded during initial page load, either dynamically or via <script>, ended up over the 125kb mark, in general you might see some improvement in page-load times (as much as 5-10%) by chunking the single file into two roughly equal pieces and loading them in parallel.

I found this worked for a maximum of 2-3 chunks of the concatenated file, and only if all chunks were on the same host (no extra DNS lookup), and only if keep-alive was enabled on the server.
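The heuristic described in the last two paragraphs could be sketched like this — a hypothetical planner, with the 125kb threshold and the 2-3 chunk cap taken from the comment above, and the same-host/keep-alive conditions assumed to hold:

```python
# Hypothetical sketch of getify's chunking heuristic: split a concatenated
# bundle into 2-3 roughly equal chunks once it exceeds ~125KB. Assumes all
# chunks are served from the same host with keep-alive enabled.
THRESHOLD_KB = 125
MAX_CHUNKS = 3

def plan_chunks(bundle_kb):
    """Return the sizes (in KB) of the chunks to load in parallel."""
    if bundle_kb <= THRESHOLD_KB:
        return [bundle_kb]  # below threshold: a single file is fine
    n = min(MAX_CHUNKS, -(-bundle_kb // THRESHOLD_KB))  # ceiling division
    base, extra = divmod(bundle_kb, n)
    return [base + (1 if i < extra else 0) for i in range(n)]

print(plan_chunks(100))  # [100]
print(plan_chunks(200))  # [100, 100]
print(plan_chunks(500))  # capped at three chunks
```

Whether splitting actually helps on any given site still needs measurement, per the caveats above.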

I tried, in vain, to get such a test run at a larger scale, something like what the big internet players could afford to run, but the bandwidth to run thousands of tests (needed to reduce the margin of error in statistical variance) was far too cost-prohibitive for me.

I still wish someone with a few dozen TBs of free bandwidth would conduct a larger-scale test and see if they could duplicate any such results.
