script loading solution #28

Closed
paulirish opened this Issue Aug 10, 2010 · 132 comments

@paulirish
Member

paulirish commented Aug 10, 2010




# This issue thread is now closed.

## It was fun, but the conversations have moved elsewhere for now. Thanks

### In appreciation of the funtimes we had, @rmurphey made us a happy word cloud of the thread.

Enjoy.





Via LABjs or Require.

My "boilerplate" load.js file has LABjs inlined in it, and then uses it to load jQuery, GA, and one site JS file. If it helps, I have an integrated RequireJS+jQuery in one file: http://bit.ly/dAiqEG ;)

Also, how does this play into the expectation of a build script that concatenates and minifies all scripts? Should script loading be an option?
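For concreteness, a load.js along those lines can be sketched like this. This assumes LABjs (`$LAB`) is inlined above it; the file names and jQuery version are illustrative, not H5BP's actual layout:

```javascript
// Sketch of a load.js: LABjs ($LAB) is assumed to be inlined above this
// code; file names and the jQuery version are illustrative.
function loadSiteScripts() {
  return $LAB
    .script('//ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js')
    .wait() // jQuery must execute before the scripts that depend on it
    .script('js/ga.js')    // analytics snippet
    .script('js/site.js')  // the one per-site file
    .wait(function () {
      // everything above has loaded and executed, in order
    });
}
```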

@paulirish


Member

paulirish commented Aug 10, 2010

kyle: "@paul_irish i don't agree. http://bit.ly/9IfMMN cacheability (external CDN's), parallel downloading, script change-volatility..."

@paulirish


Member

paulirish commented Aug 10, 2010

james burke: "@paul_irish @fearphage @getify RequireJS has build tool to do script bundling/minifying, so can have best of both: dynamic and prebuilt"

@3rd-Eden

3rd-Eden commented Nov 15, 2010

The easiest way for developers to get started with script loading would probably be using LABjs, because it already uses a chaining syntax that a lot of jQuery users are familiar with.

If they are building big enterprise apps they can always migrate to require.js if needed.


@shichuan


Member

shichuan commented Dec 17, 2010

Currently there are three main script loaders:

  1. HeadJS
  2. ControlJS
  3. LABjs

Whether to use one at all, and which one, is kinda debatable: http://blog.getify.com/2010/12/on-script-loaders/
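Their calling styles differ; for comparison with the LABjs chain shown earlier, a minimal HeadJS-style call looks roughly like this (head.min.js is assumed to already be on the page, and the paths are illustrative):

```javascript
// Minimal HeadJS-style usage: fetch several files in parallel, execute
// them in order, then run a callback (paths are illustrative).
function loadWithHead() {
  head.js(
    'js/libs/jquery.min.js',
    'js/plugins.js',
    'js/script.js',
    function () {
      // all three scripts have loaded and run
    }
  );
}
```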

@X4


X4 commented Dec 24, 2010

There are also RequireJS and EnhanceJS, just so you know the alternatives to HeadJS, ControlJS, and LABjs. Even Yahoo and Google offer something similar.

@scottwade

scottwade commented Feb 22, 2011

With the release of jQuery 1.5 and deferreds (http://www.erichynds.com/jquery/using-deferreds-in-jquery/), Boris Moore is utilizing them in DeferJS, a new script loader project: https://github.com/BorisMoore/DeferJS

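The deferred-based pattern that article describes can be sketched like this, assuming jQuery 1.5+ (where `$.getScript` returns a promise-like object); the file names are illustrative:

```javascript
// Aggregate several script requests with $.when: in jQuery 1.5+,
// $.getScript returns a deferred (file names are illustrative).
function loadWithDeferreds(done) {
  return $.when(
    $.getScript('js/plugins.js'),
    $.getScript('js/script.js')
  ).done(done);
}
```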

@stereobooster

stereobooster commented Feb 23, 2011

By default, script loading blocks all other downloads, so downloading Modernizr in the head is bad. Inlining the loader makes sense, because loaders can then download scripts in parallel and in a non-blocking mode. For example, if you do not need all Modernizr features, you can inline head.min.js, which is only 6KB, or a custom build of Modernizr (http://modernizr.github.com/Modernizr/2.0-beta/). Inlining CSS sometimes makes sense too. Google uses inlining; they inline CSS, JS, and empty 1x1 GIFs via data URIs.


@peterbraden

peterbraden commented Feb 23, 2011

LABjs is becoming pretty widely used and is a good solution; it can also be included asynchronously, so it doesn't need to block.

http://blog.getify.com/2010/12/on-script-loaders/ is by the author.


@paulirish


Member

paulirish commented Mar 6, 2011

http://yepnopejs.com/ just went 1.0 and doesn't break in new WebKit, unlike LAB and head.js. Script loading is hard.

yepnope is also integrated into Modernizr as Modernizr.load: http://modernizr.github.com/Modernizr/2.0-beta/

So we'll probably have a script loader in h5bp by way of Modernizr.load pretty soon.

I don't think it'll make 1.0, but once I take Modernizr up to 1.8 we'll toss that into h5bp 1.1. Yeeeah
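The CDN-with-local-fallback pattern for jQuery referred to here looks roughly like this in yepnope; the version number and local path are illustrative:

```javascript
// yepnope CDN fallback sketch: request jQuery from the CDN, and if
// window.jQuery is still missing afterwards, load the local copy.
// (Version number and local path are illustrative.)
function loadJQueryWithFallback() {
  yepnope([{
    load: '//ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js',
    complete: function () {
      if (!window.jQuery) {
        // CDN failed: fall back to the local copy
        yepnope('js/libs/jquery-1.5.1.min.js');
      }
    }
  }]);
}
```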

@hokapoka

hokapoka commented Mar 13, 2011

Hi Paul,

I'm porting an existing site to use your H5BP and I want to use the yepnope.js script loader. It's really nice to see all the bits and bobs put together as you have done.

What would you recommend using at the moment?

  1. Include yepnope.js along with modernizr.js at the top of the page.
  2. Include it at the bottom of the page, to load after the HTML has finished loading.
  3. Use the beta version of modernizr.js.
  4. Concatenate yepnope.js with modernizr.js into one include.

Regardless of how best to include it, how do you recommend loading the scripts with yepnope.js?

I figure we should be doing it around here: https://github.com/paulirish/html5-boilerplate/blob/master/index.html#L52 and use yepnope to load the CDN/local copy of jQuery and our other scripts.

But do you think it's best to use an external script include, or to render a script block within the HTML which then loads the scripts via yepnope.js?

Many thanks,

Andy


@hokapoka

hokapoka commented Mar 13, 2011

Oh, and another thing.

As yepnope can load CSS as well, I would say it's best to include the main CSS as you would normally, and use yepnope only to include CSS for specific fixes.

For example, including some CSS that is only applied to older versions of IE.

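One way to sketch that conditional IE stylesheet with yepnope, assuming H5BP-style conditional classes (e.g. `lt-ie9`) on the `<html>` element; the class test and CSS path are illustrative:

```javascript
// Load an IE-only stylesheet when the <html> element carries an
// H5BP-style conditional class (class name and CSS path illustrative).
function loadIEStyles(htmlClassName) {
  yepnope({
    test: /(^|\s)lt-ie9(\s|$)/.test(htmlClassName),
    yep: 'css/ie.css'
  });
}
```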

@paulirish


Member

paulirish commented Mar 25, 2011

hokapoka,

Use the beta version of Modernizr; just include what you need (and include Modernizr.load()) and then put that at the top of the page.

The actual code for the jQuery fallback with yepnope is on http://yepnopejs.com/

And yes, I like your idea of the conditional load of IE CSS.

@paulirish


Member

paulirish commented Aug 9, 2011

tbh there is too much blind faith around script loaders w.r.t. performance, and I don't think we're ready to say THIS IS THE RIGHT WAY.

We need more research around file sizes, bandwidth, and network conditions that indicates smart recommendations on script loading, but right now the field is nascent and we'd be naive to recommend a blanket solution of script loading.

So.

Closing this ticket, and asking anyone who cares to do the comprehensive research and publishing required to make it easier for developers to make a smart choice about this one.

@paulirish paulirish closed this Aug 9, 2011

@getify


getify commented Aug 9, 2011

I have done quite a bit of research about concat vs. parallel load. I still, without reservation, recommend combining all JS into one file first, then chunking it up into 2-3 roughly equal-sized chunks, and loading those in parallel.

I'd love to be able to take my research and make it widespread and at scale, so that it was viable as "fact" in this area. The problem is I've tried and tried to find hosting bandwidth where it won't cost me lots of $$ to actually run the tests at scale, and have failed to find that hosting provision yet.

If I/we can solve the bandwidth issue for testing, I have the tests that can be run to find out if the theory of parallel loading is in fact viable (as I believe it is).

@paulirish


Member

paulirish commented Aug 9, 2011

@getify what do you need as far as a testing rig?

@SlexAxton


Contributor

SlexAxton commented Aug 9, 2011

I can do about 1.5TB more data out of my personal server than I'm currently using. I have Nginx installed and that can handle somewhere around 4 trillion quadrillion hits per microsecond. I don't feel like the technology is the barrier here.

If we're worried about locations, we can spoof higher latency, and/or find a couple other people with a little extra room on their boxes.

@getify


getify commented Aug 9, 2011

BTW, I take a little bit of issue with "blind faith".

It is easy, provable, and almost without question true that if you have an existing site loading many scripts with script-tags, using a parallel script loader (with no other changes) improves performance. This is true because even the newest browsers cannot (and never will, I don't think) unpin script loading from blocking DOM-ready. So even in best case browser loading, if there's no other benefit, drastically speeding up DOM-ready on a site is pretty much always a win (for users and UX).

Your statement is a little bit of a false premise because it assumes that we're trying to compare, for every site, parallel-loading to script-concat. Most sites on the web don't/can't actually use script-concat, so really the comparison (for them, the majority) is not quite as nuanced and complicated as you assume. If they don't/can't use script-concat (for whatever reason), the comparison is simple: parallel-loading is almost always a win over script tags.

If they are open to script-concat (or already use it), then yes, it does get a bit more nuanced/complicated to decide if parallel-loading could help or not. But script-concat is not a one-size-fits-all silver bullet solution either, so there's plenty of sites for whom parallel-loading will remain the preferred and best approach.

Just because some sites deal with the nuances/complexities of deciding between parallel-loading vs. script-concat doesn't mean that the greater (more impactful) discussion of parallel-loading vs. script tags should be lost in the mix. The former is hard to prove, but the latter is almost a given at this point.


All this is to say that, all things considered, IMHO a boilerplate should be encouraging a pattern which has the biggest impact in a positive direction. If 80% of sites on the internet today use script tags, most of which would benefit from moving from script tags to parallel-loading, then parallel-loading is a very healthy thing to suggest as a starting point for the boilerplate.

It's a much smaller (but important) subsection of those sites which can potentially get even more benefit from exploring script-concat vs. parallel-loading. But a minority use-case isn't what should be optimized for in a boilerplate.

Just my few cents.

@getify


getify commented Aug 9, 2011

@paulirish @SlexAxton --

As far as bandwidth needs, I estimated that to get 10,000 people (what I felt was needed to be an accurate sampling) to run the test once (and many people would run it several times, I'm sure), it would be about 200GB of bandwidth spent. For some people, that's a drop in the bucket. For me, 200GB of bandwidth in a few days time would be overwhelming to my server hosting costs. So, I haven't pursued scaling the tests on that reason alone.

Moreover, I have more than a dozen variations of this test that I think we need to explore. So, dozens of times of using 100-200GB of bandwidth each would be quite cost prohibitive for me to foot the bill on. I didn't want to start down that road unless I was sure that I had enough bandwidth to finish the task.

They're just static files, and the tests don't require lots of concurrent users, so there's no real concerns about traditional scaling issues like CPU, etc. Just bandwidth, that's all.

We can take the rest of the discussion of the tests offline and pursue it over email or IM. I would very much like to finally scale the tests and "settle" this issue. It's been hanging around the back of my brain for the better part of a year now.

@paulirish


Member

paulirish commented Aug 9, 2011

I can do unlimited TB on my DreamHost VPS, so this won't be a problem. Right now I'm doing 72GB/day and can handle way more. :)

@SlexAxton


Contributor

SlexAxton commented Aug 9, 2011

I agree with paul, and think there is quite a bit of misinformation about how and when script-loaders are going to be of any benefit to anyone.

Your first paragraph says it's 'easy', 'provable' and 'without question' that script loaders improve performance.

I made a similar postulation to @jashkenas a while back, and he and I put together some identical pages as best we could to try and measure performance of our best techniques. He's a fan of 100% concat, and I tried 2 different script loading techniques.

https://github.com/SlexAxton/AssetRace

The code is all there. Obviously there wasn't a huge testing audience, but the results at best showed that this script loader was about the same speed as the concat method (with your similar-sized 3-file parallel load guidelines followed), and at worst showed that script loaders varied much more and were generally slower, within a margin of error. Feel free to fork and find a solution that beats one or both of ours, even if it's just on your machine in one browser.

As for the "false premise" that h5bp assumes people concat their JS: that argument is entirely invalid, because h5bp offers a script build tool, complete with concat and minification. So the argument that parallel loading is almost always a win over multiple script tags may be true, but it's not better than what h5bp offers currently. That is the context of this discussion.

I think the worst case scenario are people taking something like yepnope or lab.js and using it as a script tag polyfill. That's absolutely going to result in slower loading (of their 19 JS and 34 CSS files), as well as introduce a slew of backwards and forwards compatibility issues that they'll be completely unaware of.

I think in the spirit of giving people the most sensible and performant and compatible default for a boilerplate, a build tool goes a lot further to ensure all three.

@getify

getify commented Aug 9, 2011

@SlexAxton

... the results at best showed that this script-loader was about the same speed as the concat method (with your similar sized 3 file parallel load guidelines followed)...

I'll happily find some time to take a look at the tests you put together. I'm sure you guys know what you're doing so I'm sure your tests are valid and correct.

OTOH, I have lots of contradictory evidence. If I had ever seen anything compelling to suggest that parallel script loading was a waste or unhelpful to the majority of sites, I would have long ago abandoned the crazy time sink that is LABjs.

I can say with 100% certainty that I have never, in 2 years of helping put LABjs out there for people, found a situation where LABjs was slower than the script tag alternative. Zero times has that ever occurred to me. There've been a few times that people said they didn't see much benefit. There've been a few times where people were loading 100+ files, and the crazy overhead of that many connections wiped out any benefits they might have otherwise seen. But I've never once had someone tell me that LABjs made their site slower.

I myself have helped 50+ different sites move from script tags to LABjs, and without fail the sites saw performance improvements right off the bat. Early on in the efforts, I took a sampling of maybe 7 or 8 sites that I had helped, and they had collectively seen an average of about 15% improvement in loading speed. For the 4 or 5 sites that I manage, I of course implemented LABjs, and immediately saw as much as 3x loading speed.

Of course, when LABjs was first put out there, it was state-of-the-art for browsers to load scripts in parallel (only a few were doing that). So the gains were huge and visible then. Now, we have almost all browsers doing parallel loading, so the gains aren't so drastic anymore.

But the one thing that is undeniable is that browsers all block the DOM-ready event for loading of script tags. They have to because of the possibility of finding document.write(). Parallel script loading is essentially saying "browser, i promise you won't have to deal with document.write, so go ahead and move forward with the page".

Take a look at the two diagrams on slide 10 of this deck:

http://www.slideshare.net/shadedecho/the-once-and-future-script-loader-v2

Compare the placement of the blue line (DOM-ready). That's a drastic improvement in perceived performance (UX), even if overall page-load time (or time to finish all assets loading) isn't any better.
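The mechanism behind this is simple: a dynamically inserted script element carries no document.write() risk, so the parser doesn't have to hold DOM-ready for it. A bare-bones sketch of what parallel loaders do under the hood (browser environment assumed):

```javascript
// Bare-bones dynamic script insertion, the core of what parallel
// loaders do: the request doesn't block parsing or DOM-ready.
function loadScriptAsync(src, onload) {
  var s = document.createElement('script');
  s.src = src;
  s.async = true; // execute as soon as it arrives
  if (onload) s.onload = onload;
  (document.head || document.getElementsByTagName('head')[0]).appendChild(s);
  return s;
}
```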

...h5bp offers a script build tool...

The faulty assumption here is that just because h5bp offers this tool, that all (or even most) users of h5bp can use it. Even if 100% of the users of h5bp do use it, that doesn't mean that if h5bp were rolled out to the long-tail of the internet, that all of them would use that concat tool. There are a bunch of other factors that can easily prevent someone from using that. There are very few reasons why someone can't move from using script tags to using a parallel script loader.

As such, parallel script loading still offers a broader appeal to the long tail of the internet. It is still easier for the majority of sites that do not use script loading optimizations to move from nothing to something, and that something offers them performance wins. Few of those long-tail sites will ever spend the effort on (or have the skill to experiment with) automated script build tools in their cheap $6/mo, mass shared hosting, non-CDN'd web hosting environments.

I think the worst case scenario are people taking something like yepnope or lab.js and using it as a script tag polyfill. That's absolutely going to result in slower loading...

I could not disagree with this statement more. LABjs is specifically designed as a script tag polyfill. And the improvements of LABjs over regular script tags (ignore script concat for the time being) are well established and have never been seriously refuted. If you have proof that most (or even a lot of) sites out there using LABjs would be better off going back to script tags, please do share.

There is absolutely no reason why parallel script loading is going to result in slower loading than what the browser could accomplish with script tags. That makes no sense. And as I established above, script tags will always block DOM-ready, where parallel script loading will not.

introduce a slew of backwards and forwards compatibility issues that they'll be completely unaware of.

What compatibility issues are you talking about? LABjs' browser support matrix has absolutely the vast majority of every web browser on the planet covered. The crazy small sliver of browsers it breaks in is far outweighed by the large number of browsers it has clear benefits in.

LABjs 1.x had a bunch of crazy hacks in it, like cache-preloading, which indeed were major concerns for breakage with browsers. LABjs 2.x has flipped that completely upside down, and now uses reliable and standardized approaches for parallel loading in all cases, only falling back to the hack for the older webkit browser. In addition, LABjs 2.x already has checks in it for feature-tests of coming-soon script loading techniques (hopefully soon to be standardized) like "real preloading".

I can't speak definitively for any other script loaders -- I know many still use hacks -- but as for LABjs, I'm bewildered by the claim that it introduces forward or backward compatibility issues, as I think this is patently a misleading claim.



getify Aug 9, 2011

to elaborate slightly on why i intend for LABjs to in fact be a script tag polyfill...

  1. older browsers clearly are WAY inferior at loading script tags compared to what parallel loading can achieve. it was in those "older browsers" (which were the latest/best when LABjs launched 2 years ago) that we saw the ~3x page-load time improvements. almost by definition, that makes LABjs a better script tag polyfill, since it brings a feature (ie, performance of parallel loading) to browsers which don't support it themselves.
  2. newer browsers are obviously a lot better. but they haven't completely obviated the benefits of script loaders. chrome as recently as v12 (they finally fixed it in v13, it seems) was still blocking image loads while script tags finished loading. even with the latest from IE, Firefox and Chrome, they all still block DOM-ready while scripts are dynamically loading, because they all still have to pessimistically assume that document.write() may be lurking.

So, for the newer browsers, LABjs is a "polyfill" in the sense that it's bringing "non-DOM-ready-blocking script loading" to the browser in a way that script tags cannot do. The only possible way you could approach doing that in modern browsers without a parallel script loader would be to use script tags with defer (async obviously won't work since it doesn't preserve order). However, defer has a number of quirks to it, and its support is not widespread enough to be a viable solution (the fallback for non-defer is bad performance). So you could say that, in the very most basic case, LABjs is a polyfill for the performance characteristics of script tag defer (although not exactly).
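For reference, the markup-level loading behaviors being compared here can be sketched as follows (the src values are placeholders, and the comments summarize spec behavior as discussed in this thread):

```html
<!-- Plain script tag: blocks parsing, and always holds up DOM-ready -->
<script src="app.js"></script>

<!-- defer: downloads in parallel, executes in document order, but only
     after the document has finished parsing -->
<script src="app.js" defer></script>

<!-- async: downloads in parallel, executes as soon as it arrives,
     in whatever order the responses come back (order NOT preserved) -->
<script src="app.js" async></script>
```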



jaubourg Aug 9, 2011

Honestly, I still think we should petition standards for a script loading object. Having to create a script tag of a different type than text/javascript to trigger the cache (or worse, use an object tag or an image object or whatever a new version of a popular browser will require) is jumping through a lot of hoops for nothing, and performance will vary depending on too many variables. I can understand why we still load stylesheets using DOM node insertion (but that's only because of order), but when it comes to scripts, I think it doesn't make sense at all anymore (I wish Google would stop using document.write in most of their scripts, but that's another story entirely).

Also, I think we're missing the biggest point regarding script loaders here: being able to load js code on-demand rather than loading everything up-front (even with everything in cache, parsing and initializing takes time, and it can get pretty ugly with a non-trivial amount of concatenated scripts). Having some wait-time after a UI interaction is much less of a problem than having the browser "hang" even a little at start-up (DOM may be ready all right, but what good is it if the code to enhance the page and add interaction hasn't been executed yet: ever noticed how some sites load immediately and then something clunky occurs?).

So strict performance measurement is all fine and dandy, but I still think perceived performance is the ultimate goal... and is sadly far less easy to estimate/optimize/compute.



masondesu Aug 9, 2011

This is intense.



getify Aug 9, 2011

@jaubourg--

Honestly, I still think we should petition standards for a script loading object.

There is much petitioning going on regarding how the standards/specs and browsers can give us better script loading tech. First big win in this category in years was the "ordered async" (async=false) that was adopted back in Feb and is now in every major current-release browser (exception: Opera coming very soon, and IE10p2 has it).

The next debate, which I'm currently in on-going discussions with Ian Hickson about, is what I call "real preloading". In my opinion, "real preloading" (which IE already supports since v4, btw) would be the nearest thing to a "silver bullet" that would solve nearly all script loading scenarios rather trivially. I am still quite optimistic that we'll see something like this standardized.

See this wiki for more info: http://wiki.whatwg.org/wiki/Script_Execution_Control

Having to create a script tag of a different type than text/javascript to trigger the cache (or worse, use an object tag or an image object or whatever a new version of a popular browser will require)

This is called "cache preloading", and it's an admitted ugly and horrible hack. LABjs way de-emphasizes this now as of v2 (only uses it as a fallback for older webkit). Other script loaders unfortunately still use it as their primary loading mechanism. But 90% of the need for "cache preloading" can be solved with "ordered async", which is standardized and isn't a hack, so well-behaved script loaders should be preferring that over "cache preloading" now.

So, I agree that "cache preloading" sucks, but there's much better ways to use document.createElement("script") which don't involve such hacks, so I disagree that this is an argument against continuing to rely on the browser Script element for script loading. If we can get "real preloading", the Script element will be everything we need it to be. I honestly believe that.
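The "ordered async" technique referenced above can be sketched in a few lines (illustrative only; the file names are placeholders). Dynamically inserted scripts default to unordered async execution; setting async to false opts into downloading in parallel while executing in insertion order:

```html
<script>
  // "Ordered async": scripts download in parallel but execute in the
  // order they were appended, without blocking DOM-ready.
  ["jquery.js", "plugin.js", "app.js"].forEach(function (src) {
    var s = document.createElement("script");
    s.src = src;
    s.async = false; // preserve execution order
    document.head.appendChild(s);
  });
</script>
```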

I think we're missing the biggest point regarding script loaders here: to be able to load js code on-demand

Very much agree that's an important benefit that script loaders bring. But it's sort of a moot argument in this thread, because the "script concat" folks simply cannot, without script loading, solve the use-case, so it makes no sense to "compare" the two. You can say as a "script concat" proponent "fine, we don't care about that use case", but you can't say "we can serve that use-case better using XYZ".

Perceived performance is huge and important, I agree. On-demand loading is a huge part of making that happen. On-demand loading will also improve real actual performance (not just perception) because it tends to lead to less actually being downloaded if you only download what's needed (few page visits require 100% of the code you've written).

Perceived performance is also why I advocate the DOM-ready argument above. Because how quickly a user "feels" like they can interact with a page is very important to how quick they think the page is (regardless of how fast it really loaded). That's a fact established by lots of user research.



aaronpeters Aug 9, 2011

Gotta love the passionate, long comments by @getify
Kyle ...

If I can contribute in any way to the research, I would love to.
Bandwidth (costs) doesn't seem to be the problem, so @getify, what do you propose on moving forward?
Do not hesitate to contact me via email (aaron [at] aaronpeters [dot] or twitter (@aaronpeters)



jaubourg Aug 9, 2011

@kyle

Yep, I followed the script tag "enhancements" discussion regarding preloading and I just don't buy the "add yet another attribute on the script tag" approach as a viable approach. I've seen what it did to the xhr spec: a lot of complexity in regard to the little benefit we get in the end.

What's clear is that we pretty much only need the preloading behaviour when doing dynamic insertion (ie. doing so in javascript already) so why on earth should we still use script tag injection? It's not like we keep the tag there or use it as a DOM node: it's just a means to an end that has nothing to do with document structure.

I'd be much more comfortable with something along those lines:

window.loadScript( url, function( scriptObject ) {
    if ( !scriptObject.error ) {
        scriptObject.run();
    }
});

This would do wonders. It's easy enough to "join" multiple script loading events and then run those scripts in whatever order is necessary. It also doesn't imply the presence of a DOM, which makes it even more generic. I wish we would get away from script tag injection altogether asap. Besides, it's easy enough to polyfill this using the tricks we all know. It's also far less of a burden than a complete require system (but can be a building brick for a require system that is then not limited to browsers).

That being said, I agree 100% with you on perceived performance, I just wanted to point it out because the "let's compact it all together" mantra is quickly becoming some kind of belief that blurs things far too much for my taste ;)
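The loadScript API above is hypothetical (it exists in no browser), but the "join multiple loading events, then run in order" layer it enables is plain callback bookkeeping. A sketch against a stubbed loader, with all names illustrative:

```javascript
// Ordering layer over a hypothetical loadScript(url, cb) API:
// request all URLs at once, but call run() strictly in list order,
// regardless of which "network response" arrives first.
function loadScripts(loadScript, urls, done) {
  var results = new Array(urls.length);
  var nextToRun = 0;
  var ran = [];

  urls.forEach(function (url, i) {
    loadScript(url, function (scriptObject) {
      results[i] = scriptObject;
      // Run every script that has loaded and whose predecessors ran.
      while (nextToRun < urls.length && results[nextToRun]) {
        if (!results[nextToRun].error) {
          results[nextToRun].run();
        }
        ran.push(urls[nextToRun]);
        nextToRun++;
      }
      if (nextToRun === urls.length) done(ran);
    });
  });
}

// Fake loader: completes out of order (c, then a, then b) to show
// that execution order is still a, b, c.
var pending = {};
function fakeLoadScript(url, cb) {
  pending[url] = cb;
}
var executed = [];
loadScripts(fakeLoadScript, ["a.js", "b.js", "c.js"], function (order) {
  executed = order;
});
["c.js", "a.js", "b.js"].forEach(function (url) {
  pending[url]({ error: false, run: function () { /* script body */ } });
});
// executed is now ["a.js", "b.js", "c.js"]
```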



paulirish Aug 9, 2011


fwiw, defer is supported in IE4+, Chrome, Safari, and FF 3.5+. Not supported in Opera.

So that means.... 98.5% of users have script@defer support already.



paulirish Aug 9, 2011


@getify

However, defer has a number of quirks to it,

details plz? i haven't seen anything about this



aaronpeters Aug 9, 2011

Do scripts with defer execute before or after DOM ready event fires?

Is execution order preserved in all browsers?

How about exec order and coupling external with inline scripts?



getify Aug 9, 2011

@paulirish--

...98.5% of users have script@defer support already.

support may be there in that many browsers, but that doesn't mean it's reliable in that many browsers. that's what i meant. (see below)

However, defer has a number of quirks to it,

details plz? i haven't seen anything about this

Lemme see... IIRC:

  1. support of defer on dynamic script elements isn't defined or supported in any browser... only works for script tags in the markup. this means it's completely useless for the "on-demand" or "lazy-loading" techniques and use-cases.
  2. i believe there was a case where in some browsers defer'd scripts would start executing immediately before DOM-ready was to fire, and in others, it happened immediately after DOM-ready fired. Will need to do more digging for more specifics on that.
  3. defer used on a script tag referencing an external resource behaved differently than defer specified on a script tag with inline code in it. That is, it couldn't be guaranteed to work to defer both types of scripts and have them still run in the correct order.
  4. defer on a script tag written out by a document.write() statement differed from a script tag in markup with defer.

I don't have a ton of details ready at my fingertips on these issues. I recall about 2 years ago (before LABjs) trying to use defer, and running into enough of them in cross-browser testing that I basically set it aside and haven't really re-visited it much since.


I should also point out that defer is not really the same thing as what LABjs (and other parallel loaders) provide. I said that above with the caveat that it's only sorta like it. In fact, what parallel script loading provides (at least, for LABjs' part), is "ordered async", which has absolutely no way to be achieved only through markup.

The difference between "ordered async" and "defer" is that "ordered async" will still start executing as soon as the first requested script is finished loading, whereas "defer" will wait until DOM-ready before starting execution. For a simple page with little markup and no other blocking markup calls (like other script tags), this difference is small. But for a page with lots of resources, when scripts are allowed to start executing can be drastically different.

So, I'd honestly like to not get too much off on the tangent of defer, because in reality it's not a great comparison to what parallel script loading provides. It was just the closest example in markup-only that I could use to describe the execution ordered behavior I was getting at. I probably shouldn't have even brought defer up -- just muddies the discussion.

Let me just rephrase from above: "For modern browsers, LABjs is a kind of 'polyfill' for 'ordered async' behavior, which is not possible to opt for in markup-only in any browser."



aaronpeters Aug 9, 2011

I like "ordered async", that's a good phrase.

Kyle > afaik, scripts with defer will execute before onload, even before domready.
Scripts with async attribute will execute asap, and always before onload, but not necessarily before domready



getify Aug 9, 2011

@aaronpeters--
I think you may be slightly off track. Here's how I understand it:

async scripts (whether in markup or dynamically created) will execute ASAP, meaning any time before or after DOM-ready. In other words, async scripts should wait on nothing (except the JS engine availability itself). However, if they are requested before window.onload, then in almost all browsers they will "hold up" the window.onload event until they load and execute. I think there was a documented case where the async scripts didn't hold up window.onload, just not remembering the exact details.

defer on the other hand specifically means: wait until after DOM-ready. Moreover, there's a "queue" of all scripts with defer set on them, and that queue is not processed until after DOM-ready. This means they should all execute strictly after DOM-ready (or, rather, after the DOM is ready and finished parsing, to be exact). But they may be delayed even further (if loading is going slowly). They should hold up window.onload, though. I just recall from vague past memory that in some versions of IE the actual practice of this theory was a bit fuzzy.


@jaubourg

jaubourg Aug 10, 2011

@getify

Didn't want to derail this thread even more so I posted my thought on script preloading and your proposal on the WHATWG page here: http://jaubourg.net/driving-a-nail-with-a-screwdriver-the-way-web

@mathiasbynens

mathiasbynens Aug 10, 2011

Member

async scripts (whether in markup or dynamically created) will execute ASAP, meaning any time before or after DOM-ready. In other words, async scripts should wait on nothing (except the JS engine availability itself). However, if they are requested before window.onload, then in almost all browsers they will "hold up" the window.onload event until they load and execute.

This is probably easier to understand once you realize JavaScript is single threaded. (I know it took me a while…)

Similarly, if you use setTimeout(fn, 0) to download resources, and they enter the download queue before onload fires, then loading these resources will (still) delay onload.

I think there was a documented case where the async scripts didn't hold up window.onload, just not remembering the exact details.

I’d love to get more info on this. Please remember! :)


@artzstudio

artzstudio Aug 10, 2011

Yay script loaders!

A problem I have had implementing them across AOL's network of sites is dealing with race conditions. For example, loading jQuery asynchronously in the head, then, say, a jQuery plugin midway through the document, delivered asynchronously inside a blog post.

Thusly, I started my own script loader science project (Boot.getJS) to deal with this. The idea is to download all scripts in parallel and execute them in order no matter what, as soon as possible. It also supports deferring to ready or load, and caching of scripts. Most ideas are borrowed (stolen) from people on this thread, so thanks guys. :)

Since you were discussing benchmarks I figured I'd share a test page I created to understand differences in performance, syntax and behavior of the various script loaders out there, check it out here:

http://artzstudio.com/files/Boot/test/benchmarks/script.html

To see how various loaders behave, clear cache, and watch the network requests and the final time as well as the order that the scripts execute in.


@aaronpeters

aaronpeters Aug 10, 2011

Dave (@artzstudio), txs for sharing your thoughts and the link to your test page.

Question: why do you load LABjs on the '<script> tag in head' page? That seems wrong.


@aaronpeters

aaronpeters Aug 10, 2011

@artzstudio also, you are using an old version of LABjs. Is that intentional? If so, why?


@artzstudio

artzstudio Aug 10, 2011

@aaronpeters At AOL we have scripts like Omniture and ad code (and more) that need to go in the head, so that's where the loader library goes in our use case. Also, when scripts are at the bottom, there's a FOUC issue in some of our widgets, so the sooner dependencies load (like jQuery) the better.

It was not intentional, this test is a couple months old. I'll update the libraries when I get a chance.


@kornelski

kornelski Aug 10, 2011

Using connection Keep-Alive, it's possible you can get 2 or 3 simultaneous connections (without 2-3 full connection overhead penalties)

HTTP doesn't mux/interleave responses, so you can't have parallel downloads without opening multiple connections first. The ideal case of persistent and pipelined connection is equal to contiguous download of a single file (+ few headers).


@getify

getify Aug 10, 2011

@pornel--

I have seen first-hand and validated that browsers can open up multiple connections in parallel to a single server, where with Connection Keep-Alive in play, the overhead for the second and third connections is drastically less than for the first. That is the effect I'm talking about.


@jashkenas

jashkenas Aug 10, 2011

@getify Fantastic, I think we've reached some sort of consensus. To refresh your memory:

I can anticipate a counterargument about loading your scripts in bits and pieces ...
but that's entirely orthogonal to the script loading technique, so please, leave it out
of the discussion.

Yes, I agree that loading your volatile scripts in a different JS file than your permanent scripts is great. Loading the script that is only needed for a specific page, only on that specific page, is similarly great.

So if I'm a web developer and I've got a page with a bunch of JavaScripts, what should I do? Use LABjs, or concatenate my permanent scripts into one file, and my volatile scripts into another, and load both at the bottom of the body tag with <script defer="true">?

Why should I subject my app to caching headaches, browser incompatibilities, race-against-the-images-on-the-page, and the rest of the trouble that a script loader brings along?

If the entire premise of using a script loader for performance is that it's easier and simpler than using two script tags ... I've got a bridge in Brooklyn to sell you.


@rkh

rkh Aug 10, 2011

@getify having implemented a web server more than once: keep-alive does not affect concurrent requests in any way and only reduces the costs of subsequent requests. A split body with two subsequent requests with keep-alive is still more expensive than a single request. Having two concurrent requests for the two body parts will probably perform better, but keep in mind that the browser will only open a limited number of concurrent requests (depending on the browser and config, something around 5, I think), which is fine if all you do is load your three JS files, but is, as @jashkenas pointed out more than once, an issue if you have other assets, like images or CSS files.


@getify

getify Aug 10, 2011

@jashkenas-

So if I'm a web developer and I've got a page with a bunch of JavaScripts, what should I do? Use LABjs, or concatenate my permanent scripts into one file, and my volatile scripts into another, and load both at the bottom of the body tag with <script defer="true">?

TL;DR: both

Firstly, a lot of sites on the web are assembled by CMS's, which means that having inline script blocks strewn throughout the page is common, and VERY difficult to solve maintenance-wise by just saying "move all that code into one file". So, I think the premise that most sites can get away without having any "inline code" to run after another external script loads and executes is unlikely, at best.

Secondly, I've proven that defer acts differently with respect to DOMContentLoaded in various browsers. In some browsers, the scripts go before DOM-ready, in other browsers, they go after DOM-ready. If you have code in your scripts which relies on happening before or after DOM-ready, using defer can be a problem. It's especially true that it's a sensitive area with a lot of misunderstanding and confusion, so it quickly becomes "this is not a simple straightforward solution". It takes a lot more thought.

Thirdly, I think for a lot of sites, changing their markup to use $LAB.script() instead of <script> is a lot easier than explaining to them how to install some automated (or manual) build process on their server. Especially if that site is on shared hosting (most of the web is), and they don't really control much of their server, asking them to figure out build processes so that their code maintainability is not lost is... well... non-trivial.

Can these things be overcome? Yep. Of course they can. But they take a lot of work. In some cases (like the DOM-ready thing) they may take actually painstakingly adjusting your code. It takes a person with dedicated efforts and lots of expertise and passion in this area to sort it all out.

By contrast, they can get a "quick win" dropping in LABjs instead of the <script> tag. There's little that they have to think about (except document.write()). Most of the time, "it just works". And most of the time, they see an immediate speed increase in page load. For most sites, that's a big win.

So, to answer your question, I'd say, as I said before, do both... First drop in LABjs, see some immediate speed increases. Now, consider strongly the benefits of using a build process to move you from 15 files down to 2 files (1 file chunked in half). When you do that (if you do that, which as I said, most won't), you can ditch LABjs if you really want. But there's no real harm (it's small and caches well, even on mobile). It'll continue to load your two file chunks well, AND it'll do so without the quirks that defer might cause.

Also, having LABjs already there makes it stupidly simple for you to do step 3, which is to start figuring out what code you can "lazy/on-demand load" later. You can't do that without a script loader. Having LABjs already there and familiar means you don't have to worry about how to load that on-demand script at all -- it's already figured out.


@getify

getify Aug 10, 2011

@rkh--

I had it demonstrated to me (specifically in Apache, with toggling the Keep-Alive setting) how multiple parallel requests were affected (positively when Keep-Alive was there). I'm no expert in this area, so arguing the exact details of how it works or not is beyond me. I can say that the timing of request #2 was less than the timing of request #1, when Keep-Alive was there. How the browser and server did that, I can only make partially-informed guesses at.

A split body with two subsequent requests with keep-alive is still more expensive than a single request.

I never argued that the second request is free. I argued that the second request is not as expensive as the first request. So, if we assume that at least one request must be made, having a second request in parallel is NOT the same thing as having two completely independent connections to the same server, in terms of overhead or time costs.

By way of estimate, it seemed like Request #1 was X to service, and #2 in parallel with Keep-Alive present was 0.7X. It was explained to me that the server was able to utilize some of the existing connection overhead in servicing the second request, thereby making it a little cheaper. With Keep-Alive turned off, the second request had no such measurable decrease.


All this discussion is a seriously deep rabbit hole though. I'm no server expert. I don't have to be. I can only explain that I have actually seen (and created tests around) this exact topic: take a single 100k file's load time vs. loading two halves of that same file in parallel, and see whether the second test is any measurable amount faster. As I've said, I saw somewhere between 15-25% faster with the chunked-in-parallel test. How it did that, and managed to somehow overcome the awful "OMG HTTP RESPONSE OVERHEAD IS TERRIBLE" effect and still benefit from two parallel loadings, I guess I'm not qualified to scientifically prove. But it definitely did by observation.


@savetheclocktower

savetheclocktower Aug 10, 2011

Christ, you people type fast. I finish reading, reload the page, and there are like nine more comments.

I need help. I've tried to pinpoint exactly where in this thread we went from discussing what works best for a boilerplate HTML file to discussing whether script loaders are, in all cases, snake oil.

@getify, you should certainly defend LABjs and respond to specific criticisms made by others in the thread, but (excepting @jashkenas) I think those who criticize LABjs are doing so in order to demonstrate that it's not the best solution for a boilerplate. You argue that it's easier to convert legacy pages to LABjs than to script[defer], and that might be true, but how does that apply to a boilerplate HTML file (which is, by definition, starting from scratch)?

You say that it's designed for people who don't have fancy build processes, but you also seem to advocate concatenating, splitting into equal-sized chunks, and loading in parallel. Isn't that a task for a build script? Again, it seems like the wrong choice for a boilerplate designed to give the user intelligent defaults. If a user wants that purported 20-30% speed increase, she can choose to upgrade later over what the boilerplate offers, but that's not a trivial task.

Having said all that, if you guys want to carry on with the general topic ("Script Loaders: Valuable Tool or Snake Oil?"), I'll happily hang around and make some popcorn.


@kornelski

kornelski Aug 10, 2011

@getify: I can agree that 2nd and 3rd connections might be opened faster than the first – the first one waits for DNS and possibly routing the very first packet to the server is a bit slower than routing the rest alongside the same path. In HTTPS SSL session cache helps subsequent connections a lot.

However, I don't see relevance of Keep-Alive in this situation. Subsequent requests on the same connection are started faster with Keep-Alive, but those requests are serial within the connection.


@jashkenas

jashkenas Aug 10, 2011

I'm about done here -- I just reached my "mad as hell and not going to take it anymore" moment with respect to script loaders.

That said, I think that this thread, for a flame fest, has actually been quite productive. If LABjs wants to stake out a claim for the hapless and incompetent web sites, and leave people who actually want to have their sites load fast alone, it's a great step forward.


@peterbraden


dude, chill

@getify

getify Aug 10, 2011

@savetheclocktower--

Fair questions.

I didn't start my participation in this thread strongly advocating for LABjs (or any script loader) to be included in h5bp. I think it's useful (see below), but it wasn't a major concern of mine that I was losing sleep over. Clearly, this thread has morphed into an all out attack on everything that is "script loading". That is, obviously, something I care a bit more about.

You say that it's designed for people who don't have fancy build processes, but you also seem to advocate concatenating, splitting into equal-sized chunks, and loading in parallel. Isn't that a task for a build script?

I advocate first for moving all your dozens of script tags to a parallel script loader like LABjs. This takes nothing more than the ability to adjust your markup. That's a far easier/less intimidating step than telling a mom&pop site to use an automated node.js-based build system, for instance.

And for those who CAN do builds of their files, I advocate that LABjs still has benefit, because it can help you load those chunks in parallel. If you flat out disagree that chunks are in any way useful, then you won't see any reason to use LABjs over defer. But if you can see why chunking may be helpful, it should then follow that a script loader may also assist in that process.

Again, it seems like the wrong choice for a boilerplate designed to give the user intelligent defaults.

The only reason I think a script loader (specifically one which is designed, like LABjs, to have a one-to-one mapping between script tags and script() calls) has a benefit in a boilerplate is that in a boilerplate, you often see one instance of something (like a tag), and your tendency in building out your page is to just copy-n-paste duplicate that as many times as you need it. So, if you have a poorly performing pattern (script tag) in the boilerplate, people's tendency will be to duplicate the script tag a dozen times. I think, on average, if they instead duplicated the $LAB.script() call a bunch of times, there's a decent chance their performance won't be quite as bad as it would have been.

That's the only reason I started participating in this thread. It's the only reason I took issue with @paulirish's "blind faith" comment WAY above here in the thread.


@paulirish

paulirish Aug 10, 2011

Member

Sooooooooooo yeah.


I think it's clear this discussion has moved on way past whether a script loader is appropriate for the h5bp project. But that's good, as this topic is worth exploring.


regardless, I'm very interested in reproducible test cases alongside test results.

It also seems the spec for @defer was written to protect some of the erratic behavior that browsers deliver along with it. That behavior should be documented. I can help migrate it to the MDC when it's ready.

We need straight up documentation on these behaviors that captures all browsers, different connection types and network effects. I'm not sure if a test rig should use cuzillion or assetrace, but that can be determined.

I've set up a ticket to gather some interest in that paulirish/lazyweb-requests#42

Join me over there if you're into the superfun tasks of webperf research and documenting evidence.

Let's consider this thread closed, gentlemen.


@millermedeiros


millermedeiros Aug 10, 2011

Lazy loading isn't the core benefit of AMD modules, as @jrburke described in his comments. The main reason that I choose to use AMD modules as much as I can is because it improves code structure. It keeps the source files small and concise - easier to develop and maintain - the same way that using css @import during dev and running an automated build to combine stylesheets is also recommended for large projects...

I feel that this post I wrote last year fits the subject: The performance dogma - It's not all about performance and make sure you aren't wasting your time "optimizing" something that doesn't make any real difference...

And I'm with @SlexAxton, I want AMD but simple script tags are probably enough for most people. Maybe a valid approach would be to add a new setting to pick AMD project and run RequireJS optimizer instead of the concat tasks (RequireJS optimizer Ant task), that would be pretty cool and probably not that hard to implement.
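For readers unfamiliar with AMD, the structural win described here looks roughly like this. A sketch against the RequireJS loader; the module and file names are invented:

```javascript
// js/cart.js — one small, self-describing module per file
define(["jquery"], function ($) {
  return {
    add: function (item) {
      // dependencies are explicit parameters, not globals
      $("#cart-count").text(item.id);
    }
  };
});

// js/main.js — the entry point named by <script data-main="js/main" src="require.js">
require(["cart"], function (cart) {
  cart.add({ id: 1 });
});
```

During development each module loads as its own file; for production the RequireJS optimizer concatenates them, which is the build-setting idea floated above.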


@benatkin


benatkin Aug 10, 2011

> Let's consider this thread closed, gentlemen.

@paulirish What about including AMD support? Where should we discuss that?


@paulirish


paulirish Aug 10, 2011

Member

@benatkin open a new ticket bro.


@benatkin


benatkin Aug 10, 2011

@paulirish OK, thanks. @jrburke would you please open up a new ticket to continue the discussion you started? I think I'll add a comment, but I don't think I can lay out a case for AMD support as well as you can.


@screenm0nkey


screenm0nkey Aug 10, 2011

Entertaining and informative. Thanks guys.


@getify


getify Aug 10, 2011

I think someone needs to start a new script loader project and call it "Issue28". :)


@GarrettS


GarrettS Aug 10, 2011

For widest compat, fast performance can be had by putting script at bottom, minify, gzip, but don't defer. At least not until browser compatibility is consistent for a few years straight.

Bottlenecks can come from ads, too much javascript, bloated HTML, too much CSS, too many iframes, too many requests, server latency, inefficient javascript. Applications that use a lot of third party libs have problems caused by not just too much javascript, but more than that, they tend to also have many other problems: mostly bloated HTML, invalid HTML, too much css, and inefficient javascript. Twitter comes right to mind, with two versions of jQuery and two onscroll handlers that cause a bouncing right column onscroll.

The kicker is that if you know what you're doing, you can avoid those problems. You don't need things like jQuery or underscore, and so your scripts are much smaller. You write clean, simple, valid HTML and CSS. Consequently, your pages load faster, the app is more flexible in terms of change, and SEO improves. And so using a script loader just adds unwarranted complexity and overhead.
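The baseline being argued for here is simply this shape of page (file names are hypothetical; the single concatenated script sits right before the closing body tag and is served minified and gzipped):

```html
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <!-- content renders before any script is fetched or executed -->
  <script src="all.min.js"></script>
</body>
</html>
```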


@BroDotJS


BroDotJS Aug 11, 2011

https://github.com/BroDotJS/AssetRage

BOOM! I close the clubs and I close the threads.


@aaronpeters


aaronpeters Aug 11, 2011

What a thread ... wow.

Imo, the discussion started in the context of the h5bp, which is intended to be a starting point for web devs.
As such, you can state that the webdev using the h5bp will actually have clean HTML, clean CSS, a good .htaccess etc and maybe even not suffer from too many images, inefficient JS, lots of crappy third party JS etc. You know, because the web dev choosing to use the high performance h5bp and by that is concerned about performance, and will pay attention to the non-h5bp stuff that goes onto the page(s).

From the thread, and in this context, I think there is unfortunately not enough evidence to draw a final conclusion.
I am with Paul on getting the research going and documenting what needs to be documented.
Count me in Paul.


@aaronpeters


aaronpeters Aug 11, 2011

Sidenote. I am not very familiar with AMD and from a first look, it seems intimidating to me, or at least not something I can pick up very easily. I think most 'ordinary' web devs will agree.
The stuff you see in the h5bp needs to have a low entry barrier, or it will not be used and uptake of h5bp may be slower than it could be without it.
I doubt something like AMD belongs in the h5bp.
Keep it simple.


@aaronpeters


aaronpeters Aug 11, 2011

And another comment ....
'Putting scripts at the bottom' and 'Concatenate JS files into a single file' has been high up on the Web Perf Best Practices list for many years. So why do >90% of the average sites out there, built by in-house developers and by the top brand agencies still have multiple script tags in the HEAD? Really, why is that?

And the other 9% have a single, concatenated JS file ... in the HEAD.
Rarely do I see a 'normal' site which is not built by some top web perf dev with one script at the bottom.

Devs keep building sites like they have been for years.
Site owners care most about design and features, so that's what the devs spend their time on.

Changing a way of working, a build system, the code ... it has to be easy, very easy, or else it won't happen.

I have worked on many sites where combining the JS in the HEAD into a single file and loading it a bottom of BODY broke the pages on the site. And then what? In most cases, it's not simply an hour work to fix that. Serious refactoring needs to take place ... and this does not happen because of the lack of knowledge and, especially, the lack of time.

(oh right, the thread is closed...)


@GarrettS


GarrettS Aug 11, 2011

We're talking about a library built on top of jQuery and Modernizr. Says it all, really. Who uses that? Oh, shit, I forget, Twitter.com, which uses two jQuerys and also has, in its source code, the following:

Line 352, Column 6: End tag div seen, but there were open elements.
Error Line 350, Column 6: Unclosed element ul.
Error Line 330, Column 6: Unclosed element ul.

And the problem with expecting the browser to error correct that is that HTML4 didn't define error correction mechanisms and so you'll end up with a who-knows-what who-knows-where. Sure, HTML5 defines error handling, but it ain't retroactive -- there's still plenty of "old" browsers out there.

And speaking of shit, anyone here had a look at jQuery ES5 shims?

BTW, do you have anything to add to that statement of yours "that the webdev using the h5bp will actually have clean HTML," aaronpeters?


@aaronpeters


aaronpeters Aug 11, 2011

@GarrettS ok, ok, I should have written "will probably have clean HTML"


@GarrettS


GarrettS Aug 11, 2011

:-D we can always hope!


@jashkenas


jashkenas Aug 16, 2011

Beating a dead horse, I know ... but it turns out that at the same time we were having this scintillating discussion, the current version of LABjs actually had a bug that caused JavaScript to execute in the wrong order in some browsers: getify/LABjs#36

Oh, the irony.


@brianleroux


brianleroux Aug 16, 2011

must. resist. posting. totally. [in]appropriate. image. for. previous. statement.... aggggh! AGONY!


@danbeam


danbeam Aug 16, 2011

My favorite part was when the dude that made dhtmlkitchen.com (currently totally messed up) started talking about markup errors.


@GarrettS


GarrettS Aug 17, 2011

That site has been transferred to Paulo Fragomeni. Yes I made it and proud of what I wrote there, as here. Go take a screenshot of your weak avatar, jackass.


@GarrettS


GarrettS Aug 17, 2011

...and after you're done with that, try to pull your head out of your ass and understand the difference between my old personal website (which is no longer maintained by me) and one that is developed by a team and financed by a profitable, multi-million dollar company (though Twitter may be worth billions AFAIK).


@masondesu


masondesu Aug 17, 2011

Glad we're keeping this classy, and on topic, guys.


@GarrettS


GarrettS Aug 17, 2011

jashkenas got the relevant bits of info out early on in this discussion.

But then there was the backlash. No! It must not be! Souders said to do it! And there was the bad advice to use defer, not caring how it fails when it fails.

And then ironically, out of nowhere, there came a claim that h5bp users would be doing things properly. And this is very ironic because this comment came after comments from its supporters who evidently produce invalid markup and use a load of third party abstraction layers (and awful ones). And after the comment about using defer.

And so what does any of this have do with dhtmlkitchen.com being down? Nothing at all, obviously. That was just a weak jab from an h5bp forker who can't stand to hear criticism.


@BroDotJS


BroDotJS Aug 17, 2011

Bros.
Dude.
Bros.

This thread is closed. Remember? You don't have to go home, but you can't flame here.


@geddski


geddski Aug 17, 2011

Hey y'all remember that one time when we made an epic thread where there were multiple debates, personal flame wars, people getting angry all over the place, an obscene image or two, and an all-around good time? Can't believe it was free. We should do that again sometime.


sengeezer pushed a commit to sengeezer/html5-boilerplate that referenced this issue Apr 16, 2012

Merge pull request #28 from bholtsclaw/patch-1
Updated the readme to reflect the availability of Java for older PPC bas...

potench added a commit to ff0000/red-boilerplate-legacy that referenced this issue Apr 19, 2012

Merge pull request #28 from ff0000/feature/simple-events
Feature/simple events
Since no one +1'd or -1'd this, is it safe to assume this is fine?

briankelleher pushed a commit to briankelleher/html5-boilerplate that referenced this issue Jun 3, 2015
