(pre-rewrite) audit #7

addyosmani opened this issue Nov 11, 2016 · 2 comments
@addyosmani commented Nov 11, 2016

Speeding up the NASA OSS frontend before we rewrote it

Audit from: September

In this audit, we're going to explore and speed up the front-end to the NASA open-source projects landing page. We noticed it was particularly slow on desktop over cable, and heavy enough to lock up Chrome on Android on a relatively fast Wi-Fi connection. We observed lots of main-thread lockup with very large chunks of work (some of the heaviest frames taking 2500ms+, when we really want this closer to 16ms):

screenshot from 2016-09-07 16 20 43

and the Network waterfall indicated there were 433 requests in flight. That's one busy request party.

screenshot from 2016-09-07 16 22 58

There are four primary causes of slow-down in this app:

  • Lots of Angular 1 code, where switching to one-time bindings and avoiding XHRs for partials would help
  • Heavy use of embeds loading third-party content (GitHub buttons) for every OSS project entry on the page (290+)
  • Lots of synchronous font loading (5+ fonts being pulled in from Google Fonts)
  • Heavy image assets


Let's begin with some low-hanging fruit. We noticed the site had a few larger image assets. This was trivial to fix, and a pass through ImageOptim saved us 20-80% per asset (333KB overall) on our page weight:

Next up we noticed each repo entry on their landing page had a GitHub fork button embed on it:

screenshot from 2016-09-07 16 27 36

Each GitHub button embed, loaded via an iframe, introduces a few seconds of additional network overhead (2.6/2.9s in a few cases on cable).

screen shot 2016-09-03 at 9 50 09 am

Overall these are responsible for 1500ms+ of activity on the main thread (including time for JS execution) that isn't really offering us that much value:

screenshot from 2016-09-07 16 40 22

I landed a PR dropping them in favor of folks clicking through to the repo link at the top of each listing instead: nasa/code-nasa-gov@fc44bee

screen shot 2016-09-03 at 9 48 08 am

This change also lets us remove 92 additional requests from the current network waterfall:

screen shot 2016-09-03 at 9 51 30 am

and nixes 100KB+ from the overall payload.
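For illustration, here's the shape of the change. The markup below is a sketch, not the exact diff: the repo name and the ghbtns.com embed URL are stand-ins for whatever each entry actually used.

```html
<!-- Before (illustrative): each of the 290+ entries embedded a third-party
     button iframe, costing an extra request plus subframe work per entry. -->
<iframe src="https://ghbtns.com/github-btn.html?user=nasa&repo=openmct&type=fork&count=true"
        frameborder="0" scrolling="0" width="100" height="20"></iframe>

<!-- After: a plain link to the repo, with no extra requests at all. -->
<a href="https://github.com/nasa/openmct">View on GitHub</a>
```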

We head back into the network panel and what do we see? Well, images again. We've optimised them all, but there's still this
really large background image that ideally would be <100KB:

screenshot from 2016-09-07 16 32 27

Let's fix that by running it through another optimisation pass, saving 75% on the size of that asset: nasa/code-nasa-gov@ba96415

Looking through the network panel again we've got lots of web fonts. Whaa:

screenshot from 2016-09-07 16 44 37

We don't want to mess with the design of this page just yet, but it definitely has too many fonts on there. Let's at least make sure we load them in asynchronously. If we switched to a Web Font Loader (like this) and used it asynchronously, we would avoid blocking the page while loading in Angular and Bootstrap.

```html
<link href='//' rel='stylesheet' type='text/css'>
<link href='//' rel='stylesheet' type='text/css'>
<link href='//' rel='stylesheet' type='text/css'>
<link href='//,600,700' rel='stylesheet' type='text/css'>
<link href='//' rel='stylesheet' type='text/css'>
```


The downside is that if we use it asynchronously, the rest of the page might render before the Web Font Loader is loaded and executed, which might introduce some FOUC (a flash of unstyled content). But for now, we're going to land it.
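A sketch of what that async switch could look like with the Web Font Loader. The family names and weights below are illustrative stand-ins for the five families the page actually loads:

```html
<script>
  // Tell the Web Font Loader which Google Fonts to pull in
  // (families here are placeholders, not the page's real font list):
  WebFontConfig = {
    google: { families: ['Open Sans:400,600,700', 'Roboto:400'] }
  };
  // Inject webfont.js asynchronously so font CSS no longer blocks rendering:
  (function (d) {
    var wf = d.createElement('script');
    wf.src = 'https://ajax.googleapis.com/ajax/libs/webfont/1.6.26/webfont.js';
    wf.async = true;
    d.head.appendChild(wf);
  })(document);
</script>
```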

Next, let's dig in and see what can be done about the Angular 1 perf issues in the app. If we CPU profile it in DevTools, we see that $digest cycles are taking over 2 seconds!

screenshot from 2016-09-07 17 28 29

If we wanted to get framework-specific here, we could use ng-idle-apply-timing, which runs the digest cycle without changing any data and collects the CPU profile. It measures how long dirty checking takes for each piece of data in our app (two-way data-binding, $watch expressions: things that add to the digest cycle time). Looking at this, we see that one idle digest cycle takes around a second. Not great.
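If you just want a rough number without the extension, a manual sketch of the same idea can be pasted into the DevTools console. This assumes the app is bootstrapped on <body>; adjust the element if ng-app lives elsewhere:

```html
<script>
  // Trigger a digest with no data changes and time it: roughly what
  // ng-idle-apply-timing measures. Run from the console on the live page.
  var $rootScope = angular.element(document.body).injector().get('$rootScope');
  console.time('idle digest');
  $rootScope.$apply();
  console.timeEnd('idle digest');
</script>
```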

screenshot from 2016-09-07 17 29 55

We can also get some insight into how many watch expressions Angular is evaluating by running ng-count-watchers, which goes through each element's scope, summing the total number of watchers found. In our case it's over 10,000:
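To make the mechanics concrete, here's a minimal sketch of that counting logic. Angular 1 scopes form a tree via $$childHead/$$nextSibling and keep their watchers in $$watchers, so walking the tree and summing lengths gives the total. The scope tree below is a hand-built mock, not a real $rootScope:

```javascript
// Recursively sum $$watchers over an Angular-1-shaped scope tree.
function countWatchers(scope) {
  var count = (scope.$$watchers || []).length;
  // Children form a linked list: $$childHead, then $$nextSibling chain.
  for (var child = scope.$$childHead; child; child = child.$$nextSibling) {
    count += countWatchers(child);
  }
  return count;
}

// Mock standing in for a real root scope with two child scopes:
var mockRoot = {
  $$watchers: [{}, {}],
  $$childHead: {
    $$watchers: [{}],
    $$childHead: null,
    $$nextSibling: {
      $$watchers: [{}, {}, {}],
      $$childHead: null,
      $$nextSibling: null
    }
  }
};

console.log(countWatchers(mockRoot)); // 6
```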

screenshot from 2016-09-07 17 37 35

This is unnecessary overhead, as all of our GitHub repo entries are loaded from a static JSON file (the data source for this app) and aren't going to change.

screenshot from 2016-09-07 17 43 22

We can greatly simplify this by switching from two-way to one-time bindings in our code instead. Sam did this over here: nasa/code-nasa-gov@8b2d7e7 and also bumped us up to the latest version of Angular, so the page would benefit from any performance improvements that have landed since the page was created.

```html
<div ng-repeat="project in filtered = (catalog | orderBy:'Update_Date':true | filter:search)" class="box" ng-show="project.tagFilter">
  <div class="title">
    <div class="title1">
      <h2>{{:: project.Software}}</h2>
    </div>
  </div>
</div>
```

Our improvements up to this point were deployed to code.nasa.gov on Thursday, September 8th. We're going to keep hacking on perf after this.

Update September 16th

The current version of the front-end didn't have a build step and there were places where it still used JavaScript to fetch a lot of partials, like the templates for entries, sharing and the license (taking 2s to fetch and 630ms+ to parse/execute on a fast Linux desktop):

screenshot from 2016-09-07 16 51 41

The site also doesn't currently concatenate any of its scripts or stylesheets, contributing to that high network request count. Introducing a simple build step here would allow us to start cutting away at some more low-hanging fruit. I landed a PR that added the following to try to ameliorate the issue:

  • Angular template caching for partials (no more XHRs required for any templates)
  • Scripts moved out of <head> and loaded asynchronously
  • A Gulp build process (ES2015 can come later) for script concatenation and minification
  • Vendor scripts (Angular, Bootstrap, etc.) localised to enable concatenation
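A sketch of what such a gulpfile might look like. The plugin choices (gulp-angular-templatecache, gulp-concat, gulp-uglify), module name, and paths are assumptions for illustration, not the exact contents of the PR:

```javascript
// gulpfile.js (sketch, Gulp 3-era syntax)
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var templateCache = require('gulp-angular-templatecache');

// Bake every HTML partial into $templateCache so templateUrl lookups
// become in-memory reads instead of XHRs:
gulp.task('templates', function () {
  return gulp.src('views/**/*.html')
    .pipe(templateCache('templates.js', { module: 'app' }))
    .pipe(gulp.dest('tmp'));
});

// Concatenate and minify vendor + app scripts into a single bundle:
gulp.task('scripts', ['templates'], function () {
  return gulp.src(['vendor/**/*.js', 'js/**/*.js', 'tmp/templates.js'])
    .pipe(concat('bundle.js'))
    .pipe(uglify())
    .pipe(gulp.dest('dist'));
});
```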

This takes the work that was previously required and reduces it, saving a few hundred milliseconds on the aggregate time for these two blocks of work.

To be clear, there's still a lot of work being done here. The difference is that we've switched to a single 319KB bundle for all scripts, rather than 7 network requests for scripts and up to 11 additional requests for HTML partials.

Room for improvement:
We could get more granular here, using template caching only for the partials needed for the initial view and continuing to XHR in the partials for Guide and Share.

Update November 11th

After reviewing the bottlenecks getting Angular 1 fast enough on mobile devices, we decided to completely rewrite the front-end using the PRPL pattern with Polymer (it made doing the rewrite using code-splitting techniques super easy and enabled us to ship in under a week). We'll be sharing more details on the implementation in the near future.


@sgelob commented Nov 12, 2016

Thank you for this audit. Please take a look at the website and the 3MB JavaScript file they are loading: it is neither loaded asynchronously nor gzipped.


@AdriVanHoudt AdriVanHoudt commented Nov 13, 2016

👌 but would have loved to see a full-on perf audit with Angular 1, since a lot of people are still using it (like me ^^)

@addyosmani addyosmani changed the title (initial version) audit (pre-rewrite) audit Mar 2, 2017