
GitHub Desktop now has a dark side

GitHub Desktop on Windows is a nice complement to developer tools such as Atom and Visual Studio. Now it visually complements those tools too! The latest update adds the ability to select a new dark theme.

GitHub Desktop with dark theme enabled

You can access this setting from the Options menu in GitHub Desktop.

GitHub Pages to upgrade to Jekyll 3.1.4

GitHub Pages will upgrade to the soon-to-be-released Jekyll 3.1.4 on May 23rd. The Jekyll 3.1.x branch brings significant performance improvements to the build process, adds a handful of helpful Liquid filters, and fixes a few minor bugs.

This should be a seamless transition for all GitHub Pages users, but if you have a particularly complex Jekyll site, we recommend building your site locally with the latest version of Jekyll 3.1.x prior to May 18th to ensure your site continues to build as expected.
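To test ahead of the upgrade, a minimal Gemfile along these lines pins the same Jekyll line locally (the pessimistic version constraint here is an assumption; check the published dependency list for the exact release in production):

```ruby
# Gemfile -- pin the Jekyll 3.1.x line that Pages will run.
source "https://rubygems.org"
gem "jekyll", "~> 3.1"
```

Then run `bundle install` followed by `bundle exec jekyll build` and compare the generated `_site` directory against what your site currently serves.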

For more information, see the Jekyll changelog and if you have any questions, we encourage you to get in touch with us.

Edit: To ensure a smooth transition for users, the upgrade has been rescheduled to May 23rd.

Expanded webhook events

Webhooks are one of the more powerful ways to extend GitHub. They allow internal tools and third-party integrations to subscribe to specific activity on GitHub and receive notifications (via an HTTP POST) to an external web server when those events happen. They are often used to trigger CI builds, deploy applications, or update external bug trackers.

Based on your feedback, we've expanded the kinds of events to which you can subscribe. New events include:

  • Editing an issue or pull request's title or body
  • Changing a repository's visibility to public or private
  • Deleting a repository
  • Editing an issue comment, pull request comment, or review comment
  • Deleting an issue comment, pull request comment, or review comment

When the action in question is an edit, the webhook's payload will helpfully point out what changed.
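A minimal sketch of a receiver inspecting that information (the payload shape follows the public webhook documentation for edited events, where a `changes` object maps each edited field to its previous value; the helper name is ours):

```ruby
require "json"

# Summarize what an "edited" webhook delivery changed, using the payload's
# `changes` object, which maps each edited field to its previous value.
def summarize_changes(payload)
  changes = payload.fetch("changes", {})
  changes.map do |field, diff|
    "#{field} changed (was: #{diff["from"].inspect})"
  end
end

payload = JSON.parse(<<~JSON)
  {
    "action": "edited",
    "issue": { "number": 42, "title": "New title" },
    "changes": { "title": { "from": "Old title" } }
  }
JSON

puts summarize_changes(payload)
```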

changed payload

These expanded webhook events are now available on GitHub.com. For more information, check out the developer blog and take a look at the documentation for a full list of webhook events.

Delivering Octicons with SVG

GitHub.com no longer delivers its icons via icon font. Instead, we’ve replaced all the Octicons throughout our codebase with SVG alternatives. While the changes are mostly under the hood, you’ll immediately feel the benefits of the SVG icons.

Octicon comparison

Switching to SVG renders our icons as images instead of text, locking nicely to whole pixel values at any resolution. Compare the zoomed-in icon font version on the left with the crisp SVG version on the right.

Why SVG?

Icon font rendering issues

Icon fonts have always been a hack. We originally used a custom font with our icons as unicode symbols. This allowed us to include our icon font in our CSS bundle. Simply adding a class to any element would make our icons appear. We could then change the size and color on the fly using only CSS.

Unfortunately, even though these icons were vector shapes, they’d often render poorly on 1x displays. In Webkit-based browsers, you’d get blurry icons depending on the browser’s window width. Since our icons were delivered as text, sub-pixel rendering meant to improve text legibility actually made our icons look much worse.

Page rendering improvements

Since our SVG is injected directly into the markup (more on why we used this approach in a bit), we no longer see a flash of unstyled content as the icon font is downloaded, cached, and rendered.



As laid out in Death to Icon Fonts, some users override GitHub’s fonts. For dyslexic users, certain typefaces can be more readable. For those who changed their fonts, our font-based icons rendered as empty squares. This broke GitHub’s page layouts and conveyed no meaning. SVGs display regardless of font overrides. For screen readers, SVG gives us the ability to add pronounceable alt attributes, or leave them off entirely.

Properly sized glyphs

For each icon, we currently serve a single glyph at all sizes. Since the loading of our site is dependent on the download of our icon font, we were forced to limit the icon set to just the essential 16px shapes. This led to some concessions on the visuals of each symbol since we’d optimized for the 16px grid. When scaling our icons up in blankslates or marketing pages, we’re still showing the 16px version of the icon. With SVGs, we can easily fork the entire icon set and offer more appropriate glyphs at any size we specify. We could have done this with our icon fonts, but then our users would need to download twice as much data. Possibly more.

Ease of authoring

Building custom fonts is hard. A few web apps have popped up to solve this pain. Internally, we’d built our own. With SVG, adding a new icon could be as trivial as dragging another SVG file into a directory.

We can animate them

We’re not saying we should, but we could, though SVG animation does have some practical applications—preloader animations, for example.


Our Octicons appear nearly 2500 times throughout GitHub’s codebase. Prior to SVG, Octicons were included as simple spans: <span class="octicon octicon-alert"></span>. To switch to SVG, we first added a Rails helper for injecting SVG paths directly into our markup. Relying on the helper allowed us to test various methods of delivering SVG to our staff before enabling it for the public. Should a better alternative to SVG come along, or if we need to revert back to icon fonts for any reason, we’d only have to change the output of the helper.

Helper usage

Input:

<%= octicon(:symbol => "plus") %>

Output:

<svg aria-hidden="true" class="octicon octicon-plus" width="12" height="16" role="img" version="1.1" viewBox="0 0 12 16">
    <path d="M12 9H7v5H5V9H0V7h5V2h2v5h5v2z"></path>
</svg>

Our approach

You can see we’ve landed on injecting the SVGs directly into our page markup. This allows us the flexibility to change the color of the icons on the fly with CSS, using the fill: declaration.

Instead of an icon font, we now have a directory of SVG shapes whose paths are directly injected into the markup by our helper based on which symbol we choose. For example, if we want an alert icon, we call the helper <%= octicon(:symbol => "alert") %>. It looks for the icon of the same file name and injects the SVG.
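A simplified sketch of how such a helper can work (hypothetical code; GitHub’s real helper also handles sizing, accessibility attributes, and caching). Here the helper takes the symbol’s raw SVG source as an argument so it stays self-contained; a Rails version would read it from a directory of .svg files instead:

```ruby
# Hypothetical helper: given a symbol name and that symbol's raw SVG source,
# inline the SVG with the octicon classes stamped on so existing CSS applies.
def octicon(symbol, svg_source)
  svg_source.sub("<svg", %(<svg aria-hidden="true" class="octicon octicon-#{symbol}"))
end

plus = %(<svg width="12" height="16" viewBox="0 0 12 16"><path d="M12 9H7v5H5V9H0V7h5V2h2v5h5v2z"/></svg>)
puts octicon("plus", plus)
```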

We tried a number of approaches when adding SVG icons to our pages. Given the constraints of GitHub’s production environment, some were dead-ends.

  1. External .svg — We first attempted to serve a single external “svgstore”. We’d include individual sprites using the <use> element. With our current cross-domain security policy and asset pipeline, we found it difficult to serve the SVG sprites externally.
  2. SVG background images — This wouldn’t let us color our icons on the fly.
  3. SVGs linked via <img> and the src attribute — This wouldn’t let us color our icons on the fly.
  4. Embedding the entire “svgstore” in every view and using <use> — It just didn’t feel right to embed every SVG shape we have on every single page across GitHub.com, especially if a page didn’t include a single icon.


We’ve found there were no adverse effects on pageload or performance when switching to SVG. We’d hoped for a more dramatic drop in rendering times, but often performance has more to do with perception. Since SVG icons are being rendered like images in the page with defined widths and heights, the page doesn’t have nearly as much jank.

We were also able to kill a bit of bloat from our CSS bundles since we’re no longer serving the font CSS.

Drawbacks & Gotchas

  • Firefox still has pixel-rounding errors in SVG, though the icon font had the same issue.
  • You may have to wrap these SVGs with another div if you want to give them a background color.
  • Since SVG is being delivered as an image, some CSS overrides might need to be considered. If you see anything weird in our layouts, let us know.
  • Internet Explorer needs defined width and height attributes on the svg element in order for it to be sized correctly.
  • We were serving both SVG and our icon font during our transition. This would cause IE to crash while we were still applying font-family to each of the SVG icons. This was cleared up as soon as we transitioned fully to SVG.


By switching from icon fonts, we can serve our icons more easily, more quickly, and more accessibly. And they look better. Enjoy.

Two years of bounties

Despite the best efforts of its writers, software has vulnerabilities, and GitHub is no exception. Finding, fixing, and learning from past bugs is a critical part of keeping our users and their data safe on the Internet. Two years ago, we launched the GitHub Security Bug Bounty and it's been an incredible success. By rewarding the talented and dedicated researchers in the security industry, we discover and fix security vulnerabilities before they can be exploited.

Bugs squashed

Bounty Submissions Per Week

Of 7,050 submissions in the past two years, 1,772 warranted further review, helping us to identify and fix vulnerabilities spanning all of the OWASP top 10 vulnerability classifications. 58 unique researchers earned a cumulative $95,300 for the 102 medium to high risk vulnerabilities they reported. This chart shows the breakdown of payouts by severity and OWASP classification:

           A1  A2    A3  A4  A5  A6  A7   A8  A9  A10  A0  Sum
Low         4   5  11.5   1   8  11   5    7   3  3.5   1   60
Medium      2   1    12   0   1   1   5    1   0    3   0   26
High        2   2   3.5   0   2   1   0  0.5   0    0   2   13
Critical    3   0     0   0   0   0   0    0   0    0   0    3
Sum        11   8    27   1  11  13  10  8.5   4  6.5   3  102


We love it when a reported vulnerability ends up not being our fault. @kelunik and @bwoebi reported a browser vulnerability that caused GitHub's cookies to be sent to other domains. @ealf reported a browser bug that bypassed our JavaScript same-origin policy checks. We were able to protect our users from these vulnerabilities months before the browser vendors released patches.

Another surprising bug was reported by @cryptosense, who found that some RSA key generators were creating SSH keys that were trivially factorable. We ended up finding and revoking 309 weak RSA keys and now have validations checking if keys are factorable by the first 10,000 primes.
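That validation amounts to trial division of each key’s RSA modulus by small primes. A sketch using Ruby’s standard library (illustrative only; GitHub’s actual validation code isn’t public):

```ruby
require "prime"

# A modulus divisible by any small prime came from a broken key generator,
# so the key is trivially factorable and must be rejected.
SMALL_PRIMES = Prime.first(10_000)   # the 10,000th prime is 104,729

def trivially_factorable?(modulus)
  SMALL_PRIMES.any? { |p| (modulus % p).zero? }
end

puts trivially_factorable?(3 * 104_743)   # a "modulus" with a tiny factor => true
```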

In the first year of the bounty program, we saw reports mostly about our web services. In 2015, we received a number of reports for vulnerabilities in our desktop apps. @tunz reported a clever exploit against GitHub for Mac, allowing remote code execution. Shortly thereafter, @joernchen reported a similar bug in GitHub for Windows, following up a few months later with a separate client-side remote code execution vulnerability in Git Large File Storage (LFS).

Hacking with purpose

In 2015 we saw an amazing increase in the number of bounties donated to a good cause. GitHub matches bounties donated to 501(c)(3) organizations, and with the help of our researchers we contributed to the EFF, Médecins Sans Frontières, the Ada Initiative, the Washington State Burn Foundation, and the Tor Project. A big thanks to @ealf, @LukasReschke, @arirubinstein, @cryptosense, @bureado, @vito, and @s-rah for their generosity.

Get involved

In the first two years of the program, we paid researchers nearly $100,000. That's a great start, but we hope to further increase participation in the program. So, fire up your favorite proxy and start poking at GitHub.com. When you find a vulnerability, report it and join the ranks of our leaderboard. Happy hacking!

January 28th Incident Report

Last week GitHub was unavailable for two hours and six minutes. We understand how much you rely on GitHub and consider the availability of our service one of the core features we offer. Over the last eight years we have made considerable progress towards ensuring that you can depend on GitHub to be there for you and for developers worldwide, but a week ago we failed to maintain the level of uptime you rightfully expect. We are deeply sorry for this, and would like to share with you the events that took place and the steps we’re taking to ensure you're able to access GitHub.

The Event

At 00:23am UTC on Thursday, January 28th, 2016 (4:23pm PST, Wednesday, January 27th) our primary data center experienced a brief disruption in the systems that supply power to our servers and equipment. Slightly over 25% of our servers and several network devices rebooted as a result. This left our infrastructure in a partially operational state and generated alerts to multiple on-call engineers. Our load balancing equipment and a large number of our frontend applications servers were unaffected, but the systems they depend on to service your requests were unavailable. In response, our application began to deliver HTTP 503 response codes, which carry the unicorn image you see on our error page.

Our early response to the event was complicated by the fact that many of our ChatOps systems were on servers that had rebooted. We do have redundancy built into our ChatOps systems, but this failure still caused some amount of confusion and delay at the very beginning of our response. One of the biggest customer-facing effects of this delay was that our status site wasn't set to status red until 00:32am UTC, eight minutes after the site became inaccessible. We consider this to be an unacceptably long delay, and will ensure faster communication to our users in the future.

Initial notifications for unreachable servers and a spike in exceptions related to Redis connectivity directed our team to investigate a possible outage in our internal network. We also saw an increase in connection attempts that pointed to network problems. While later investigation revealed that a DDoS attack was not the underlying problem, we spent time early on bringing up DDoS defenses and investigating network health. Because we have experience mitigating DDoS attacks, our response procedure is now habit and we are pleased we could act quickly and confidently without distracting other efforts to resolve the incident.

With our DDoS shields up, the response team began to methodically inspect our infrastructure and correlate these findings back to the initial outage alerts. The inability to reach all members of several Redis clusters led us to investigate uptime for devices across the facility. We discovered that some servers were reporting uptime of several minutes, but our network equipment was reporting uptimes that revealed they had not rebooted. Using this, we determined that all of the offline servers shared the same hardware class, and the ones that rebooted without issue were a different hardware class. The affected servers spanned many racks and rows in our data center, which resulted in several clusters experiencing reboots of all of their member servers, despite the clusters' members being distributed across different racks.

As the minutes ticked by, we noticed that our application processes were not starting up as expected. Engineers began taking a look at the process table and logs on our application servers. These explained that the lack of backend capacity was a result of processes failing to start due to our Redis clusters being offline. We had inadvertently added a hard dependency on our Redis cluster being available within the boot path of our application code.

By this point, we had a fairly clear picture of what was required to restore service and began working towards that end. We needed to repair our servers that were not booting, and we needed to get our Redis clusters back up to allow our application processes to restart. Remote access console screenshots from the failed hardware showed boot failures because the physical drives were no longer recognized. One group of engineers split off to work with the on-site facilities technicians to bring these servers back online by draining the flea power to bring them up from a cold state so the disks would be visible. Another group began rebuilding the affected Redis clusters on alternate hardware. These efforts were complicated by a number of crucial internal systems residing on the offline hardware. This made provisioning new servers more difficult.

Once the Redis cluster data was restored onto standby equipment, we were able to bring the Redis-server processes back online. Internal checks showed application processes recovering, and a healthy response from the application servers allowed our HAProxy load balancers to return these servers to the backend server pool. After verifying site operation, the maintenance page was removed and we moved to status yellow. This occurred two hours and six minutes after the initial outage.

The following hours were spent confirming that all systems were performing normally, and verifying there was no data loss from this incident. We are grateful that much of the disaster mitigation work put in place by our engineers was successful in guaranteeing that all of your code, issues, pull requests, and other critical data remained safe and secure.

Future Work

Complex systems are defined by the interaction of many discrete components working together to achieve an end result. Understanding the dependencies for each component in a complex system is important, but unless these dependencies are rigorously tested it is possible for systems to fail in unique and novel ways. Over the past week, we have devoted significant time and effort towards understanding the nature of the cascading failure which led to GitHub being unavailable for over two hours. We don’t believe it is possible to fully prevent the events that resulted in a large part of our infrastructure losing power, but we can take steps to ensure recovery occurs in a fast and reliable manner. We can also take steps to mitigate the negative impact of these events on our users.

We traced the hardware issue that left servers unable to see their own drives after power-cycling to a known firmware bug, and we are updating the firmware across our fleet. Updating our tooling to automatically open issues for the team when new firmware updates are available will force us to review the changelogs against our environment.

We will be updating our application’s test suite to explicitly ensure that our application processes start even when certain external systems are unavailable and we are improving our circuit breakers so we can gracefully degrade functionality when these backend services are down. Obviously there are limits to this approach and there exists a minimum set of requirements needed to serve requests, but we can be more aggressive in paring down the list of these dependencies.
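The circuit-breaker idea can be sketched in a few lines (a generic illustration, not GitHub’s code): after a few consecutive failures the breaker opens and callers immediately get a fallback instead of waiting on a dead backend.

```ruby
# Minimal circuit-breaker sketch (illustrative, not GitHub's implementation):
# after `threshold` consecutive failures the breaker opens and callers get
# the fallback immediately instead of blocking on an unavailable service.
class CircuitBreaker
  def initialize(threshold: 3)
    @threshold = threshold
    @failures  = 0
  end

  def open?
    @failures >= @threshold
  end

  def call(fallback:)
    return fallback if open?          # fail fast while the breaker is open
    result = yield
    @failures = 0                     # a success closes the breaker again
    result
  rescue StandardError
    @failures += 1
    fallback
  end
end

redis_breaker = CircuitBreaker.new(threshold: 2)
redis_breaker.call(fallback: []) { raise "connection refused" }  # degraded, not crashed
```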

We are reviewing the availability requirements of our internal systems that are responsible for crucial operations tasks such as provisioning new servers so that they are on-par with our user facing systems. Ultimately, if these systems are required to recover from an unexpected outage situation, they must be as reliable as the system being recovered.

A number of less technical improvements are also being implemented. Strengthening our cross-team communications would have shaved minutes off the recovery time. Predefining escalation strategies during situations that require all hands on deck would have enabled our incident coordinators to spend more time managing recovery efforts and less time navigating documentation. Improving our messaging to you during this event would have helped you better understand what was happening and set expectations about when you could expect future updates.

In Conclusion

We realize how important GitHub is to the workflows that enable your projects and businesses to succeed. All of us at GitHub would like to apologize for the impact of this outage. We will continue to analyze the events leading up to this incident and the steps we took to restore service. This work will guide us as we improve the systems and processes that power GitHub.

Update on 1/28 service outage

On Thursday, January 28, 2016 at 00:23am UTC, we experienced a severe service outage that impacted GitHub.com. We know that any disruption in our service can impact your development workflow, and are truly sorry for the outage. While our engineers are investigating the full scope of the incident, I wanted to quickly share an update on the situation with you.

A brief power disruption at our primary data center caused a cascading failure that impacted several services critical to GitHub.com's operation. While we worked to recover service, GitHub.com was unavailable for two hours and six minutes. Service was fully restored at 02:29am UTC. Last night we completed the final procedure to fully restore our power infrastructure.

Millions of people and businesses depend on GitHub. We know that our community feels the effects of our site going down deeply. We’re actively taking measures to improve our resilience and response time, and will share details from these investigations.

GitHub implements Subresource Integrity

With Subresource Integrity (SRI), using GitHub is safer than ever. SRI tells your browser to double check that our Content Delivery Network (CDN) is sending the right JavaScript and CSS to your browser. Without SRI, an attacker who is able to compromise our CDN could send malicious JavaScript to your browser. To get the benefits of SRI, make sure you're using a modern browser like Google Chrome.

New browser security features like SRI are making the web a safer place. They don't do much good if websites don't implement them though. We're playing our role, and encourage you to consider doing the same.
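Under the hood, an `integrity` value is just a base64-encoded digest of the asset, which the browser recomputes for whatever the CDN actually delivered. A sketch of generating one (the `sha256-` prefix format comes from the SRI specification; the file name is a placeholder):

```ruby
require "base64"
require "digest"

# Compute a Subresource Integrity value for an asset body; the browser
# re-hashes the file it downloads and refuses to execute it on mismatch.
def sri_value(content)
  "sha256-" + Base64.strict_encode64(Digest::SHA256.digest(content))
end

puts %(<script src="app.js" integrity="#{sri_value("console.log('hi');")}"></script>)
```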

You can read more about Subresource Integrity and why we implemented it on the GitHub Engineering blog.

GitHub Desktop is now available

The new GitHub Desktop is now available. It's a fast, easy way to contribute to projects from OS X and Windows. Whether you're new to GitHub or a seasoned user, GitHub Desktop is designed to simplify essential steps in your GitHub workflow and replace GitHub for Mac and Windows with a unified experience across both platforms.


Branch off

Branches are essential to proposing changes and reviewing code on GitHub, and they’re always available in GitHub Desktop’s repository view. Just select the current branch to switch branches or create a new one.


Craft the perfect commit by selecting the files—or even the specific lines—that make up a change directly from a diff. You can commit your changes or open a pull request without leaving GitHub Desktop or using the command line.

Merge and Deploy

Browse commits on local and remote branches to quickly and clearly see what changes still need to be merged. You can also merge your code to the master branch for deployment right from the app.


Ready to start collaborating? Download GitHub Desktop. If you're using GitHub for Mac or Windows, the upgrade is automatic.

Adopting the Open Code of Conduct

Update: TODO Group will not be continuing work on the open code of conduct. See the followup post for more information.

We are proud to be working with the TODO Group on the Open Code of Conduct, an easy-to-reuse code of conduct for open source communities. We have adopted the Open Code of Conduct for the open source projects that we maintain, including Atom, Electron, Git LFS, and many others. The Open Code of Conduct does not apply to all activity on GitHub, and it does not alter GitHub's Terms of Service.

Open source software is used by everyone, and we believe open source communities should be a welcoming place for all participants. By adopting and upholding a Code of Conduct, communities can communicate their values, establish a baseline for how people are expected to treat each other, and outline a process for dealing with unwelcome behavior when it arises.

The Open Code of Conduct is inspired by the codes of conduct and diversity statements of several other communities, including Django, Python, Ubuntu, Contributor Covenant, and Geek Feminism. These communities are leaders in making open source welcoming to everyone.

If your project doesn't already have a code of conduct, then we encourage you to check out the Open Code of Conduct and consider if your community can commit to upholding it. Read more about it on the TODO Group blog.

GitHub Extension for Visual Studio is open source

Last April we released the GitHub Extension for Visual Studio, which lets you work on GitHub repositories in Visual Studio 2015. To celebrate Microsoft's final release of Visual Studio 2015, we're making the GitHub Extension for Visual Studio open source under the MIT License.

We'd like to thank Microsoft for their help and support in the development of the GitHub extension. In addition, this project wouldn't have been possible without open source tools, libraries, and assorted projects that are publicly available. We look forward to contributing back to the community and helping other developers leverage our work to create their own extensions for Visual Studio.

Download the GitHub Extension for Visual Studio now to see it in action. To file an issue or contribute to the project, head on over to the repository. We look forward to your pull requests!

Read-only deploy keys

You can now create deploy keys with read-only access. A deploy key is an SSH key that is stored on your server and grants access to a single GitHub repository. They are often used to clone repositories during deploys or continuous integration runs. Deploys sometimes involve merging branches and pushing code, so deploy keys have always allowed both read and write access. Because write access is undesirable in many cases, you now have the ability to create deploy keys with read-only access.

viewing and adding deploy keys

New deploy keys created through GitHub.com will be read-only by default and can be given write access by selecting "Allow write access" during creation. Access level can be specified when creating deploy keys from the API as well.
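From the API, a deploy key is created by POSTing to the repository's keys endpoint with a `read_only` flag, per the public API documentation. A sketch of building that request (the owner, repo, key, and token below are placeholders):

```ruby
require "json"
require "net/http"
require "uri"

# Build the POST request for creating a read-only deploy key via the
# GitHub API; owner, repo, key, and token are placeholders.
def deploy_key_request(owner:, repo:, title:, key:, read_only: true)
  uri = URI("https://api.github.com/repos/#{owner}/#{repo}/keys")
  req = Net::HTTP::Post.new(uri)
  req["Authorization"] = "token <personal-access-token>"
  req["Content-Type"]  = "application/json"
  req.body = JSON.generate(title: title, key: key, read_only: read_only)
  req
end

req = deploy_key_request(owner: "octocat", repo: "example",
                         title: "ci-deploy", key: "ssh-rsa AAAA... ci@example.com")
# Send with: Net::HTTP.start(req.uri.host, req.uri.port, use_ssl: true) { |h| h.request(req) }
```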

The GitHub Engineering Blog

We are happy to introduce GitHub's Engineering Blog to the world. Starting today, you can read details about our infrastructure, learn about our development practices, and hear about the knowledge we've gained while running the world's largest code collaboration platform.

You can also get updates by following our Engineering Twitter account @GitHubEng.

Happy reading!

Eight lessons learned hacking on GitHub Pages for six months

Believe it or not, just over a year ago, GitHub Pages, the documentation hosting service that powers nearly three-quarters of a million sites, was little more than a 100-line shell script. Today, it's a fully independent, feature-rich OAuth application that effortlessly handles well over a quarter million requests per minute. We wanted to take a look back at what we learned from leveling up the service over a six month period.

What's GitHub Pages

GitHub Pages is GitHub's static-site hosting service. It’s used by government agencies like the White House to publish policy, by big companies like Microsoft, IBM, and Netflix to showcase their open source efforts, and by popular projects like Bootstrap, D3, and Leaflet to host their software documentation. Whenever you push to a specially named branch of your repository, the content is run through the Jekyll static site generator, and served via its own domain.

Eating our own ice cream

At GitHub, we're a big fan of eating our own ice cream (some call it dogfooding). Many of us have our own, personal sites hosted on GitHub Pages, and many GitHub-maintained projects like Hubot and Electron, along with several of our own sites, take advantage of the service as well. This means that when the product slips below our own heightened expectations, we're the first to notice.

We like to say that there's a Venn diagram of things that each of us are passionate about, and things that are important to GitHub. Whenever there's significant overlap, it's win-win, and GitHubbers are encouraged to find time to pursue their passions. The recent improvements to GitHub Pages, a six-month sprint by a handful of Hubbers, was one such project. Here's a quick look back at eight lessons we learned:

Lesson one: Test, test, and then test again

Before touching a single line of code, the first thing we did was create integration tests to mimic and validate the functionality experienced by users. This included things you might expect, like making sure a user's site built without throwing an error, but also specific features like supporting different flavors of Markdown rendering or syntax highlighting.

This meant that as we made radical changes to the code base, like replacing the shell script with a fully-fledged Ruby app, we could move quickly with confidence that everyday users wouldn't notice the change. And as we added new features, we continued to do the same thing, relying heavily on unit and integration tests, backed by real-world examples (fixtures) to validate each iteration. Like the rest of GitHub, nothing got deployed unless all tests were green.
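The shape of one of those checks can be sketched like this (hypothetical code: `build_site` is a toy stand-in for the real builder, which drove full Jekyll builds against fixture repositories):

```ruby
# A trivial stand-in for the real site build: render a Markdown fixture,
# raising on failure the way the production pipeline does.
def build_site(markdown)
  raise ArgumentError, "empty fixture" if markdown.strip.empty?
  markdown.gsub(/^# (.+)$/, '<h1>\1</h1>')   # toy Markdown rendering
end

# The integration check: an everyday fixture must build without error, and
# the output must contain the rendered feature under test.
html = build_site("# Hello, Pages")
raise "build regression" unless html.include?("<h1>Hello, Pages</h1>")
```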

Lesson two: Use public APIs, and when they don't exist, build them

One of our goals was to push the Pages infrastructure outside the GitHub firewall, such that it could function like any third-party service. Today, if you view your OAuth application settings you'll notice an entry for GitHub Pages. Internally, we use the same public-facing Git clone endpoints to grab your site's content that you use to push it, and the same public-facing repository API endpoints to grab repository metadata that you might use to build locally.

For us, that meant adding a few public APIs, like the inbound Pages API and outbound PageBuildEvent webhook. There's a few reasons why we chose to use exclusively public APIs and to deny ourselves access to "the secret sauce". For one, security and simplicity. Hitting public facing endpoints with untrusted user content meant all page build requests were routed through existing permission mechanisms. When you trigger a page build, we build the site as you, not as GitHub. Second, if we want to encourage a strong ecosystem of tools and services, we need to ensure the integration points are sufficient to do just that, and there's no better way to do that than to put your code where your mouth is.

Lesson three: Let the user make the breaking change

Developing a service is vastly different than developing an open source project. When you're developing a software project, you have the luxury of semantic versioning and can implement radical, breaking changes without regret, as users can upgrade to the next major version at their convenience (and thus ensure their own implementation doesn't break before doing so). With services, that's not the case. If we implement a change that's not backwards compatible, hundreds of thousands of sites will fail to build on their next push.

We made several breaking changes. For one, the Jekyll 2.x upgrade switched the default Markdown engine, meaning if users didn't specify a preference, we chose one for them, and that choice had to change. In order to minimize this burden, we decided it was best for the user, not GitHub, to make the breaking change. After all, there's nothing more frustrating than somebody else "messing with your stuff".

For months leading up to the Jekyll 2.x upgrade, users who didn't specify a Markdown processor would get an email on each push, letting them know that Maruku was going the way of the dodo, and that they should upgrade to Kramdown, the new default, at their convenience. There were some pain points, to be sure, but it's preferable to set an hour aside to perform the switch and verify the output locally than to push a minor change, find that your entire site won't publish, and spend hours of frustration diagnosing the issue.

Lesson four: In every communication, provide an out

We made a big push to improve the way we communicated with GitHub Pages users. First, we began pushing descriptive error messages when users' builds failed, rather than an unhelpful "page build failed" error, which would require the user to either build the site locally or email GitHub support for additional context. Each error message let you know exactly what happened, and exactly what you needed to do to fix it. Most importantly, each error included a link to a help article specific to the error you received.

Errors were a big step, but still weren't a great experience. We wanted to prevent errors before they occurred. We created the GitHub Pages Health Check and silently ran automated checks for common DNS misconfigurations on each build. If your site's DNS wasn't optimally configured, such as being pointed to a deprecated IP address, we'd let you know before it became a problem.
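The real checks live in the open source github-pages-health-check gem; as an illustrative sketch (the gem's actual API differs, and a real check would first resolve the domain's A records, e.g. with Ruby's Resolv), the deprecated-IP check boils down to comparing resolved addresses against a known-bad list:

```ruby
# Illustrative sketch only, not the github-pages-health-check gem's code.
# These two addresses are long-deprecated GitHub Pages IPs.
DEPRECATED_IPS = %w[207.97.227.245 204.232.175.78].freeze

# Given the IP addresses a custom domain resolves to, return a warning
# string if any of them is deprecated, or nil if the DNS looks healthy.
def deprecated_ip_warning(addresses)
  stale = addresses & DEPRECATED_IPS
  return nil if stale.empty?
  "Your site's DNS points to deprecated GitHub Pages IP(s): #{stale.join(', ')}"
end
```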

Finally, we wanted to level up our documentation to prevent misconfigurations in the first place. In addition to overhauling all our GitHub Pages help documentation, we reimagined the getting-started guide as a tutorial-style quick start, lowering the barrier to a first GitHub Pages site from hours to minutes, and we published a list of our dependencies and the exact versions running in production.

This meant that every time you got a communication from us, be it an error, a warning, or just a question, you'd immediately know what to do next.

Lesson five: Optimize for your ideal use case, not the most common

While GitHub Pages is used for all sorts of crazy things, the service is all about creating beautiful user, organization, and project pages to showcase your open source efforts on GitHub. Lots of users were doing just that, but ironically, it used to be really difficult to do so. For example, to list your open source projects on an organization site, you'd have to make dozens of client-side API calls, and hope your visitor didn't hit the API limit, or leave the site while they waited for it to load.

We exposed repository and organization metadata to the page build process, not because it was the most commonly used feature, but because it was at the core of the product's use case. We wanted to make it easier to do the right thing — to create great software, and to tell the world about it. And we've seen a steady increase in open source marketing and showcase sites as a result.
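With that metadata exposed under the `site.github` namespace (provided today by the jekyll-github-metadata plugin), listing an organization's projects becomes a single server-side loop at build time instead of dozens of client-side API calls. A minimal Liquid sketch:

```liquid
<ul>
  {% for repository in site.github.public_repositories %}
    <li>
      <a href="{{ repository.html_url }}">{{ repository.name }}</a>:
      {{ repository.description }}
    </li>
  {% endfor %}
</ul>
```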

Lesson six: Successful efforts are cross-team efforts

If we did our job right, you didn't notice a thing, but the GitHub Pages backend has been completely replaced. Whereas before each build occurred in a shared environment as part of a worker queue, today each build occurs in its own Docker-backed sandbox. This ensures greater consistency (and security) between builds.

Getting there required a cross-team effort between the GitHub Pages, Importer, and Security teams to create Hoosegow, a Ruby Gem for executing untrusted Ruby code in a disposable Docker sandbox. No one team could have created it alone, nor would the solution have been as robust without the vastly different use cases, but both products and the end user experience are better as a result.
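Hoosegow's own API is beyond the scope of this post, but the underlying idea can be sketched as a throwaway `docker run` invocation driven from Ruby. Everything below (the `pages-builder` image name, the mount paths) is hypothetical, not Hoosegow's or GitHub's actual code:

```ruby
require "open3"

# Sketch of the sandboxing idea behind Hoosegow: run the untrusted build
# step in a disposable container (--rm) with no network access and a
# read-only mount of the site source, so the host stays isolated.
def sandbox_command(site_dir)
  ["docker", "run", "--rm",
   "--network", "none",
   "-v", "#{site_dir}:/srv/site:ro",
   "pages-builder",
   "jekyll", "build", "-s", "/srv/site", "-d", "/srv/output"]
end

# Execute the build and surface stderr if the sandboxed process fails.
def sandboxed_build(site_dir)
  stdout, stderr, status = Open3.capture3(*sandbox_command(site_dir))
  raise "Page build failed: #{stderr}" unless status.success?
  stdout
end
```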

Lesson seven: Match user expectations, then exceed them

Expectations are a powerful force. Everywhere on GitHub you can expect @mentions and emoji to "just work". For historical reasons, that wasn't the case with GitHub Pages, and we got many confused support requests as a result. Rather than embark on an education campaign or otherwise go against user expectations, we implemented emoji and @mention support within Jekyll, ensuring an expectation-consistent experience regardless of what part of GitHub you were on.
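Today that support is packaged as ordinary Jekyll plugins (jemoji and jekyll-mentions), which a site can also enable locally so previews match what GitHub Pages renders in production:

```yaml
# _config.yml: enable the same emoji and @mention rendering locally
# that GitHub Pages applies at build time
plugins:
  - jemoji           # turns :shipit:-style codes into emoji images
  - jekyll-mentions  # turns @username into a link to that GitHub profile
```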

The only thing better than meeting expectations is exceeding them. Traditionally, users expected about a ten to fifteen minute lag between the time a change was pushed and when that change would be published. Through our improvements, we were able to significantly speed up page builds internally, and by sending a purge request to our third-party CDN on each build, users could see changes reflected in under ten seconds in most cases.
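The purge itself is a small HTTP request. As a hedged sketch (not GitHub's actual code), many CDNs, Fastly included, invalidate a cached URL when they receive an HTTP PURGE request for it:

```ruby
require "net/http"
require "uri"

# Ruby's net/http supports custom HTTP verbs via subclasses of
# Net::HTTPRequest; PURGE is the verb many CDNs use for cache invalidation.
class Purge < Net::HTTPRequest
  METHOD = "PURGE"
  REQUEST_HAS_BODY = false
  RESPONSE_HAS_BODY = true
end

# Ask the CDN to drop its cached copy of a just-rebuilt page.
# Hypothetical helper, not GitHub's production code.
def purge_cached_url(url)
  uri = URI(url)
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(Purge.new(uri))
  end
end
```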

Lesson eight: It makes business sense to support open source

Jekyll may have been originally created to power GitHub Pages, but since then, it has become its own independent open source project with its own priorities. GitHubbers have always been part of the Jekyll community, but if you look at the most recent activity, you'll notice a sharp uptick in contributions, and many new contributors from GitHub.

If you use open source, whether it's the core of your product or a component you didn't have to write yourself, it's in your best interest to play an active role in supporting the open source community, ensuring the project has the resources it needs, and shaping its future. We've started "open source Fridays" here at GitHub, where the entire company takes a break from the day-to-day to give back to the open source community that makes GitHub possible. Today, despite its beginnings, GitHub Pages needs Jekyll, not the other way around.

The numbers

Throughout all these improvements, the number of GitHub Pages sites has grown exponentially, with just shy of three-quarters of a million user, organization, and project sites hosted on GitHub Pages today.

GitHub Pages sites over time

But the number of sites tells only half the story. Day-to-day use of GitHub Pages has seen similar exponential growth over the past three years, with about 20,000 successful site builds completing each day as users continuously push updates to their sites' content.

GitHub Pages builds per day

Lastly, you'll notice that when we introduced page build warnings in mid-2014 to proactively warn users about potential misconfigurations, users took the opportunity to improve their sites: both the percentage of failed builds and the number of builds generating warnings decreased heading into 2015.

GitHub Pages is a small but powerful service tied to every repository on GitHub. It's deceptively simple, and I encourage you to create your first GitHub Pages site today, or if you're already a GitHub Pages expert, tune in this Saturday to level up your GitHub Pages game.

Happy publishing!

Large Scale DDoS Attack on github.com

We are currently experiencing the largest DDoS (distributed denial of service) attack in github.com's history. The attack began around 2AM UTC on Thursday, March 26, and involves a wide combination of attack vectors. These include every vector we've seen in previous attacks as well as some sophisticated new techniques that use the web browsers of unsuspecting, uninvolved people to flood github.com with high levels of traffic. Based on reports we've received, we believe the intent of this attack is to convince us to remove a specific class of content.

We are completely focused on mitigating this attack. Our top priority is making sure github.com is available to all our users while deflecting malicious traffic. Please watch our status site or follow @githubstatus on Twitter for real-time updates.
