Exporting Your Organization Audit Log

The organization audit log lets you quickly review actions performed by members of your organization on GitHub. You may need to look for specific activity, or even search through your organization's entire audit log, to aid in legal cases or to keep a record of suspicious activity.

You now have the tools to do just that: you can export your organization's audit log in either JSON or CSV format.

Audit log export
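Once exported, the JSON format lends itself to scripted review. Here's a minimal sketch of tallying entries by action type; the field names (`action`, `actor`) are assumptions about the export layout, so check a real export before relying on them:

```python
import json
from collections import Counter

def summarize_actions(audit_log_json):
    """Count audit log entries per action type.

    Assumes each entry is an object with an "action" field,
    as a JSON export of the audit log might contain.
    """
    entries = json.loads(audit_log_json)
    return Counter(entry["action"] for entry in entries)

# A tiny hand-made sample standing in for a real export.
sample = json.dumps([
    {"actor": "octocat", "action": "repo.create"},
    {"actor": "hubot", "action": "repo.create"},
    {"actor": "octocat", "action": "team.add_member"},
])
print(summarize_actions(sample))
```

The same approach extends to filtering by actor or date range when auditing a specific incident.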

Improving the GitHub workflow for the Microsoft Community

At Microsoft Build 2015, we announced deep GitHub integration in Visual Studio 2015, along with GitHub Enterprise 2.2.0. This release will help developers who work with the Microsoft stack make GitHub Enterprise a seamless part of their existing workflow. If you'd prefer to skip the summary, you can see a full list of new features in the release notes. If you're interested in the highlights, read on.

GitHub Enterprise now supported on Hyper-V and available on Microsoft Azure

It's important to be able to deploy and run GitHub Enterprise wherever you want. If your team works on the Microsoft stack, we have great news. With the 2.2.0 release, you can now host GitHub Enterprise in the Windows ecosystem using Hyper-V for local hosting or Azure for cloud hosting.

Powerful Collaboration - GitHub Enterprise

To request a 45-day trial of GitHub Enterprise on Azure, just let us know.

GitHub Extension for Visual Studio

The new GitHub Extension for Visual Studio lets you work on GitHub repositories within Visual Studio 2015. Once you download the latest version of Visual Studio, you can log in to GitHub, clone and create repositories, and publish your local work without leaving your IDE. To see a walkthrough of the features, check out this video on Microsoft's Channel 9.


Microsoft Developer Assistant

In case you missed it, Microsoft also announced the availability of the Microsoft Developer Assistant for Visual Studio 2015—a way for developers to search for code on GitHub.com from Visual Studio. Just enter your query and you will see links to public code on GitHub.com, along with information about the project.

Wait, there’s more!

Beyond the Microsoft integration you’ll find lots more to like in Enterprise 2.2.0 including:

  • PDF rendering
  • Mobile web notifications
  • Quick pull requests
  • Xen hypervisor support

For a full list of what’s new, check out the release notes.

If you already use GitHub Enterprise, you can download the latest release from enterprise.github.com.

If you are attending Build 2015 and want to learn more, visit the GitHub booth on the third floor.

Eight lessons learned hacking on GitHub Pages for six months

Believe it or not, just over a year ago, GitHub Pages, the documentation hosting service that powers nearly three-quarters of a million sites, was little more than a 100-line shell script. Today, it's a fully independent, feature-rich OAuth application that effortlessly handles well over a quarter million requests per minute. We wanted to take a look back at what we learned from leveling up the service over a six-month period.

What's GitHub Pages

GitHub Pages is GitHub's static-site hosting service. It’s used by government agencies like the White House to publish policy, by big companies like Microsoft, IBM, and Netflix to showcase their open source efforts, and by popular projects like Bootstrap, D3, and Leaflet to host their software documentation. Whenever you push to a specially named branch of your repository, the content is run through the Jekyll static site generator, and served via its own domain.

Eating our own ice cream

At GitHub, we're big fans of eating our own ice cream (some call it dogfooding). Many of us have our own personal sites hosted on GitHub Pages, and many GitHub-maintained projects like Hubot and Electron, along with sites like help.github.com, take advantage of the service as well. This means that when the product slips below our own heightened expectations, we're the first to notice.

We like to say that there's a Venn diagram of things that each of us is passionate about, and things that are important to GitHub. Whenever there's significant overlap, it's win-win, and GitHubbers are encouraged to find time to pursue their passions. The recent improvement effort for GitHub Pages, a six-month sprint by a handful of Hubbers, was one such project. Here's a quick look back at eight lessons we learned:

Lesson one: Test, test, and then test again

Before touching a single line of code, the first thing we did was create integration tests to mimic and validate the functionality experienced by users. This included things you might expect, like making sure a user's site built without throwing an error, but also specific features like supporting different flavors of Markdown rendering or syntax highlighting.

This meant that as we made radical changes to the code base, like replacing the shell script with a fully-fledged Ruby app, we could move quickly with confidence that everyday users wouldn't notice the change. And as we added new features, we continued to do the same thing, relying heavily on unit and integration tests, backed by real-world examples (fixtures) to validate each iteration. Like the rest of GitHub, nothing got deployed unless all tests were green.
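As a toy illustration of the fixture-driven approach described above: render each fixture and compare the result against a known-good expected output, deploying only when no failures remain. The `render` function here is a trivial stand-in for a real site build, not GitHub's actual test harness:

```python
def render(source):
    # Stand-in "build" step: wrap each input line in a paragraph tag.
    return "".join(f"<p>{line}</p>" for line in source.splitlines())

# Fixtures pair real-world input with the output users expect to see.
FIXTURES = {
    "hello world": "<p>hello world</p>",
    "line one\nline two": "<p>line one</p><p>line two</p>",
}

def run_fixture_tests():
    """Return the list of fixtures whose rendered output regressed."""
    return [src for src, expected in FIXTURES.items()
            if render(src) != expected]

# An empty list means all tests are green and it's safe to deploy.
print(run_fixture_tests())
```

The value of this setup is that a radical internal rewrite can be validated against the same fixtures, so everyday users never notice the change.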

Lesson two: Use public APIs, and when they don't exist, build them

One of our goals was to push the Pages infrastructure outside the GitHub firewall, such that it could function like any third-party service. Today, if you view your OAuth application settings you'll notice an entry for GitHub Pages. Internally, we use the same public-facing Git clone endpoints to grab your site's content that you use to push it, and the same public-facing repository API endpoints to grab repository metadata that you might use to build locally.

For us, that meant adding a few public APIs, like the inbound Pages API and outbound PageBuildEvent webhook. There are a few reasons why we chose to use exclusively public APIs and to deny ourselves access to "the secret sauce". For one, security and simplicity. Hitting public-facing endpoints with untrusted user content meant all page build requests were routed through existing permission mechanisms. When you trigger a page build, we build the site as you, not as GitHub. Second, if we want to encourage a strong ecosystem of tools and services, we need to ensure the integration points are sufficient to do just that, and there's no better way to do that than to put your code where your mouth is.
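As a sketch of what "build the site as you" looks like from the outside, here is how a third-party tool might request a Pages build through the public REST API (`POST /repos/{owner}/{repo}/pages/builds`). The token value is a placeholder, and only the request is constructed here, not sent:

```python
import urllib.request

def build_request(owner, repo, token):
    """Construct a POST request asking GitHub to rebuild a Pages site.

    Because the request authenticates as the user, the build runs with
    that user's permissions rather than GitHub's.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}/pages/builds"
    return urllib.request.Request(
        url,
        method="POST",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
    )

req = build_request("octocat", "octocat.github.io", "TOKEN")
print(req.get_method(), req.full_url)
```

The corresponding outbound side is the PageBuildEvent webhook, which notifies integrations when a build completes.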

Lesson three: Let the user make the breaking change

Developing a service is vastly different from developing an open source project. When you're developing a software project, you have the luxury of semantic versioning and can implement radical, breaking changes without regret, as users can upgrade to the next major version at their convenience (and thus ensure their own implementation doesn't break before doing so). With services, that's not the case. If we implement a change that's not backwards compatible, hundreds of thousands of sites will fail to build on their next push.

We made several breaking changes. For one, the Jekyll 2.x upgrade switched the default Markdown engine, meaning if users didn't specify a preference, we chose one for them, and that choice had to change. In order to minimize this burden, we decided it was best for the user, not GitHub, to make the breaking change. After all, there's nothing more frustrating than somebody else "messing with your stuff".

For months leading up to the Jekyll 2.x upgrade, users who didn't specify a Markdown processor would get an email on each push, letting them know that Maruku was going the way of the dodo and that they should upgrade to Kramdown, the new default, at their convenience. There were some pain points, to be sure, but it's preferable to set an hour aside to perform the switch and verify the output locally, rather than to push a minor change, only to find that your entire site won't publish, followed by hours of frustration trying to diagnose the issue.
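For users ready to make the switch, the change itself was a one-line setting in the site's `_config.yml` (the `markdown` key is standard Jekyll configuration):

```yaml
# _config.yml: opt in to the new default processor explicitly
markdown: kramdown
```

Setting it explicitly, rather than relying on the default, is what insulates a site from future changes to the default.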

Lesson four: In every communication, provide an out

We made a big push to improve the way we communicated with GitHub Pages users. First, we began pushing descriptive error messages when users' builds failed, rather than an unhelpful "page build failed" error, which would require the user to either build the site locally or email GitHub support for additional context. Each error message let you know exactly what happened, and exactly what you needed to do to fix it. Most importantly, each error included a link to a help article specific to the error you received.

Errors were a big step, but still weren't a great experience. We wanted to prevent errors before they occurred. We created the GitHub Pages Health Check and silently ran automated checks for common DNS misconfigurations on each build. If your site's DNS wasn't optimally configured, such as being pointed to a deprecated IP address, we'd let you know before it became a problem.
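A simplified sketch of the kind of check the health check runs: compare a site's resolved A record against a list of deprecated addresses and warn before anything breaks. The IP addresses below are documentation placeholders, not GitHub's actual ranges:

```python
# Placeholder values standing in for a real list of retired addresses.
DEPRECATED_IPS = {"192.0.2.1", "192.0.2.2"}

def check_a_record(resolved_ip):
    """Return a warning string, or None if the record looks fine."""
    if resolved_ip in DEPRECATED_IPS:
        return ("Your site's DNS points to a deprecated IP address; "
                "see the GitHub Pages custom-domain docs to update it.")
    return None

print(check_a_record("192.0.2.1"))    # warns: deprecated address
print(check_a_record("203.0.113.7"))  # None: nothing to report
```

Running a check like this silently on every build is what turns a future outage into a gentle warning today.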

Finally, we wanted to level up our documentation to prevent the misconfiguration in the first place. In addition to overhauling all our GitHub Pages help documentation, we reimagined pages.github.com as a tutorial quick-start, lowering the barrier for getting started with GitHub Pages from hours to minutes, and published a list of dependencies and the versions used in production.

This meant that every time you got a communication from us, be it an error, a warning, or just a question, you'd immediately know what to do next.

Lesson five: Optimize for your ideal use case, not the most common

While GitHub Pages is used for all sorts of crazy things, the service is all about creating beautiful user, organization, and project pages to showcase your open source efforts on GitHub. Lots of users were doing just that, but ironically, it used to be really difficult to do so. For example, to list your open source projects on an organization site, you'd have to make dozens of client-side API calls, and hope your visitor didn't hit the API limit, or leave the site while they waited for it to load.

We exposed repository and organization metadata to the page build process, not because it was the most commonly used feature, but because it was at the core of the product's use case. We wanted to make it easier to do the right thing — to create great software, and to tell the world about it. And we've seen a steady increase in open source marketing and showcase sites as a result.
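To sketch what this enables, an organization site can now list its projects with a single server-side Liquid loop instead of dozens of client-side API calls. The variable names below (such as `site.github.public_repositories`) are drawn from the metadata exposed to builds, but treat the exact field names as illustrative:

```liquid
<ul>
{% for repository in site.github.public_repositories %}
  <li><a href="{{ repository.html_url }}">{{ repository.name }}</a></li>
{% endfor %}
</ul>
```

Because the loop runs at build time, visitors get a fully rendered page with no API rate limits and no loading spinner.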

Lesson six: Successful efforts are cross-team efforts

If we did our job right, you didn't notice a thing, but the GitHub Pages backend has been completely replaced. Whereas before, each build would occur in the same environment as part of a worker queue, today, each build occurs in its own Docker-backed sandbox. This ensured greater consistency (and security) between builds.

Getting there required a cross-team effort between the GitHub Pages, Importer, and Security teams to create Hoosegow, a Ruby Gem for executing untrusted Ruby code in a disposable Docker sandbox. No one team could have created it alone, nor would the solution have been as robust without the vastly different use cases, but both products and the end user experience are better as a result.
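A rough sketch of the sandboxing idea behind Hoosegow: run the untrusted build step inside a throwaway container rather than in the shared worker environment. The image name and flags below are illustrative, not Hoosegow's actual configuration:

```python
def sandboxed_build_cmd(site_dir, image="pages-build:latest"):
    """Build a docker command that isolates an untrusted site build."""
    return [
        "docker", "run",
        "--rm",                         # discard the container afterwards
        "--network", "none",            # no network access for untrusted code
        "-v", f"{site_dir}:/site:ro",   # mount the site source read-only
        image, "jekyll", "build", "-s", "/site", "-d", "/out",
    ]

cmd = sandboxed_build_cmd("/tmp/site")
print(" ".join(cmd))
```

Each build getting a fresh, disposable environment is what delivers the consistency and security gains described above.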

Lesson seven: Match user expectations, then exceed them

Expectations are a powerful force. Everywhere on GitHub you can expect @mentions and emoji to "just work". For historical reasons, that wasn't the case with GitHub Pages, and we got many confused support requests as a result. Rather than embark on an education campaign or otherwise go against user expectations, we implemented emoji and @mention support within Jekyll, ensuring an expectation-consistent experience regardless of what part of GitHub you were on.

The only thing better than meeting expectations is exceeding them. Traditionally, users expected about a ten to fifteen minute lag between the time a change was pushed and when that change would be published. Through our improvements, we were able to significantly speed up page builds internally, and by sending a purge request to our third-party CDN on each build, users could see changes reflected in under ten seconds in most cases.

Lesson eight: It makes business sense to support open source

Jekyll may have been originally created to power GitHub Pages, but since then, it has become its own independent open source project with its own priorities. GitHubbers have always been part of the Jekyll community, but if you look at the most recent activity, you'll notice a sharp uptick in contributions, and many new contributors from GitHub.

If you use open source, whether it's the core of your product or a component that you didn't have to write yourself, it's in your best interest to play an active role in supporting the open source community, ensuring the project has the resources it needs, and shaping its future. We've started "open source Fridays" here at GitHub, where the entire company takes a break from the day-to-day to give back to the open source community that makes GitHub possible. Today, despite their beginnings, GitHub Pages needs Jekyll, not the other way around.

The numbers

Throughout all these improvements, the number of GitHub Pages sites has grown exponentially, with just shy of three-quarters of a million user, organization, and project sites hosted by GitHub Pages today.

GitHub Pages sites over time

But the number of sites tells only half the story. Day-to-day use of GitHub Pages has also seen similar exponential growth over the past three years, with about 20,000 successful site builds completing each day as users continuously push updates to their site's content.

GitHub Pages builds per day

Lastly, you'll notice that when we introduced page build warnings in mid-2014 to proactively warn users about potential misconfigurations, users took the opportunity to improve their sites, with the percentage of failed builds (and the number of builds generating warnings) decreasing as we entered 2015.

GitHub Pages is a small but powerful service tied to every repository on GitHub. It's deceptively simple, so create your first GitHub Pages site today, or if you're already a GitHub Pages expert, tune in this Saturday to level up your GitHub Pages game.

Happy publishing!

CodeConf 2015: Early Bird Tickets and Call for Proposals


CodeConf 2015, GitHub's premier open source event, will take place on June 25-26 in Tennessee. We hope you'll join us for what is sure to be a special community experience at the Bell Tower, in the heart of downtown Nashville.

We're pleased to announce that CodeConf is accepting proposals for talks beginning today. For guidelines around submissions, please take a look at the detailed form. The call for proposals ends May 10th at 11:59pm PDT.

CodeConf is dedicated to amplifying new voices from the amazing open source community. We will feature thoughtful and compelling sessions that will leave all attendees thinking differently about the open source ecosystem. We will also be celebrating the unique American city of Nashville by featuring local cuisine and artists throughout the conference. CodeConf will culminate in a party at the historic Country Music Hall of Fame, only a few blocks away from the Bell Tower.

Get your early bird ticket now! On May 25, ticket prices will increase by $100. Follow @codeconf on Twitter for regular updates on content, training sessions, and more.

Come with an open mind, and leave a better contributor.

Game Off III - Everyone's a Winner

Last month, we challenged you to fork a game repository and do something awesome with it based on our Tron-inspired theme, "the game has changed". Below are the submissions. They're all super fun and playable in your browser, so click around and enjoy.

And remember - while the contest has officially ended, the fun doesn't stop here. All of these games are open source. Read the code, fork the repository, and help improve them even further. Make them harder, make them easier, add more octocats, or put your own spin on them.

Now for some real user power...

Business Frog Jumps to Conclusions

Business Frog Jumps to Conclusions

Join Business Frog as he jumps through the dystopian world of software project management » view the source · play

Umbilicus Ascension

Umbilicus Ascension

A 4-player cooperative platformer where only 1 player can win » view the source · play

A Lighted Story

A Lighted Story

An HTML5 action game and interactive fiction » view the source · play



A 2D infinite musical platformer set in the dark » view the source · play

Floodgate Dungeon

Floodgate Dungeon

An infinite runner game set in a dungeon » view the source · play

Upstream Commit

Upstream Commit

Dodging branches may seem easy at first, but how long can you hold up as you approach terminal velocity? » view the source · play



A Tetris-like game where you have to collect code blocks and deploy them into applications » view the source · play



Avabranch has never been so much fun » view the source · play

Typing Knight

Typing Knight

A veggie-based clone of Fruit Ninja for your browser, where you type to slice » view the source · play



A 2D sci-fi platformer » view the source · play



Descend as many levels into the maze as possible without meeting your demise » view the source · play

Dig Deep

Dig Deep

Dig as deep as you can and collect as much gold as you can without getting killed » view the source · play

Board Free

Board Free

The classic SkiFree, but with snowboards » view the source · play

Pappu Pakia Fighter Cat

Pappu Pakia Fighter Cat

Nyan out of 10 cats prefer it » view the source · play



An Octocat and a jetpack. What's not to like? » view the source · play



Snake meets Tron » view the source · play

Flippy Cat

Flippy Cat

A clone of a clone of a Flappy Bird game, but with a twist » view the source · play

GitHub's 2014 Transparency Report

Like most online services, GitHub occasionally receives legal requests relating to user accounts and content, such as subpoenas or takedown notices. You may wonder how often we receive such requests or how we respond to them, and how they could potentially impact your projects. Transparency and trust are essential to GitHub and the open-source community, and we want to do more than just tell you how we respond to legal notices. In that spirit, here is our first transparency report on the user-related legal requests we received in 2014.

Types of Requests

We receive two categories of legal requests:

  1. Disclosure Requests — requests to disclose user information, such as subpoenas, court orders, search warrants, and national security orders; and
  2. Takedown Requests — requests to remove or block user content, such as government takedown requests and DMCA takedown notices.

Disclosure Requests

Subpoenas, Court Orders, and Search Warrants

We occasionally receive legal papers, such as subpoenas, that require us to disclose non-public information about account holders or projects. Typically these requests come from law enforcement agencies, but they may also come from civil litigants or government agencies. You can see our Guidelines for Legal Requests of User Data to learn more about how we respond to these requests.

Since many of these requests involve ongoing criminal investigations, there are heightened privacy concerns around disclosing the requests themselves. Further, they may often be accompanied by a court order that actually forbids us from giving notice to the targeted account holder.

In light of these concerns, we do not publish subpoenas or other legal requests to disclose private information. Nonetheless, in the interest of transparency, we'd like to provide as much information about these requests as we can.

Subpoenas, Court Orders, and Search Warrants Received

In the data below, we have counted every official request we have received seeking disclosure of user data, regardless of whether we disclosed the information or not.

There are several reasons why information may not be disclosed in response to a legal request. It may be that we do not have the requested data. It may be that the request was too vague such that we could not identify the data, or that it was otherwise defective. Sometimes the requesting party may simply withdraw the request. Other times, the requesting party may revise and submit another one. In cases where one request was replaced with a second, revised request, we would count that as two separate requests received. However, if we responded only to the revision, we would count that only as having responded to one request.

  Information Request Totals.
  Total Requests: 10.
  Percentage of Requests Where Information Was Disclosed: 70%.
  Percentage of Disclosures Where Affected Users Were Provided Notice: 43%.

It is also our policy to provide notice to affected account holders whenever possible; however, as noted previously, we are often forbidden by law from providing notice to the account holder. The following chart shows the breakdown of how frequently we are actually allowed to provide notice to the affected account holders.

  Percentage of Requests Resulting in Disclosure and Notice.
  Nothing Disclosed: 30%.
  Some or All Requested Information Disclosed: 70%.
  Looking only at the cases where information was disclosed:
  Provided Notice Before Disclosure: 43%.
  Prohibited from Providing Notice: 57%.

Accounts Affected by Subpoenas, Court Orders, and Search Warrants

Some requests may seek information about more than one account. Across the ten information disclosure requests we received in 2014, only forty total accounts were affected. For comparison, forty accounts is only 0.0005% of the 8 million active accounts on GitHub as of December 2014.

Types of Subpoenas, Court Orders, and Search Warrants Received

In 2014, we only received a handful of subpoenas. We did not receive any court orders or search warrants requiring us to disclose user data:

  Types of Information Requests.
  Subpoenas: 10.
  Court Orders: 0.
  Warrants: 0.

To help understand the difference between the numbers above:

  • Subpoenas include any legal process authorized by law but which does not require any prior judicial review, including grand jury subpoenas and attorney-issued subpoenas;
  • Court Orders include any order issued by a judge that is not a search warrant, including court orders issued under the Electronic Communications Privacy Act or Mutual Legal Assistance Treaty orders; and
  • Search Warrants are orders issued by a judge, upon a showing of probable cause under the Fourth Amendment to the U.S. Constitution, particularly describing the place to be searched and the data to be seized.

As noted above, many of the requests we receive are related to criminal investigations. We may also receive subpoenas from individuals involved in civil litigation or government agencies, such as the Federal Trade Commission, conducting a civil investigation. The following pie charts show the breakdown of the different types of requests we received in 2014.

  Types of investigations leading to information requests.
  Criminal: 60%.
  Civil: 40%.

  Types of subpoenas received in 2014.
  Grand Jury Subpoenas: 50%.
  FTC Subpoena: 20%.
  DMCA Subpoena: 10%.
  California State Court Subpoena: 10%.
  FBI Subpoena: 10%.

National Security Orders

There is another category of legal disclosure requests that we are not allowed to say much about. These include national security letters from law enforcement and orders from the Foreign Intelligence Surveillance Court. If one of these requests comes with a gag order—and they usually do—that not only prevents us from talking about the specifics of the request, but even the existence of the request itself. The courts are currently reviewing the constitutionality of these prior restraints on free speech, and GitHub supports the efforts to increase transparency in this area. Until such time, we are not even allowed to say if we've received zero of these requests—we can only report information about these types of requests in broad ranges:

  Total National Security Orders Received: 0 to 249.
  Total Number of Accounts Affected: 0 to 249.

Takedown Requests

Government Takedown Requests

In 2014, we started receiving a new kind of takedown request—requests from foreign governments to remove content. We evaluate such requests on a case-by-case basis; however, where content is deemed illegal under local laws, we may comply with such a request by blocking the content in that specific region.

Whenever we agree to comply with these requests, we are committed to providing transparency in at least two ways: by giving notice to the affected account holders, and also by posting the notices publicly. This is the approach we took, for example, when we were contacted last year by Roskomnadzor, the Russian Federal Service for Supervision of Communications, Information Technology and Mass Media. We reached out to each of the account holders to let them know we had received the request and, when we eventually blocked access to the content in Russia, we posted the notices to a public repository. Since that repository is public, anyone can view the notices to see what content was blocked. Here are the high-level numbers of content blocked in Russia:

  Roskomnadzor Notices Totals.
  Total Notices Processed: 3.
  Total Accounts Affected: 9.

To date, other than the Roskomnadzor notices, we have not blocked content at the request of any other foreign government. And because we are committed to transparency, if we agree to block content under similar circumstances in the future, we intend to follow the same protocol—providing notice to affected account holders and posting the requests publicly.

DMCA Takedown Notices

Many of the takedown requests we receive are notices submitted under the Digital Millennium Copyright Act, alleging that user content is infringing someone's copyright. Each time we receive a complete DMCA takedown notice, we redact any personal information and post it to a public repository.

DMCA Takedown Notices Received

Here is the total number of complete notices that we received and processed in 2014. In the case of takedown notices, this is the number of separate notices where we disabled content or asked our users to remove content:

  DMCA Totals.
  Takedown Notices: 258.
  Counter Notices or Retractions: 17.
  Notices of Legal Actions Filed: 0.

  Total Number of DMCA Notices, Counter Notices and Retractions by Month

Incomplete DMCA Takedown Notices Received

From time to time, we receive incomplete notices regarding copyright infringement. When we do, we ask the submitting party to revise it to comply with the legal requirements. Usually they will respond with a revised notice, but occasionally, they may resolve the issue on their own without resubmitting a revised notice. We don't currently keep track of how many incomplete notices we receive, or how often folks are able to work out their issues without sending a takedown notice.

Projects Affected by DMCA Takedown Requests

We also tabulated the total number of projects (e.g., repositories, Gists, Pages sites) affected by each notice. Here is a graph showing the total number of affected projects by month:

  Total Number of Projects Affected by DMCA Notices, Counter Notices and Retractions by Month

Note, however, that on October 16, 2014, we made a change to our DMCA Policy that impacted that number. Before the policy change, we would have counted each reported link to a repository as a single affected repository, even though it would have actually affected the whole network of forks. After the policy change, since we require notices to specify whether any forks are infringing, the "affected" number should more accurately reflect the actual number of repositories implicated by the takedown notice. Though it is too early to properly gauge the effect of this change, we noticed that the average number of repositories listed on a takedown notice increased from 2.7 (for the period of January 1 to October 15) to 3.2 (for the period of October 16 to December 31). The median number of affected projects remained the same for both periods: 1.0.


We want to be as open as possible to help you understand how legal requests may affect your projects. So we will be releasing similar transparency reports each year. If you have any questions, suggestions, or other feedback, please contact us.

Announcing Git Large File Storage (LFS)

Distributed version control systems like Git have enabled new and powerful workflows, but they haven't always been practical for versioning large files. We're excited to announce Git Large File Storage (LFS) as an improved way to integrate large binary files such as audio samples, datasets, graphics, and videos into your Git workflow.

Git LFS is a new, open source extension that replaces large files with text pointers inside Git, while storing the file contents on a remote server like GitHub.com or GitHub Enterprise.


Git LFS is easy to download and configure, works on all major platforms, and is open sourced under the MIT license.
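The pointer that replaces each large file is just a few lines of text: a version line, the content's SHA-256 hash, and its size. Here is a sketch of generating one; the layout mirrors the published pointer-file spec, though the helper itself is illustrative:

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build the small text pointer Git stores in place of a large file.

    The real file contents live on the LFS server (e.g. GitHub.com or
    GitHub Enterprise), addressed by this content hash.
    """
    oid = hashlib.sha256(data).hexdigest()
    return ("version https://git-lfs.github.com/spec/v1\n"
            f"oid sha256:{oid}\n"
            f"size {len(data)}\n")

print(lfs_pointer(b"pretend this is a 500 MB video"))
```

Because the pointer is tiny and content-addressed, clones stay fast and history stays small no matter how large the tracked binaries grow.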

Early access to Git LFS support on GitHub.com

We're ready to roll out Git LFS support to a select group of users. If you'd like to be one of the first to try it out on GitHub.com, sign up for early access using your GitHub account.

In the future, every repository on GitHub.com will support Git LFS by default.


Every user and organization on GitHub.com with Git LFS enabled will begin with 1 GB of free file storage and a monthly bandwidth quota of 1 GB. If your workflow requires higher quotas, you can easily purchase more storage and bandwidth for your account.

Want to start working with large files on GitHub.com? Sign up for early access.

Git Merge 2015 Approaches!


Next week, we'll converge in Paris to celebrate 10 years of Git. Thank you for helping us raise funds that benefit the Software Freedom Conservancy.

We look forward to hanging out in the beautiful La Gaîté Lyrique on April 8-9. We've got Git experts from across the industry coming to discuss the future of Git with you.

If you haven't already, check out the complete speaker lineup and schedule, featuring speakers from Google, Twitter, Atlassian, Microsoft, and more. The conference will be hosted by GitHub's own Scott Chacon.

On Thursday night we'll gather at the stunning La Cartonnerie to wish Git a happy tenth birthday with cocktails, snacks and entertainment.

There are only a few tickets left, so if you'd like to register, now is the time.

Social Coding Shirts now available in the Shop

Do you remember your first open source project? This shirt with an early GitHub motto will take you back to that first commit.

Social Coding Shirts

Get them in the GitHub Shop

Large Scale DDoS Attack on github.com

We are currently experiencing the largest DDoS (distributed denial of service) attack in github.com's history. The attack began around 2AM UTC on Thursday, March 26, and involves a wide combination of attack vectors. These include every vector we've seen in previous attacks as well as some sophisticated new techniques that use the web browsers of unsuspecting, uninvolved people to flood github.com with high levels of traffic. Based on reports we've received, we believe the intent of this attack is to convince us to remove a specific class of content.

We are completely focused on mitigating this attack. Our top priority is making sure github.com is available to all our users while deflecting malicious traffic. Please watch our status site or follow @githubstatus on Twitter for real-time updates.

Navigate branches from your phone

Branches are an essential part of collaborating using GitHub Flow. And it's now easier than ever to browse a repository's branches on your phone.

Using the new dropdown, you can access the recently active branches for a project or browse through all of its branches.


Scheduled Maintenance - Saturday 3/21/2015 @ 12:00 UTC

This Saturday, March 21st, 2015 at 12PM UTC we will be upgrading a large portion of our database infrastructure in order to further ensure a fast and reliable GitHub experience.

To minimize risk to customer data, the site will enter maintenance mode while the upgrade is performed. HTTP, API, and Git access to GitHub.com will be unavailable during this window, which we estimate will last no longer than 15 minutes. During the maintenance we will update our MySQL Server version, as well as move a large portion of our data to an isolated cluster. This will improve scalability and help sustain the growth of our data.

We will update our status page and @githubstatus at the beginning of maintenance and again when the maintenance is completed.

Introducing mobile web notifications

Web notifications on GitHub keep you apprised of the latest activity from the repositories you watch within your browser. With the addition of mobile web notifications, now you can stay up to date from your phone.

If you already use web notifications, you'll see a familiar indicator in the top right of every page whenever you have unread activity.

Mobile notifications

Use the switcher at the top of the page to filter your notifications. By default we show all your unread activity across the repositories you watch, but filtering to a specific repository—or even just the threads you're participating in—is just a couple taps away.

Switch contexts

When you want to skip a notification, you can always mark it as read. Tap the checkmark on the right of individual notifications and they're immediately updated. You can also use the link in each repository group's header to mark multiple notifications as read.

New to web or email notifications on GitHub? Head to your account settings to customize how and where you receive notifications for the repositories you watch.

PDF Viewing

We've been displaying 3D, map, and tabular files for a while now. We're now happy to add PDF documents to the list!

PDF being rendered

Simply browse to a PDF document and we'll render it in your browser like any other file. From presentations to papers, we've got you covered. Many thanks to Mozilla and every contributor to PDF.js. If you have any further questions, check out the help article.

The Game Has Changed

GitHub Game Off III

GitHub's Game Off is back, and this year it's a little different!

The Challenge

Take an existing game or game jam entry on GitHub, fork it and do something awesome with it. You can change the sprites, add a soundtrack, add a new level, port the game to a different platform, or... go plain crazy. Tackle it yourself or team up with some friends. Let your imagination run wild! The theme of the jam is... "the game has changed"!

You're encouraged to use open source libraries, frameworks, graphics, and sounds in your game, but you're free to use any technology you want. The only restriction is that the game should be web-based, i.e., playable in a web browser.

We'll feature some of our favorite and most creative entries on the GitHub blog.

Where to start

GitHub is a goldmine of content when it comes to games. Take a look at the following resources to see if there's one you'd be interested in forking and jamming on:

Please feel free to suggest others on Twitter using the hashtag #ggo15.


  • If you don't already have a GitHub account, sign up for a personal account now - it's free!
  • Be sure to follow @github on Twitter for updates.
  • Once you've found a game repository, fork it to your personal or organization GitHub account and get jamming!
  • Make sure your code is pushed to the default branch of your forked repository before April 13th at 13:37pm PDT!
  • Finally, fill out this short form and tell us about your entry by April 13th at 13:37pm PDT.

Comments / Questions / Help