
Write line notes from your phone

We love using GitHub to write notes on specific lines in a diff — now it's super easy to do from any smartphone!

Just bring up your favorite pull request or commit, tap the line you'd like to write a note on, and start the conversation!

photo of line notes being written on an iPhone

Security: Heartbleed vulnerability

On April 7, 2014 information was released about a new vulnerability (CVE-2014-0160) in OpenSSL, the cryptography library that powers the vast majority of private communication across the Internet. This library is key for maintaining privacy between servers and clients, and confirming that Internet servers are who they say they are.

This vulnerability, known as Heartbleed, would allow an attacker to steal the keys that protect communication, user passwords, and even the contents of a vulnerable server's memory. This represents a major risk to large portions of private traffic on the Internet, including github.com.

Note: GitHub Enterprise servers are not affected by this vulnerability. They run an older OpenSSL version which is not vulnerable to the attack.

As of right now, we have no indication that the attack has been used against github.com. That said, the nature of the attack makes it hard to detect so we're proceeding with a high level of caution.

What is GitHub doing about this?

UPDATE: 2014-04-08 16:00 PST - All browser sessions that were active prior to the vulnerability being addressed have been reset. See below for more info.

We've already completed a number of measures and continue to work on the issue.

  1. We've patched all our systems using the newer, protected versions of OpenSSL. We started upgrading yesterday after the vulnerability became public and completed the roll out today. We are also working with our providers to make sure they're upgrading their systems to minimize GitHub's exposure.

  2. We've recreated and redeployed new SSL keys and reset internal credentials. We have also revoked our older certs just to be safe.

  3. We've forcibly reset all browser sessions that were active prior to the vulnerability being addressed on our servers. You may have been logged out and have to log back into GitHub. This was a proactive measure to defend against potential session hijacking attacks that may have taken place while the vulnerability was open.

Prior to this incident, GitHub made a number of enhancements to mitigate attacks like this. We deployed Perfect Forward Secrecy at the end of last year, which makes it impossible to use stolen encryption keys to read old encrypted communication. We are working to find more opportunities like this.

What should you do about Heartbleed right now?

Right now, GitHub has no indication that the vulnerability has been used outside of testing scenarios. However, out of an abundance of caution, you can:

  1. Change your GitHub password. Be sure your password is strong; for more information, see What is a strong password?
  2. Enable Two-Factor Authentication.
  3. Revoke and recreate personal access and application tokens.
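If you run your own servers as well, a quick first check is whether the installed OpenSSL falls in the affected range (1.0.1 through 1.0.1f; 1.0.1g and the 0.9.8/1.0.0 branches are not affected). A minimal sketch in shell — the `is_heartbleed_vulnerable` helper is ours for illustration, not a standard tool, and a version check alone can't tell you whether a distribution backported the fix:

```shell
# Heartbleed (CVE-2014-0160) affects OpenSSL 1.0.1 through 1.0.1f.
# 1.0.1g and the 0.9.8/1.0.0 branches are not affected.
is_heartbleed_vulnerable() {
  case "$1" in
    1.0.1|1.0.1[a-f]) echo "vulnerable" ;;
    *)                echo "not affected" ;;
  esac
}

# To check the locally installed library you could run:
#   is_heartbleed_vulnerable "$(openssl version | awk '{print $2}')"
is_heartbleed_vulnerable 1.0.1f   # → vulnerable
is_heartbleed_vulnerable 1.0.1g   # → not affected
```

Note that many distributions patched 1.0.1e-style packages in place, so treat "vulnerable" here as "needs a closer look", not a verdict.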

Stay tuned

GitHub works hard to keep your code safe. We are continuing to respond to this vulnerability and will post updates as things progress. For more information as it's available, keep an eye on Twitter or the GitHub Blog.

OctoTales • UC Berkeley

Computer science professor Armando Fox is one of the thousands of teachers who use GitHub to give their students hands-on experience writing software in teams.

On a recent trip to UC Berkeley, we spoke with Armando and some of his students about open source, education, and the essential experience gained by building software for a real customer.

Teachers and students are eligible for free private repositories on GitHub. Learn more at education.github.com.

Partial commits in GitHub for Mac

Sometimes when you’re in the zone, you get a ton of work done before you have a chance to pause and commit. You want to break the commit down to describe the logical changes you’ve made, and it doesn’t always break down cleanly file by file. You want to select some parts of your changes to commit at a time. That’s easy in GitHub for Mac.

Select one or more lines to commit by clicking on the line numbers in the gutter. In the latest release, you can select a block of changes at a time. Hover over the right hand side of the line numbers to get a preview of what will be selected, and click to select.

Animated gif of GitHub for Mac single line/block selection

You can select multiple lines or blocks of changes by clicking and dragging. The left of the line numbers will select line by line, and the right will select block by block.

Now you can commit your selected changes, leaving the rest for a later commit.
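On the command line, the closest analogue is `git add -p`, which interactively lets you pick hunks to stage. Since that's interactive, here's a non-interactive sketch of the same idea using `git apply --cached` to stage one hunk while leaving the other change in the working tree (file names and contents are just for illustration):

```shell
# Partial commits from the CLI: stage only one of two edits to a file.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q .
git config user.email "you@example.com"
git config user.name "Example"

printf 'one\ntwo\nthree\n' > notes.txt
git add notes.txt && git commit -qm 'initial'

# Make two separate edits to the same file.
printf 'ONE\ntwo\nTHREE\n' > notes.txt

# `git add -p` would let us pick hunks interactively; non-interactively,
# we can stage a single hunk by applying a patch to the index only.
cat > first-hunk.patch <<'EOF'
diff --git a/notes.txt b/notes.txt
--- a/notes.txt
+++ b/notes.txt
@@ -1,2 +1,2 @@
-one
+ONE
 two
EOF
git apply --cached first-hunk.patch
git commit -qm 'capitalize first line'

git status --short notes.txt   # the second edit is still unstaged
```

GitHub for Mac's gutter selection is doing the same kind of index surgery for you, one click at a time.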

L is for Labels

We've added support for editing labels on existing issues with the l hotkey.

l-is-for-labels

You can also edit milestones and assignees the same way.

Collaborating with Lists

At GitHub, we use lists for collaborating on software development, because lists are a simple and powerful tool for collaborating on anything. That's why we're introducing better visualization of list arrangements in our rendered prose diff view.

In Markdown, making a list is incredibly easy. You can make an unordered list by preceding list items with either a * or a -.

* Item
* Item
* Item

Nested lists are very useful for associating supplementary information, such as notes, with an item. To nest a list, indent the nested items:

* A list item
  * A nested list's first item
  * A nested list's second item
  * A nested list's third item
* Another list item

For example, many teams use issues and pull requests to keep track of what they're working on right now, and use a Backlog to keep track of features that haven't been scheduled yet:

The Product Backlog

Tracking Changes Over Time

Being able to see changes over time gives teams a perspective on the features and requirements that have been added to projects. We can see at a glance when features are added:

Added Items

Removed:

removed Items

Or changed:

Changed Items

Whether numbered or not, the order of items is usually significant. Rendered prose diffs show you when items have been moved up or down:

Moved Items

Work together, better

It's easy to see when list items have been added, removed, changed, or moved, just as it's easy to review changes to all of your documents in GitHub.

And unlike other products that place your documents in their own "silos," you can use as much or as little of the GitHub toolset to manage and track your documents. Pull requests, organizations, commits, repos, issues, comments, source diffs, and rendered prose diffs: Everything is available and everything works together with your development tools.

GitHub makes collaborating with lists 1,337% more awesome by tracking and visualizing the changes over time using the same powerful tools your team already uses to manage your code.

No Conversation Left Behind

If you're anything like us, you get involved in lots of conversations on GitHub over the course of your day. Sometimes, a good conversation from earlier in the day is left behind and forgotten about, and you don't know if anyone else has commented after you (to tell you they completely agree with your well-written opinion, of course!).

To make sure you're always up-to-date, the page title now lets you know how many comments have been added since you last peeked at the conversation.

Unread Tab

When you come back to the conversation, any unread comments will be highlighted, making it easy to pick up right where you left off:

Viewing Unread Comments

Switch your picture with ease

Good news, everyone! Changing your public profile picture just got easier.

  1. Click the "Account Settings" icon in the header.
  2. Upload a picture of your awesome new haircut.
  3. Crop the picture and save it.

your_profile

You can keep using Gravatar; we just want to make it easier to update when the time comes to rebrand yourself.

Recent activity for authentication credentials

In addition to seeing your browser session activity, you can now view activity for your SSH keys and OAuth tokens as well.

SSH key activity

Find the most recent activity for each key in the SSH keys section of your account settings.

SSH keys overview

OAuth token activity

For OAuth tokens, check out the Applications section of your account settings.

OAuth applications overview

As always, we recommend that you keep an eye on these credentials and remove any keys or tokens that you no longer need.

Showcasing interesting projects in Explore

explore

We love watching trending repositories on GitHub every day. All kinds of interesting projects bubble up and there is always something new to catch your eye. We want to collect repositories we find interesting into categories for you.

Showcases are a new way to discover related repositories on GitHub. We take the most interesting trending repositories and curate lists to explore by topic. A lot like the staff shelf at your local book store.

On a showcase page, you'll find the full list of repositories that we're showcasing, including why we think they're special. On the right, you'll find a search box covering all showcases, along with related and newly created showcases.

You can browse the showcase listing page to read through them all. You can also subscribe to the atom feed and stay up-to-date.

Thanks for reading and happy Exploring! :telescope:

Update on Julie Horvath's Departure

This weekend, GitHub employee Julie Horvath spoke publicly about negative experiences she had at GitHub that contributed to her resignation. I am deeply saddened by these developments and want to comment on what GitHub is doing to address them.

We know we have to take action and have begun a full investigation. While that’s ongoing, and effective immediately, the relevant founder has been put on leave, as has the referenced GitHub engineer. The founder’s wife discussed in the media reports has never had hiring or firing power at GitHub and will no longer be permitted in the office.

GitHub has grown incredibly fast over the past two years, bringing a new set of challenges. Nearly a year ago we began a search for an experienced HR Lead and that person came on board in January 2014. We still have work to do. We know that. However, making sure GitHub employees are getting the right feedback and have a safe way to voice their concerns is a primary focus of the company.

As painful as this experience has been, I am super thankful to Julie for her contributions to GitHub. Her hard work building Passion Projects has made a huge positive impact on both GitHub and the tech community at large, and she's done a lot to help us become a more diverse company. I would like to personally apologize to Julie. It’s certain that there were things we could have done differently. We wish Julie well in her future endeavors.

Chris Wanstrath
CEO & Co-Founder

Repository metadata and plugin support for GitHub Pages

We've added several commonly requested features, making GitHub Pages an even better place to host websites for you and your projects.

Repository metadata

First, Jekyll sites on GitHub Pages now have access to some useful repository information such as the latest SHA1; the project title, owner, and description; common URLs like the download and clone URL; and the exact version of various dependencies used to build your site like Jekyll or Ruby.

Within pages and posts, repository information is available in the site.github namespace and can be displayed, for example, using {{ site.github.project_title }}.
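As a sketch of what that looks like in practice, a page footer could pull several of these values together. The field names below (site.github.owner_name, site.github.repository_url, site.github.build_revision) are our reading of the metadata documentation, so check the full list before relying on them:

```liquid
<footer>
  <!-- Rendered by Jekyll from the site.github namespace -->
  <p>{{ site.github.project_title }} by {{ site.github.owner_name }}</p>
  <p><a href="{{ site.github.repository_url }}">View on GitHub</a></p>
  <p>Built from {{ site.github.build_revision }}</p>
</footer>
```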

See the project metadata documentation for the complete list.

@mentions, emoji, and redirects

Second, GitHub Pages now supports three Jekyll plugins:

  • Jemoji and jekyll-mentions enable emoji and @mentions in your Jekyll posts and pages to work just like you'd expect when interacting with a repository on GitHub.com.

  • Jekyll-redirect-from provides an easy way to redirect visitors to the proper URL when the filename of a post or page changes.

For more information on using plugins with GitHub Pages, see the GitHub Pages plugin documentation.
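For example, jekyll-redirect-from works through a page's YAML front matter: list the old paths under a redirect_from key and the plugin generates redirects to the page's new location. The post title and path below are made up for illustration:

```yaml
---
title: My Renamed Post
redirect_from:
  - /2014/03/01/old-post-name/
---
```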

Happy documenting!

Denial of Service Attacks

On Tuesday, March 11th, GitHub was largely unreachable for roughly 2 hours as the result of an evolving distributed denial of service (DDoS) attack. I know that you rely on GitHub to be available all the time, and I'm sorry we let you down. I'd like to explain what happened, how we responded to it, and what we're doing to reduce the impact of future attacks like this.

Background

Over the last year, we have seen a large number and variety of denial of service attacks against various parts of the GitHub infrastructure. There are two broad types of attack that we think about when we're building our mitigation strategy: volumetric and complex.

We have designed our DDoS mitigation capabilities to allow us to respond to both volumetric and complex attacks.

Volumetric Attacks

Volumetric attacks are intended to exhaust some resource through the sheer weight of the attack. This type of attack has been seen with increasing frequency lately through UDP based amplification attacks using protocols like DNS, SNMP, or NTP. The only way to withstand an attack like this is to have more available network capacity than the sum of all of the attacking nodes or to filter the attack traffic before it reaches your network.

Dealing with volumetric attacks is a game of numbers. Whoever has more capacity wins. With that in mind, we have taken a few steps to allow us to defend against these types of attacks.

We operate our external network connections at very low utilization. Our internet transit circuits are able to handle almost an order of magnitude more traffic than our normal daily peak. We also continually evaluate opportunities to expand our network capacity. This helps to give us some headroom for larger attacks, especially since they tend to ramp up over a period of time to their ultimate peak throughput.

In addition to managing the capacity of our own network, we've contracted with a leading DDoS mitigation service provider. A simple Hubot command can reroute our traffic to their network which can handle terabits per second. They're able to absorb the attack, filter out the malicious traffic, and forward the legitimate traffic on to us for normal processing.

Complex Attacks

Complex attacks are also designed to exhaust resources, but generally by performing expensive operations rather than saturating a network connection. Examples of these are things like SSL negotiation attacks, requests against computationally intensive parts of web applications, and the "Slowloris" attack. These kinds of attacks often require significant understanding of the application architecture to mitigate, so we prefer to handle them ourselves. This allows us to make the best decisions when choosing countermeasures and tuning them to minimize the impact on legitimate traffic.

First, we devote significant engineering effort to hardening all parts of our computing infrastructure. This involves things like tuning Linux network buffer sizes, configuring load balancers with appropriate timeouts, applying rate limiting within our application tier, and so on. Building resilience into our infrastructure is a core engineering value for us that requires continuous iteration and improvement.

We've also purchased and installed a software and hardware platform for detecting and mitigating complex DDoS attacks. This allows us to perform detailed inspection of our traffic so that we can apply traffic filtering and access control rules to block attack traffic. Having operational control of the platform allows us to very quickly adjust our countermeasures to deal with evolving attacks.

Our DDoS mitigation partner is also able to assist with these types of attacks, and we use them as a final line of defense.

So what happened?

At 21:25 UTC we began investigating reports of connectivity problems to github.com. We opened an incident on our status site at 21:29 UTC to let customers know we were aware of the problem and working to resolve it.

As we began investigating we noticed an apparent backlog of connections at our load balancing tier. When we see this, it typically corresponds with a performance problem with some part of our backend applications.

After some investigation, we discovered that we were seeing several thousand HTTP requests per second distributed across thousands of IP addresses for a crafted URL. These requests were being sent to the non-SSL HTTP port and were then being redirected to HTTPS, which was consuming capacity in our load balancers and in our application tier. Unfortunately, we did not have a pre-configured way to block these requests and it took us a while to deploy a change to block them.

By 22:35 UTC we had blocked the malicious requests and the site appeared to be operating normally.

Despite the fact that things appeared to be stabilizing, we were still seeing a very high number of SSL connections on our load balancers. After some further investigation, we determined that this was an additional vector that the attack was using in an effort to exhaust our SSL processing capacity. We were able to respond quickly using our mitigation platform, but the countermeasures required significant tuning to reduce false positives which impacted legitimate customers. This resulted in approximately 25 more minutes of downtime between 23:05-23:30 UTC.

By 23:34 UTC, the site was fully operational. The attack continued for quite some time even once we had successfully mitigated it, but there were no further customer impacts.

What did we learn?

The vast majority of attacks that we've seen in the last several months have been volumetric in terms of bandwidth, and we'd grown accustomed to using throughput as a way of confirming that we were under attack. This attack did not generate significantly more bandwidth but it did generate significantly more packets per second. It didn't look like what we had grown to expect an attack to look like and we did not have the monitoring we needed to detect it as quickly as we would have liked.

Once we had identified the problem, it took us much longer than we'd like to mitigate it. We had the ability to mitigate attacks of this nature in our load balancing tier and in our DDoS mitigation platform, but they were not configured in advance. It took us valuable minutes to configure, test, and tune these countermeasures which resulted in a longer than necessary downtime.

We're happy that we were able to successfully mitigate the attack but we have a lot of room to improve in terms of how long the process takes.

Next steps?

  1. We have already made adjustments to our monitoring to better detect and alert us of traffic pattern changes that are indicative of an attack. In addition, our robots are now able to automatically enable mitigation for the specific traffic pattern that we saw during the attack. These changes should dramatically reduce the amount of time it takes to respond to a wide variety of attacks in the future and reduce their impact on our service.
  2. We are investigating ways to simulate attacks in a controlled manner so that we can test our countermeasures on a regular basis, building additional confidence in our mitigation tools and improving our response time in bringing them to bear.
  3. We are talking to some 3rd party security consultants to review our DDoS detection and mitigation capability. We do a good job mitigating attacks we've seen before, but we'd like to more proactively plan for attacks that we haven't yet encountered.
  4. Hubot is able to route our traffic through our mitigation partner and to apply templates to operate our mitigation platform for known attack types. We've leveled him up with some new templates for attacks like this one so that he can help us recover faster in the future.

Summary

This attack was painful, and even though we were able to successfully mitigate the effects of it, it took us far too long. We know that you depend on GitHub and our entire company is focused on living up to the trust you place in us. I take problems like this personally. We will do whatever it takes to improve how we respond to problems to ensure that you can rely on GitHub being available when you need us.

Thanks for your support!

Passion Projects Short Documentary: Timoni West

We're now 11 installments into our talk series Passion Projects, which we created to help surface and celebrate the work of incredible women in the tech industry.

We sat down with past speaker Timoni West to talk a little more about her background in design and more specifically, the role the Internet is playing in making data available and consumable for everyday people.

Since filming, Timoni has started working with Alphaworks.

Timezone-aware contribution graphs

Today we've made your contribution graphs timezone-aware. GitHub is used everywhere and we want to reflect that in our features. If you happen to work from Japan, Australia or Ulan Bator, we want to count your contributions from your perspective.

When counting commits, we use the timezone information present in the timestamps for those commits. Pull requests and issues opened on the web will use the timezone of your browser. If you use the API you can also specify your timezone.

We don't want to mess up your current contribution streaks, so only contributions after Monday 10 March 2014 (UTC) will be timezone-aware.

Enjoy your time(zone)!
