From 0826b8915992db651791d916c98800955b611416 Mon Sep 17 00:00:00 2001 From: Daniel Stenberg Date: Tue, 19 May 2015 08:30:23 +0200 Subject: [PATCH] http2: more refreshed --- http2.fodt | 642 ++++++++++++++++++++++++++++------------------------- 1 file changed, 334 insertions(+), 308 deletions(-) diff --git a/http2.fodt b/http2.fodt index 0c23a30..56eca7a 100644 --- a/http2.fodt +++ b/http2.fodt @@ -1,24 +1,24 @@ - 2014-04-11T13:45:38.7305499022015-05-18T22:43:37.570392642P88DT3H13M42S456LibreOffice/4.4.2.2$Linux_X86_64 LibreOffice_project/40m0$Build-2http2 HTTP HTTP/2 networking protocol explainedThis document describes HTTP/2 at a technical and protocol level. Background,the protocol, the implementations and the future.http2 explainedDaniel StenbergFFFFFFNoneThis document describes http2 at a technical and protocol level. Background,the protocol, the implementations and the future.enDaniel Stenberg2014-04-26http2 http networking protocol explainedhttp2 explained0100 %080 %Index211101502 + 2014-04-11T13:45:38.7305499022015-05-19T08:30:14.698723814P88DT12H33M459LibreOffice/4.4.2.2$Linux_X86_64 LibreOffice_project/40m0$Build-2http2 HTTP HTTP/2 networking protocol explainedThis document describes HTTP/2 at a technical and protocol level. Background,the protocol, the implementations and the future.http2 explainedDaniel StenbergFFFFFFNoneThis document describes http2 at a technical and protocol level. Background,the protocol, the implementations and the future.enDaniel Stenberg2014-04-26http2 http networking protocol explainedhttp2 explained0100 %080 %Index211101502 - 114725 + 56993 0 26430 - 28215 + 28213 true false view2 - 10289 - 125238 + 4711 + 33696 0 - 114725 + 56993 26428 - 142939 + 85205 0 1 false @@ -82,7 +82,7 @@ false 0 - 20831877 + 20924875 false true @@ -2527,7 +2527,10 @@ - + + + + @@ -2535,20 +2538,20 @@ - + - + - + @@ -2556,88 +2559,97 @@ - + - + - + - + - + - + - + - + + + + - + - + - + - + - + - + - + - - - + + + - + - + - + - + - + - + - + + + + + + + @@ -2645,7 +2657,7 @@ - + @@ -2653,7 +2665,7 @@ - + @@ -2661,7 +2673,7 @@ - + @@ -2669,36 +2681,36 @@ - + - + - + - + - + - - + + - + - + @@ -2718,7 +2730,7 @@ - + @@ -2738,7 +2750,7 @@ - + @@ -2756,7 +2768,7 @@ - + @@ -2776,7 +2788,7 @@ - + @@ -2794,7 +2806,7 @@ - + @@ -2814,7 +2826,7 @@ - + @@ -2834,7 +2846,7 @@ - + @@ -2852,7 +2864,7 @@ - + @@ -3424,6 +3436,9 @@ + + + @@ -4415,7 +4430,7 @@ - 33 + 2 @@ -5614,83 +5629,84 @@ Table of Contents - 1.Background4 - 1.1.Author4 - 1.2.Help!4 - 1.3.License4 - 1.4.Document history4 - 2.HTTP today6 - 2.1.HTTP 1.1 is huge6 - 2.2.A world of options6 - 2.3.Inadequate use of TCP6 - 2.4.Transfer sizes and number of objects7 - 2.5.Latency kills7 - 2.6.Head of line blocking8 - 3.Things done to overcome latency pains9 - 3.1.Spriting9 - 3.2.Inlining9 - 3.3.Concatenation9 - 3.4.Sharding10 - 4.Updating HTTP11 - 4.1.IETF and the HTTPbis working group11 - 4.1.1.The “bis” part of the name11 - 4.2.http2 started from SPDY12 - 5.http2 concepts13 - 5.1.http2 for existing URI schemes13 - 5.2.http2 for https://14 - 5.3.http2 negotiation over TLS14 - 5.4.http2 for http://14 - 6.The http2 protocol16 - 6.1.Binary16 - 6.2.The binary format16 - 6.3.Multiplexed streams17 - 6.4.Priorities and dependencies18 - 6.5.Header compression18 - 6.5.1.Compression is a tricky subject18 - 6.6.Reset - change your mind19 - 6.7.Server push19 - 6.8.Flow Control19 - 7.Extensions20 - 7.1.Alternative Services20 - 7.1.1.Opportunistic TLS20 - 7.2.Blocked20 - 8.A http2 world22 - 8.1.How will http2 affect ordinary humans?22 - 8.2.How 
will http2 affect web development?22 - 8.3.http2 implementations23 - 8.3.1.Missing implementations23 - 8.4.Common critiques of http223 - 8.4.1.“The protocol is designed or made by Google”24 - 8.4.2.“The protocol is only useful for browsers”24 - 8.4.3.“The protocol is only useful for big sites”24 - 8.4.4.“Its use of TLS makes it slower”24 - 8.4.5.“Not being ASCII is a deal-breaker”25 - 8.4.6.“It isn't any faster than HTTP/1.1”25 - 8.4.7.“It has layering violations”25 - 8.4.8.“It doesn't fix several HTTP/1.1 shortcomings”25 - 8.5.Will http2 become widely deployed?26 - 9.http2 in Firefox27 - 9.1.First, make sure it is enabled27 - 9.2.TLS-only27 - 9.3.Transparent!27 - 9.4.Visualize HTTP/2 use28 - 10.http2 in Chromium29 - 10.1.First, make sure it is enabled29 - 10.2.TLS-only29 - 10.3.Visualize HTTP/2 use29 - 11.http2 in curl30 - 11.1.HTTP 1.x look-alike30 - 11.2.Plain text, insecure30 - 11.3.TLS and what libraries30 - 11.4.Command line use30 - 11.5.libcurl options30 - 12.After http231 - 12.1.QUIC31 - 13.Further reading32 - 14.Thanks33 + 1.Background4 + 1.1.Author4 + 1.2.Help!4 + 1.3.License4 + 1.4.Document history4 + 2.HTTP today6 + 2.1.HTTP 1.1 is huge6 + 2.2.A world of options6 + 2.3.Inadequate use of TCP6 + 2.4.Transfer sizes and number of objects7 + 2.5.Latency kills7 + 2.6.Head of line blocking8 + 3.Things done to overcome latency pains9 + 3.1.Spriting9 + 3.2.Inlining9 + 3.3.Concatenation9 + 3.4.Sharding10 + 4.Updating HTTP11 + 4.1.IETF and the HTTPbis working group11 + 4.1.1.The “bis” part of the name11 + 4.2.http2 started from SPDY12 + 5.http2 concepts13 + 5.1.http2 for existing URI schemes13 + 5.2.http2 for https://14 + 5.3.http2 negotiation over TLS14 + 5.4.http2 for http://14 + 6.The http2 protocol16 + 6.1.Binary16 + 6.2.The binary format16 + 6.3.Multiplexed streams17 + 6.4.Priorities and dependencies18 + 6.5.Header compression18 + 6.5.1.Compression is a tricky subject18 + 6.6.Reset - change your mind19 + 6.7.Server push19 + 6.8.Flow Control19 + 7.Extensions20 + 7.1.Alternative Services20 + 7.1.1.Opportunistic TLS20 + 7.2.Blocked20 + 8.A http2 world22 + 8.1.How will http2 affect ordinary humans?22 + 8.2.How will http2 affect web development?22 + 8.3.http2 implementations23 + 8.3.1.Missing implementations23 + 8.4.Common critiques of http223 + 8.4.1.“The protocol is designed or made by Google”24 + 8.4.2.“The protocol is only useful for browsers”24 + 8.4.3.“The protocol is only useful for big sites”24 + 8.4.4.“Its use of TLS makes it slower”24 + 8.4.5.“Not being ASCII is a deal-breaker”25 + 8.4.6.“It isn't any faster than HTTP/1.1”25 + 8.4.7.“It has layering violations”25 + 8.4.8.“It doesn't fix several HTTP/1.1 shortcomings”25 + 8.5.Will http2 become widely deployed?26 + 9.http2 in Firefox27 + 9.1.First, make sure it is enabled27 + 9.2.TLS-only27 + 9.3.Transparent!27 + 9.4.Visualize http2 use28 + 10.http2 in Chromium29 + 10.1.First, make sure it is enabled29 + 10.2.TLS-only29 + 10.3.Visualize HTTP/2 use29 + 10.4.QUIC29 + 11.http2 in curl30 + 11.1.HTTP 1.x look-alike30 + 11.2.Plain text, insecure30 + 11.3.TLS and what libraries30 + 11.4.Command line use30 + 11.5.libcurl options30 + 12.After http231 + 12.1.QUIC31 + 13.Further reading32 + 14.Thanks33 - + Background @@ -5698,14 +5714,14 @@ This is a document describing http2 from a technical and protocol level. It started out as a presentation I did in Stockholm in April 2014 that was then converted and extended into a full-blown document with all details and proper explanations. 
RFC 7540 is the official name of the final http2 specification and it was published on May 15th 2015: http://www.rfc-editor.org/rfc/rfc7540.txt - All and any errors in this document are my own and the results of my shortcomings. Please point them out to me and I might do updates with corrections. + All and any errors in this document are my own and the results of my shortcomings. Please point them out to me and I might do updates with corrections. In this document I've tried to consistently use the word “http2” to describe the new protocol while in pure technical terms, the proper name is HTTP/2. I made this choice for the sake of readability and to get a better flow in the language. - This is document version 1.12, published on May 18, 2015. + This is document version 1.12, published on May 19, 2015. - Author + Author @@ -5715,22 +5731,22 @@ Twitter:@bagder Web:daniel.haxx.se Blog:daniel.haxx.se/blog - + - Help! + Help! If you find mistakes, omissions, errors or blatant lies in this document, please send me a refreshed version of the affected paragraph and I'll make amended versions. I will give proper credits to everyone who helps out! I hope to make this document better over time. This document is available at http://daniel.haxx.se/http2 - + - + iVBORw0KGgoAAAANSUhEUgAAAGQAAABkCAYAAABw4pVUAAAOrUlEQVR4nO2db6hO2R7Hf3Up L5RTvDiK5rxwu5TbHCFuyBG6pqE5U5QJOUIIIYQiR8hMRkNG3JAjJoQcIYQccRvTHDkyt/GC OkLXLbcodb3w4u7P2s8+s8/jefb6rf3nec6/b/1mnHP23ms967vX+v1d6+klnQcVntR4MtKT @@ -5811,11 +5827,11 @@ This document is licensed under the Creative Commons Attribution 4.0 license: http://creativecommons.org/licenses/by/4.0/ - + - Document history + Document history @@ -5824,113 +5840,113 @@ Version 1.12: - 1.1: HTTP/2 is now in an official RFC + 1.1: HTTP/2 is now in an official RFC - 6.5.1: link to the HPACK RFC + 6.5.1: link to the HPACK RFC - 9.1: mention the Firefox 36+ config switch for http2 + 9.1: mention the Firefox 36+ config switch for http2 - 12.1: Added section about QUIC + 12.1: Added section about QUIC Version 1.11: - + - Lots of language improvements mostly pointed out by friendly contributors + Lots of language improvements mostly pointed out by friendly contributors - 8.3.1: mention nginx and Apache httpd specific acitivities + 8.3.1: mention nginx and Apache httpd specific acitivities Version 1.10: - 1: the protocol has been “okayed” + 1: the protocol has been “okayed” - 4.1: refreshed the wording since 2014 is last year + 4.1: refreshed the wording since 2014 is last year - front: added image and call it “http2 explained” there, fixed link + front: added image and call it “http2 explained” there, fixed link - 1.4: added document history section + 1.4: added document history section - many spelling and grammar mistakes corrected + many spelling and grammar mistakes corrected - 14: added thanks to bug reporters + 14: added thanks to bug reporters - 2.4: (better) labels for the HTTP growth graph + 2.4: (better) labels for the HTTP growth graph - 6.3: corrected the wagon order in the multiplexed train + 6.3: corrected the wagon order in the multiplexed train - 6.5.1: HPACK draft-12 + 6.5.1: HPACK draft-12 Version 1.9: February 11, 2015 - Updated to HTTP/2 draft-17 and HPACK draft-11 + Updated to HTTP/2 draft-17 and HPACK draft-11 - Added section "10. http2 in Chromium" (== one page longer now) + Added section "10. 
http2 in Chromium" (== one page longer now) - Lots of spell fixes + Lots of spell fixes - At 30 implementations now + At 30 implementations now - 8.5: added some current usage numbers + 8.5: added some current usage numbers - 8.3: mention internet explorer too + 8.3: mention internet explorer too - 8.3.1 "missing implementations" added + 8.3.1 "missing implementations" added - 8.4.3: mention that TLS also increases success rate + 8.4.3: mention that TLS also increases success rate Version 1.8: January 15th, 2015 - Compressed the images better, leading to a much smaller PDF + Compressed the images better, leading to a much smaller PDF - Updated to draft-16 and hpack-10 + Updated to draft-16 and hpack-10 - Replaced several images + Replaced several images - Linkified many URLs + Linkified many URLs - Added a few questions in 8.4 + Added a few questions in 8.4 - Mentions IETF Last Call + Mentions IETF Last Call - + HTTP today HTTP 1.1 has turned into a protocol used for virtually everything on the Internet. Huge investments have been done on protocols and infrastructure that take advantage of this. This is taken to the extent that it is often easier today to make things run on top of HTTP rather than building something new on its own. - + @@ -5940,7 +5956,7 @@ When HTTP was created and thrown out into the world it was probably perceived as a rather simple and straightforward protocol, but time has proved that to be false. HTTP 1.0 in RFC 1945 is a 60 page specification released in 1996. RFC 2616 that describes HTTP 1.1 was released only three years later in 1999 and had grown significantly to 176 pages. Yet, when we within IETF worked on the update to that spec, it was split up and converted into six documents, with a much larger page count in total (resulting in RFC 7230 and family). By any count, HTTP 1.1 is big and includes a myriad of details, subtleties and not the least a lot of optional parts. - + @@ -5951,7 +5967,7 @@ HTTP 1.1's nature of having lots of tiny details and options available for later extensions have grown a software ecosystem where almost no implementations ever implement everything – and it isn't even really possible to exactly tell what “everything” is. This has lead to a situation where features that were initially little used saw very few implementations and those who did implement the features then saw very little use of them. Then later on, it caused an interoperability problem when clients and servers started to increase the use of such features. HTTP Pipelining is a primary example of such a feature. - + @@ -5964,7 +5980,7 @@ Other attempts that have been going on in parallel over the years have also confirmed that TCP is not that easy to replace and thus we keep working on improving both TCP and the protocols on top of it. Simply put, TCP can be utilized better to avoid pauses or moments in time that could have been used to send or receive more data. The following sections will highlight some of these shortcomings. - + @@ -5975,46 +5991,46 @@ When looking at the trend for some of the most popular sites on the web today and what it takes to download their front pages, a clear pattern emerges. Over the years the amount of data that needs to be retrieved has gradually risen up to and above 1.9MB . What is more important in this context is that on average over a hundred individual resources are required to display each page. As the graph below shows, the trend has been going on for a while and there is little to no indication that it'll change anytime soon. 
It shows the growth of the total transfer size (in green) and the total number of requests used on average (in red) to serve the most popular web sites in the world, and how they have changed over the last four years. - + - + - + - 1900K transfer size + 1900K transfer size - + - 100 objects + 100 objects - + - + - + - 77 objects + 77 objects - + - 725Ktransfer size + 725Ktransfer size - + - 2011 - 2015 + 2011 - 2015 data from httparchive.org - + @@ -6025,7 +6041,7 @@ - + iVBORw0KGgoAAAANSUhEUgAAAcIAAAFACAIAAACgAuvBAAAwwUlEQVR4nO3de1wU9f4/8OUi olRHPSjgPe7rJUgTlZMgZIqdoNQAXUPEUkENgcUL4qUUzBuGJzxFVmLGZuTlaxqinsCDJ7S+ @@ -6268,7 +6284,7 @@ HTTP 1.1 is very latency sensitive, partly because HTTP Pipelining is still riddled with enough problems to remain switched off to a large percentage of users. While we've seen a great increase in available bandwidth to people over the last few years, we have not seen the same level of improvements in reducing latency. High latency links, like many of the current mobile technologies, make it really hard to get a good and fast web experience even if you have a really high bandwidth connection. Another use case that really needs low latency is certain kinds of video, like video conferencing, gaming and similar where there's not just a pre-generated stream to send out. - + @@ -6278,7 +6294,7 @@ HTTP Pipelining is a way to send another request while waiting for the response to a previous request. It is very similar to queuing at a counter at the bank or in a super market. You just don't know if the person in front of you is a quick customer or that annoying one that will take forever before he/she is done: head of line blocking. - + iVBORw0KGgoAAAANSUhEUgAACiwAAAhPCAYAAABBpDEWADoFt0lEQVR4nJz9i5ItO3JdC1LS llRF0fTT95dFVlGv7kbZHcdGDU6P3GyYpa1c8QD8Od3hQMT69f/8P//P/+f/1/7hf//v//0P @@ -76707,14 +76723,14 @@ Even today, 2015, most desktop web browsers ship with HTTP pipelining disabled by default. Additional reading on this subject can be found for example in the Firefox bugzilla entry 2643541 https://bugzilla.mozilla.org/show_bug.cgi?id=264354. - - + + Things done to overcome latency pains As always when faced with problems, people gather to find workarounds. Some of the workarounds are clever and useful, some of them are just awful kludges. - + @@ -76723,7 +76739,7 @@ - + iVBORw0KGgoAAAANSUhEUgAAAyAAAAIUCAYAAADi7EhnAAoerklEQVR4nNT9B5wc2XUfCv8r d+6ePINBBhZYbM7MWRJFSqRkBSpYkknLcn5+T0/fs579/Rxkf7a/J1mWbcmSLMvKpihSFCnm @@ -89014,7 +89030,7 @@ Spriting is the term often used to describe when you put a lot of small images together into a single large image. Then you use javascript or CSS to “cut out” pieces of that big image to show smaller individual ones. A site would use this trick for speed. Getting a single big image is much faster in HTTP 1.1 than getting a 100 smaller individual ones. Of course this has its downsides for the pages of the site that only want to show one or two of the small pictures and similar. It also makes all pictures get evicted from the cache at the same time instead of possibly letting the most commonly used ones remain. - + @@ -89024,18 +89040,18 @@ Inlining is another trick to avoid sending individual images, and this is done by using data: URLs embedded in the CSS file. This has similar benefits and drawbacks as the spriting case. 
- + - - .icon1 { - background: url(data:image/png;base64,<data>) no-repeat; - } - - .icon2 { - background: url(data:image/png;base64,<data>) no-repeat; - } + + .icon1 { + background: url(data:image/png;base64,<data>) no-repeat; + } + + .icon2 { + background: url(data:image/png;base64,<data>) no-repeat; + } Concatenation @@ -89044,7 +89060,7 @@ A big site can end up with a lot of different javascript files. Front-end tools will help developers merge everyone of them into a single huge lump so that the browser will get a single big one instead of dozens of smaller files. Too much data is sent when only little is needed. Too much data needs to be reloaded when a change is needed. This practice is of course mostly an inconvenience to the developers involved. - + @@ -89057,7 +89073,7 @@ Initially the HTTP 1.1 specification stated that a client was allowed to use maximum of two TCP connections for each host. So, in order to not violate the spec clever sites simply invented new host names and – voilá - you could get more connections to your site and decreased page load times. Over time, that limitation was removed and today clients easily use 6-8 connections per host name but they still have a limit so sites continue to use this technique to bump the number of connections. As the number of objects are ever increasing – as I showed before – the large number of connections are then used just to make sure HTTP performs well and makes your site fast. It is not unusual for sites to use well over 50 or even up to and beyond 100 connections now for a single site using this technique. Recent stats from httparchive.org show that the top 300K URLs in the world need on average 38(!) TCP connections to display the site, and the trend says this is still increasing slowly over time. Another reason is also to put images or similar resources on a separate host name that doesn't use any cookies, as the size of cookies these days can be quite significant. By using cookie-free image hosts you can sometimes increase performance simply by allowing much smaller HTTP requests! - + iVBORw0KGgoAAAANSUhEUgAAA4sAAAKOCAIAAACAyxszAAKL5UlEQVR4nOydB1zTThvH073Y Q1yAMgT33rgniuBWUERQREBB9nArKiDiABVBFAFxIuIWcf/d63VvQBREmWUUOvMmFGstbSlQ @@ -92155,7 +92171,7 @@ The picture below shows how a packet trace looks like when browsing one of Sweden's top web sites and how requests are distributed over several host names. - + Updating HTTP @@ -92163,22 +92179,22 @@ Wouldn't it be nice to make an improved protocol? It would include... - Make a protocol that's less RTT sensitive + Make a protocol that's less RTT sensitive - Fix pipelining and the head of line blocking problem + Fix pipelining and the head of line blocking problem - Stop the need for and desire to keep increasing the number of connections to each host + Stop the need for and desire to keep increasing the number of connections to each host - Keep all existing interfaces, all content, the URI formats and schemes + Keep all existing interfaces, all content, the URI formats and schemes - This would be made within the IETF's HTTPbis working group + This would be made within the IETF's HTTPbis working group - + @@ -92192,7 +92208,7 @@ The HTTPbis working group (see later for an explanation of the name) was formed during the summer of 2007 and tasked with creating an update of the HTTP 1.1 specification. Within this group the discussions about a next-version HTTP really started during late 2012. The HTTP 1.1 updating work was completed early 2014 and resulted in the RFC 7320 series. 
The supposedly final inter-op meeting for the HTTPbis WG was held in New York City in the beginning of June 2014. The remaining discussions and the IETF procedures to actually get an official RFC out would prove to continue over into the following year. Some of the bigger players in the HTTP field have been missing from the working group discussions and meetings. I don't want to mention any particular company or product names here, but clearly some actors on the Internet today seem to be confident that IETF will do good without these companies being involved... - + @@ -92207,7 +92223,7 @@ The group is named HTTPbis where the “bis” part comes from the Latin adverb for "two"2 http://en.wiktionary.org/wiki/bis#Latin. Bis is commonly used as a suffix or part of the name within the IETF for an update or the second take on a spec. Like in this case for HTTP 1.1. - + @@ -92220,7 +92236,7 @@ http://en.wikipedia.org/wiki/SPDY is a protocol that was developed and spearheaded by Google. They certainly developed it in the open and invited everyone to participate but it was obvious that they benefited by being in control over both a popular browser implementation and a significant server population with well-used services. When the HTTPbis group decided it was time to start working on http2, SPDY had already proven that it was a working concept. It had shown it was possible to deploy on the Internet and there were numbers published that proved how it performed. The http2 work then subsequently started off from the SPDY/3 draft that was basically made into the http2 draft-00 with a little search and replace. - + http2 concepts @@ -92229,7 +92245,7 @@ They were actually quite strict and put quite a few restraints on the team's ability to innovate. - + /9j/4AAQSkZJRgABAQEASABIAAD/4SRmRXhpZgAASUkqAAgAAAAJAA8BAgAGAAAAegAAABAB AgAaAAAAgAAAABIBAwABAAAAAQAAABoBBQABAAAAmgAAABsBBQABAAAAogAAACgBAwABAAAA @@ -94554,22 +94570,22 @@ It has to maintain HTTP paradigms. It is still a protocol where the client sends requests to the server over TCP. - http:// and https:// URLs cannot be changed. There can be no new scheme for this. The amount of content using such URLs is too big to expect them to change. + http:// and https:// URLs cannot be changed. There can be no new scheme for this. The amount of content using such URLs is too big to expect them to change. - HTTP1 servers and clients will be around for decades, we need to be able to proxy them to http2 servers. + HTTP1 servers and clients will be around for decades, we need to be able to proxy them to http2 servers. - Subsequently, proxies must be able to map http2 features to HTTP 1.1 clients 1:1. + Subsequently, proxies must be able to map http2 features to HTTP 1.1 clients 1:1. - Remove or reduce optional parts from the protocol. This wasn't really a requirement but more a mantra coming over from SPDY and the Google team. By making sure everything is mandatory there's no way you can not implement anything now and fall into a trap later on. + Remove or reduce optional parts from the protocol. This wasn't really a requirement but more a mantra coming over from SPDY and the Google team. By making sure everything is mandatory there's no way you can not implement anything now and fall into a trap later on. - No more minor version. It was decided that clients and servers are either compatible with http2 or they are not. If there comes a need to extend the protocol or modify things, then http3 will be born. There are no more minor versions in http2. 
+ No more minor version. It was decided that clients and servers are either compatible with http2 or they are not. If there comes a need to extend the protocol or modify things, then http3 will be born. There are no more minor versions in http2. - + @@ -94581,7 +94597,7 @@ As mentioned already the existing URI schemes cannot be modified so http2 has to be done using the existing ones. Since they are used for HTTP 1.x today, we obviously need to have a way to upgrade the protocol to http2 or otherwise ask the server to use http2 instead of older protocols. HTTP 1.1 has a defined way how to do this, namely the Upgrade: header, which allows the server to send back a response using the new protocol when getting such a request over the old protocol. At a cost of a round-trip. That round-trip penalty was not something the SPDY team would accept, and as they also only implemented SPDY over TLS they developed a new TLS extension which is used to shortcut the negotiation quite significantly. Using this extension, called NPN for Next Protocol Negotiation, the server tells the client which protocols it knows and the client can then proceed and use the protocol it prefers. - + @@ -94590,8 +94606,8 @@ - A lot of focus of http2 has been to make it behave properly over TLS. SPDY is only done over TLS and there's been a strong push for making TLS mandatory for http2 but it didn't get consensus and http2 will ship with TLS as optional. However, two prominent implementers have stated clearly that they will only implement http2 over TLS: the Mozilla Firefox lead and the Google Chrome lead. Two of the leading web browsers of today. - + A lot of focus of http2 has been to make it behave properly over TLS. SPDY is only done over TLS and there's been a strong push for making TLS mandatory for http2 but it didn't get consensus and so http2 shipped with TLS as optional. However, two prominent implementers have stated clearly that they will only implement http2 over TLS: the Mozilla Firefox lead and the Google Chrome lead. Two of the leading web browsers of today. + iVBORw0KGgoAAAANSUhEUgAAAfQAAAG/CAYAAACuSSUlAAAACXBIWXMAAAsTAAALEwEAmpwY AAWEEklEQVR4nOy9CbRk2VUduO+9b46IH3/O/DlWVmZNqlIJEBJiWEgLMO0GGgFtsNWG9jI2 @@ -101294,8 +101310,8 @@ Reasons for choosing TLS-only include respect for user's privacy and early measurements showing that new protocols have a higher success rate when done with TLS. This because of the widespread assumption that anything that goes over port 80 is HTTP 1.1 makes some middle-boxes interfere and destroy traffic when instead other protocols are communicated there. The subject about mandatory TLS has caused much hand-waving and agitated voices in mailing lists and meetings – is it good or is it evil? It is an infected subject – be aware of this when you throw this question in the face of a HTTPbis participant! - Similarly, there's been a fierce and long-going debate on whether http2 should dictate a list of ciphers that should be mandatory when using TLS, or if it perhaps should blacklist a set or if it shouldn't require anything at all from the TLS “layer” but leave that to the TLS WG. - + Similarly, there's been a fierce and long-going debate on whether http2 should dictate a list of ciphers that should be mandatory when using TLS, or if it perhaps should blacklist a set or if it shouldn't require anything at all from the TLS “layer” but leave that to the TLS WG. The spec ended up specifying that TLS should be at least version 1.2 and there are cipher suite restrictions. 
+ @@ -101307,7 +101323,7 @@ Next Protocol Negotiation (NPN), is the protocol used to negotiate SPDY with TLS servers. As it wasn't a proper standard, it was taken through the IETF and ALPN came out of that: Application Layer Protocol Negotiation. ALPN is what is being promoted to be used for http2, while SPDY clients and servers still use NPN. The fact that NPN existed first and ALPN has taken a while to go through standardization has lead to many early http2 clients and http2 servers implementing and using both these extensions when negotiating http2. Also, as NPN is what's used for SPDY and many servers offer both SPDY and http2 so supporting both NPN and ALPN on those servers make perfect sense. ALPN primarily differs from NPN in who decides what protocol to speak. With ALPN the client tells the server a list of protocols in its order of preference and the server picks the one it wants, while with NPN the client makes that final choice. - + @@ -101316,16 +101332,16 @@ - As mentioned briefly previously, for plain-text HTTP 1.1 the way to negotiate http2 is by asking the server with an Upgrade: header. If the server speaks http2 it responds with a “101 Switching” status and from then on it speaks http2 on that connection. You of course realize that this upgrade procedure costs a full network round-trip, but the upside is that a http2 connection should be possible to keep alive and re-use to a larger extent than HTTP1 connections generally are. + As mentioned briefly previously, for plain-text HTTP 1.1 the way to negotiate http2 is by asking the server with an Upgrade: header. If the server speaks http2 it responds with a “101 Switching” status and from then on it speaks http2 on that connection. You of course realize that this upgrade procedure costs a full network round-trip, but the upside is that a http2 connection should be possible to keep alive and re-use to a larger extent than HTTP1 connections generally are. While some browsers' spokespersons have stated they will not implement this means of speaking http2, the Internet Explorer team has expressed that they will, and curl already supports this. - + The http2 protocol Enough about the background, the history and politics behind what took us here. Let's dive into the specifics of the protocol. The bits and the concepts that create http2. - + @@ -101340,7 +101356,7 @@ Also, it makes it much easier to separate the actual protocol parts from the framing - which in HTTP1 is confusingly intermixed. The facts that the protocol features compression and often will run over TLS also diminish the value of text since you won't see text over the wire anyway. We simply have to get used to the idea to use a Wireshark inspector or similar to figure out exactly what's going on at protocol level in http2. Debugging of this protocol will instead probably have to be done with tools like curl or by analyzing the network stream with Wireshark's http2 dissector and similar. - + @@ -102877,7 +102893,7 @@ http2 sends binary frames. There are different frame types that can be sent and they all have the same setup: Type, Length, Flags, Stream Identifier and frame payload. There are ten different frames defined in the http2 spec and the two perhaps most fundamental ones that map HTTP 1.1 features are DATA and HEADERS. I'll describe some of the frames in closer detail further on. 
- + @@ -102888,7 +102904,7 @@ The Stream Identifier mentioned in the previous section describing the binary frame format, makes each frame sent over http2 get associated with a “stream”. A stream is a logical association. An independent, bi-directional sequence of frames exchanged between the client and server within an http2 connection. A single http2 connection can contain multiple concurrently open streams, with either endpoint interleaving frames from multiple streams. Streams can be established and used unilaterally or shared by either the client or server and they can be closed by either endpoint. The order in which frames are sent within a stream is significant. Recipients process frames in the order they are received. - + iVBORw0KGgoAAAANSUhEUgAAA/QAAAHcCAYAAAB4ReEQAAAABmJLR0QA/wD/AP+gvaeTAAAA CXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH3gIHFiEKdxafGQAAIABJREFUeNrsvdeTJMeV @@ -109927,7 +109943,7 @@ - + iVBORw0KGgoAAAANSUhEUgAAB3oAAAJ8CAYAAAAYpKBTAAAABmJLR0QA/wD/AP+gvaeTAAAA CXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH3gIHFiEUjRmiegAAIABJREFUeNrsvVmPJFt2 @@ -145117,7 +145133,7 @@ In http2 we will see tens and hundreds of simultaneous streams. The cost of creating a new one is very low. - + @@ -145129,7 +145145,7 @@ Each stream also has a priority, which is used to tell the peer which streams to consider most important. The exact details on how priorities work in the protocols have changed several times and are still being debated. The point is however that a client can specify which streams that are most important and there's a dependency parameter so that one stream can be made dependent on another. The priorities can be changed dynamically in run-time, which should enable browsers to make sure that when users scroll down a page full of images it can specify which images that are most important, or if you switch tabs it can prioritize a new set of streams then suddenly come into focus. - + @@ -145140,7 +145156,7 @@ HTTP is a state-less protocol. In short that means that every request needs to bring with it as much details as the server needs to serve that request, without the server having to store a lot of info and meta-data from previous requests. Since http2 doesn't change any such paradigms, it too has to do this. This makes HTTP repetitive. When a client asks for many resources from the same server, like images from a web page, there will be a large series of requests that all look almost identical. A series of almost identical something begs for compression. - + iVBORw0KGgoAAAANSUhEUgAAA84AAALbCAYAAAArednEAAAACXBIWXMAAAsTAAALEwEAmpwY AAZW/0lEQVR4nOy9x48ke3bv942M9N5Vlve2vbnejTfEE8EhBeIBgrTRhgQE6G8Rd9RCG1Ei @@ -152843,7 +152859,7 @@ While the number of objects per web page increases as I've mentioned earlier, the use of cookies and the size of the requests have also kept growing over time. Cookies also need to be included in all requests, mostly the same over many requests. The HTTP 1.1 request sizes have actually gotten so large over time so they sometimes even end up larger than the initial TCP window, which makes them very slow to send as they need a full round-trip to get an ACK back from the server before the full request has been sent. Another argument for compression. - + @@ -152863,7 +152879,7 @@ Enter HPACK6 http://www.rfc-editor.org/rfc/rfc7541.txt, Header Compression for HTTP/2, which – as the name suitably suggests - is a compression format especially crafted for http2 headers and it is strictly speaking being specified in a separate internet draft. 
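As a small illustration of how far this compression can go, here is a sketch of my own (assuming the HPACK static table as published in RFC 7541) of the simplest HPACK case: a header field that already exists in the static table is transmitted as a single byte, its index with the high bit set.

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical helper: encode an "indexed header field" referring to a
     static table entry. Only indexes 1-126 fit the one-byte 7-bit prefix;
     larger indexes need the prefix-integer continuation scheme. */
  static uint8_t hpack_indexed_field(unsigned static_index)
  {
    return (uint8_t)(0x80 | static_index);
  }

  int main(void)
  {
    printf(":method: GET   -> 0x%02x\n", hpack_indexed_field(2)); /* 0x82 */
    printf(":scheme: https -> 0x%02x\n", hpack_indexed_field(7)); /* 0x87 */
    printf(":path: /       -> 0x%02x\n", hpack_indexed_field(4)); /* 0x84 */
    return 0;
  }

Header fields not covered by the static table can be added to a per-connection dynamic table, so something like a large cookie that is sent once can be referenced by a short index on every following request.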
The new format, together with other counter-measures such as a bit that asks intermediaries to not compress a specific header and optional padding of frames should make it harder to exploit this compression. In the words of Roberto Peon (one of the creators of HPACK) “HPACK was designed to make it difficult for a conforming implementation to leak information, to make encoding and decoding very fast/cheap, to provide for receiver control over compression context size, to allow for proxy re-indexing (i.e. shared state between frontend and backend within a proxy), and for quick comparisons of huffman-encoded strings”. - + @@ -152874,7 +152890,7 @@ One of the drawbacks with HTTP 1.1 is that when a HTTP message has been sent off with a Content-Length of a certain size, you can't easily just stop it. Sure you can often (but not always) disconnect the TCP connection but that then comes as the price of having to negotiate a new TCP handshake again. A better solution would be to just stop the message and start anew. This can be done with http2's RST_STREAM frame which will help in preventing wasted bandwidth and avoid having to tear down any connection. - + @@ -152885,18 +152901,18 @@ This is the feature also known as “cache push”. The idea here is that if the client asks for resource X the server may know that the client then also most likely want resource Z and sends that to the client without it asking for it. It helps the client to put Z into its cache so that it'll be there when it wants it. Server push is something a client explicitly must allow the server to do and even if a client does that, it can at its own choice swiftly terminate a pushed stream with RST_STREAM should it not want a particular one. - + - Flow Control + Flow Control Each individual stream over http2 has its own advertised flow window that the other end is allowed to send data for. If you happen to know how SSH works, this is very similar in style and spirit. For every stream both ends have to tell the peer that it has more room to fit incoming data in, and the other end is only allowed to send that much data until the window is extended. Only DATA frames are flow controlled. - + Extensions @@ -152904,7 +152920,7 @@ The protocol mandates that a receiver must read and ignore all unknown frames using unknown frame types. Two parties can thus negotiate use of new frame types on a hop-by-hop basis, and those frames aren't allowed to change state and they will not be flow controlled. The subject of whether http2 should allow extensions at all was debated at length during the time the protocol was developed with opinions swinging for and against. After draft-12 the pendulum swept back one last time and extensions were allowed again. Extensions are then not part of the actual protocol but will be documented outside of the core protocol spec. Already at this point, there are two frame types that have been discussed for inclusion in the protocol that probably will be the first frames sent as extensions. I'll still describe them here just because of their popularity and previous state as “native” frames: - + @@ -152916,9 +152932,9 @@ With http2 getting adopted, there are reasons to suspect that TCP connections will be much lengthier and be kept alive much longer than HTTP 1.x connections have been. A client should be able to do a lot of what it wants with a single connection to each host/site and that single one could then potentially be open for quite some time. 
This will affect how HTTP load balancers work and there may come situations when a site wants to advertise and suggest that the client connects to another host. It could be for performance reasons but also if a site is being taken down for maintance and similar. The server will then send the Alt-Svc: header7 - http://tools.ietf.org/html/draft-ietf-httpbis-alt-svc-01 (or ALTSVC frame with http2) telling the client about an alternative service. Another route to the same content, using another service, host and port number. + http://tools.ietf.org/html/draft-ietf-httpbis-alt-svc-07 (or ALTSVC frame with http2) telling the client about an alternative service. Another route to the same content, using another service, host and port number. A client is then meant to attempt to connect to that service asynchronously and only use the alternative if that works fine. - + @@ -152933,7 +152949,7 @@ The Alt-Svc header allows a server that provides content over http://, to inform the client that the same content is also available over a TLS connection. This is a somewhat debated feature. Such a connection would do unauthenticated TLS and wouldn't be advertized as “secure” anywhere, wouldn't use any padlock in the UI or in fact in no way tell the user that it isn't plain old HTTP, but this is still opportunistic TLS and some people are very firmly against this concept. - + @@ -152945,13 +152961,13 @@ A frame of this type is meant to be sent exactly once by a http2 party when it has data to send off but flow control forbids it to send any data. The idea being that if your implementation receives this frame you know your implementation has messed up something and/or you're getting less than perfect transfer speeds because of this. A quote from draft-12, before this frame was moved out to become an extension: “The BLOCKED frame is included in this draft version to facilitate experimentation. If the results of the experiment do not provide positive feedback, it could be removed” - + A http2 world So what will things look like when http2 gets adopted? Will it get adopted? - + @@ -152961,7 +152977,7 @@ http2 is not yet widely deployed nor used. We can't tell for sure exactly how things will turn out. We have seen how SPDY has been used and we can make some guesses and calculations based on that and other past and current experiments. - + iVBORw0KGgoAAAANSUhEUgAABQwAAAOGCAYAAACp4d2GAAAABmJLR0QAEQDZADLuTFpyAAAA CXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH3wEKFS8dFpb4oQAAABl0RVh0Q29tbWVudABD @@ -168596,7 +168612,7 @@ With priorities used properly on the streams, chances are much better that clients will actually get the important data before the less important data. All this taken together, I'd say that the chances are very good that this will lead to faster page loads and to more responsive web sites. Shortly put: a better web experience. How much faster and how much improvements we will see, I don't think we can say yet. First, the technology is still very early and then we haven't even started to see clients and servers trim implementations to really take advantage of all the powers this new protocol offers. - + @@ -168609,7 +168625,7 @@ Lots of those workarounds that tools and developers now use by default and without thinking, will probably hurt http2 performance or at least not really take advantage of http2's new super powers. Spriting and inlining should most likely not be done with http2. Sharding will probably be detrimental to http2 as it will probably benefit from using less connections. 
A problem here is of course that web sites and web developers need to develop and deploy for a world that in the short term at least, will have both HTTP1.1 and http2 clients as users and to get maximum performance for all users can be challenging without having to offer two different front-ends. For these reasons alone, I suspect there will be some time before we will see the full potential of http2 being reached. - + @@ -168618,7 +168634,7 @@ - + iVBORw0KGgoAAAANSUhEUgAABAAAAAQACAYAAAB/HSuDAARH3klEQVR4XuzY2U4TARiGYQY1 aFplKBrBJW4Rr88r8fo8ccO1uLSoNW2N40fyH8wRicRAaB6evPmnwAlHZL6m67o1XwAAALDa @@ -173822,7 +173838,7 @@ Trying to document specific implementations in a document such as this is of course completely futile and doomed to fail and only feel outdated within a really short period of time. Instead I'll explain the situation in broader terms and refer readers to the list of implementations8 https://github.com/http2/http2-spec/wiki/Implementations on the http2 web site. - + iVBORw0KGgoAAAANSUhEUgAABHMAAAOeCAYAAACAqFylAAAACXBIWXMAAC4lAAAuIwGu/Nxr AABfx0lEQVR4nOzdz3VTydY34Bp88+4M8HICkAHOAEaeSh0BvhGgjgA6AqSpJg0RICK4JgFd @@ -174284,7 +174300,7 @@ There was a large amount of implementations already early on, and the amount has increased over time during the http2 work. At the time of writing this there are over 30 implementations listed, and most of them implement the final version. - + iVBORw0KGgoAAAANSUhEUgAAB+oAAALsCAYAAADEaT75AAAACXBIWXMAACTpAAAk6QFQJOf4 AAdzHklEQVR4nOy9a6xtS3YeVGOutfa+957nve22fP3qdluOHFsC4puEAH6gdH4glB8I0Ypk @@ -183334,7 +183350,7 @@ Firefox has been the browser that's been on top of the bleeding edge drafts, Twitter has kept up and offered its services over http2. Google started during April 2014 to offer http2 support on a few test servers running their services and since May 2014 they offer http2 support in their development versions of Chrome. Microsoft has shown a tech preview with http2 support for their next Internet Explorer version. curl and libcurl support insecure http2 as well as the TLS based using on one out of several different TLS libraries. - + @@ -183351,7 +183367,7 @@ Nginx says “we plan to release versions of nginx and NGINX Plus by the end of 2015 that will include support for HTTP/2”9 http://nginx.com/blog/how-nginx-plans-to-support-http2/. There's an early version of a HTTP/2 module for Apache called mod_h210 https://icing.github.io/mod_h2/ claimed to be “Very alpha” - + @@ -183361,7 +183377,7 @@ During the development of this protocol the debate has been going back and forth and of course there is a certain amount of people who believe this protocol ended up completely wrong. I wanted to mention a few of the more common complaints and mention the arguments against them: - + @@ -183377,7 +183393,7 @@ It also has variations implying that the world gets even further dependent or controlled by Google by this. This isn't true. The protocol was developed within the IETF in the same manner that protocols have been developed for over 30 years. However, we all recognize and acknowledge Google's impressive work with SPDY that not only proved that it is possible to deploy a new protocol this way but also provided numbers illustrating what gains could be made. Google has publicly announced11 http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html that they will remove support for SPDY and NPN in Chrome in 2016 and they urge servers to migrate to HTTP/2 instead. - + @@ -183393,7 +183409,7 @@ This is sort of true. 
One of the primary drivers behind the http2 development is the fixing of HTTP pipelining. If your use case originally didn't have any need for pipelining then chances are http2 won't do a lot of good for you. It certainly isn't the only improvement in the protocol but a big one. As soon as services start realizing the full power and abilities the multiplexed streams over a single connection brings, I suspect we will see more application use of http2. Small REST APIs and simpler programmatic uses of HTTP 1.x may not find the step to http2 to offer very big benefits. But also, there should be very few downsides with http2 for most users. - + @@ -183407,7 +183423,7 @@ Not at all. The multiplexing capabilities will greatly help to improve the experience for high latency connections that smaller sites without wide geographical distributions often offer. Large sites are already very often faster and more distributed with shorter round-trip times to users. - + @@ -183427,7 +183443,7 @@ Many Internet users have expressed a preference for TLS to be used more widely and we should help to protect users' privacy. Experiments have also shown that by using TLS, there is a higher degree of success than when implementing new plain-text protocols over port 80 as there are just too many middle boxes out in the world that interfere with what they would think is HTTP 1.1 if it goes over port 80 and might look like HTTP at times. Finally, thanks to http2's multiplexed streams over a single connection, normal browser use cases still could end up doing substantially fewer TLS handshakes and thus perform faster than HTTPS would when still using HTTP 1.1. - + @@ -183442,7 +183458,7 @@ Yes, we like being able to see protocols in the clear since it makes debugging and tracing easier. But text based protocols are also more error prone and open up for much more parsing and parsing problems. If you really can't take a binary protocol, then you couldn't handle TLS and compression in HTTP 1.x either and its been there and used for a very long time. - + @@ -183460,7 +183476,7 @@ http://www.neotys.com/blog/performance-of-spdy-enabled-web-servers/ by Hervé Servy) and such experiments have been repeated with http2 as well. I'm looking forward to seeing more such tests and experiments getting published. A basic first test made by httpwatch.com15 http://blog.httpwatch.com/2015/01/16/a-simple-performance-comparison-of-https-spdy-and-http2/ might imply that HTTP/2 holds its promises. http2 is demonstrably faster in some scenarios, especially with high latency connections involving many objects and as shown in previous sections, the trend is heading towards even more objects and more data per site. - + @@ -183474,7 +183490,7 @@ Seriously, that's your argument? Layers are not holy untouchable pillars of a global religion and if we've crossed into a few gray areas when making http2 it has been in the interest of making a good and effective protocol within the given constraints. - + @@ -183488,7 +183504,7 @@ That's true. With the specific goal of maintaining HTTP/1.1 paradigms there were several old HTTP features that had to remain. Such as the common headers that also include the often dreaded cookies, authorization headers and more. But by the upside of maintaining these paradigms is that we got a protocol that is possible to deploy without an inconceivable amount of upgrade work that requires fundamental parts to be completely replaced or rewritten. Http2 is basically just a new framing layer. 
- + @@ -183506,14 +183522,14 @@ https://github.com/h2o/h2o is a new blazingly fast HTTP server with http2 support that shows potential. Some of the biggest proxy vendors, including HAProxy, Squid and Varnish have expressed their intentions to support http2. I think there is likely to be even more implementations popping up once the spec becomes a ratified RFC. - In late January 2015, after Firefox 35 had shipped with HTTP/2 enabled by default and Chrome 40 had it enabled for 2% of its users, Google reported early and not statistically safe numbers to me. They then saw HTTP/2 in roughly 5% of their global traffic. At the same time, Firefox 35 responses recorded HTTP/2 in about 9% of all responses. - + In late January 2015, after Firefox 35 had shipped with HTTP/2 enabled by default and Chrome 40 had it enabled for 2% of its users, Google reported early and not statistically safe numbers to me. They then saw HTTP/2 in roughly 5% of their global traffic. At the same time, Firefox 35 responses recorded HTTP/2 in about 9% of all responses. By early May, Google's number was up to 18% and Firefox used HTTP/2 in 10% of the requests. + http2 in Firefox Firefox has been tracking the drafts very closely and has provided http2 test implementations for many months. During the development of the http2 protocol, clients and servers have to agree on what draft version of the protocol they implement which makes it slightly annoying to run tests. Just be aware so that your client and server agree on what protocol draft they implement. - + @@ -183524,7 +183540,7 @@ In all Firefox versions since version 35, released January 13th 2015, http2 support is enabled by default. Enter 'about:config' in the address bar and search for the option named “network.http.spdy.enabled.http2draft”. Make sure it is set to true. Firefox 36 added another config switch named “network.http.spdy.enabled.http2which is set true by default. The latter one controls the “plain” http2 version while the first one enables and disables negotiation of http2-draft versions. Both are true by default since Firefox 36. - + @@ -183534,7 +183550,7 @@ Remember that Firefox only implements http2 over TLS. You will only ever see http2 in action with Firefox when going to https:// sites that offer http2 support. - + @@ -187534,27 +187550,27 @@ There is no UI element anywhere that tells that you're talking http2. You just can't tell that easily. One way to figure it out, is to enable “Web developer->Network” and check the response headers and see what you got back from the server. The response is then “HTTP/2.0” something and Firefox inserts its own header called “X-Firefox-Spdy:” as shown in the screenshot above. The headers you see in the Network tool when talking http2 have been converted from http2's binary format into the old-style HTTP 1.x look-alike headers. - + - Visualize HTTP/2 use + Visualize http2 use - There are Firefox plugins available that help visualize if a site is using HTTP/2. One of them is “SPDY Indicator”17 + There are Firefox plugins available that help visualize if a site is using http2. One of them is “SPDY Indicator”17 https://addons.mozilla.org/en-US/firefox/addon/spdy-indicator/. - + http2 in Chromium - The Chromium team has implemented HTTP/2 and provided support for it in the dev and beta channel for a long time. Starting with Chrome 40, released on January 27th 2015, http2 is enabled by default for a certain amount of users. The exact amount started off really small but is planned to increase gradually over time. 
- SPDY support will be removed. In a blog post, the project announced in February 201518 + The Chromium team has implemented http2 and provided support for it in the dev and beta channel for a long time. Starting with Chrome 40, released on January 27th 2015, http2 is enabled by default for a certain amount of users. The amount started off really small and then increased gradually over time. + SPDY support will eventually be removed. In a blog post, the project announced in February 201518 http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html: Chrome has supported SPDY since Chrome 6, but since most of the benefits are present in HTTP/2, it’s time to say goodbye. We plan to remove support for SPDY in early 2016” - + @@ -187564,7 +187580,7 @@ Enter “chrome://flags/#enable-spdy4" in your browser's address bar and click “enable” if it isn't already showing it as enabled. - + @@ -187574,7 +187590,7 @@ Remember that Chrome only implements http2 over TLS. You will only ever see http2 in action with Chrome when going to https:// sites that offer http2 support. - + @@ -187585,7 +187601,17 @@ There are Chrome plugins available that helps visualize if a site is using HTTP/2. One of them is “SPDY Indicator”19 https://chrome.google.com/webstore/detail/spdy-indicator/mpbpobfflnpcgagjijhmgnchggcjblin. - + + + + + QUIC + + + + + Chrome's current experiments with QUIC (see section 12.1) dilute the HTTP/2 numbers somewhat. + http2 in curl @@ -187594,7 +187620,7 @@ In the spirit of curl, we intend to support just about every aspect of http2 that we possibly can. curl is often used as a test tool and tinkerer's way to poke on web sites and we intend to keep that up for http2 as well. curl uses the separate library nghttp220 https://nghttp2.org/ for the http2 frame layer functionality. - + @@ -187604,7 +187630,7 @@ Internally, curl will convert incoming http2 headers to HTTP 1.x style headers and provide them to the user, so that they will appear very similar to existing HTTP. This allows for an easier transition for whatever is using curl and HTTP today. Similarly curl will convert outgoing headers in the same style. Give them to curl in HTTP 1.x style and it will convert them on the fly when talking to http2 servers. This also allows users to not have to bother or care very much with which particular HTTP version that is actually used on the wire. - + @@ -187614,7 +187640,7 @@ curl supports http2 over standard TCP via the Upgrade: header. If you do a HTTP request and ask for HTTP 2, curl will ask the server to update the connection to http2 if possible. - + @@ -187625,7 +187651,7 @@ curl supports a wide range of different TLS libraries for its TLS back-end, and that is still valid for http2 support. The challenge with TLS for http2's sake is the APLN support and to some extent NPN support. Build curl against modern versions of OpenSSL or NSS to get both ALPN and NPN support. Using GnuTLS or PolarSSL you will get ALPN support but not NPN. - + @@ -187635,7 +187661,7 @@ To tell curl to use http2, either plain text or over TLS, you use the --http2 option (that is “dash dash http2”). curl still defaults to HTTP/1.1 so the extra option is necessary when you want http2. - + @@ -187645,7 +187671,7 @@ Your application would use https:// or http:// URLs like normal, but you set curl_easy_setopt's CURLOPT_HTTP_VERSION option to CURL_HTTP_VERSION_2 to make libcurl attempt to use http2. It will then do a best effort and do http2 if it can, but otherwise continue to operate with HTTP 1.1. 
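As a concrete illustration of the libcurl option just described, here is a minimal sketch with error handling mostly trimmed; it assumes a libcurl built with http2 support and the URL is only an example.

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(!curl)
      return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://daniel.haxx.se/");
    /* ask for http2; libcurl falls back to HTTP/1.1 if it can't */
    curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2);

    CURLcode res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    return 0;
  }

The command-line equivalent is the --http2 flag described in the section above.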
- + After http2 @@ -187653,7 +187679,7 @@ A lot of tough decisions and compromises have been made for http2. With http2 getting deployed there is an established way to upgrade into other protocol versions that work which lays the foundation for doing more protocol revisions ahead. It also brings a notion and an infrastructure that can handle multiple different versions in parallel. Maybe we don't need to phase out the old entirely when we introduce new? http2 still has a lot of HTTP 1 “legacy” brought with it into the future because of the desire to keep it possible to proxy traffic back and forth between HTTP 1 and http2. Some of that legacy hampers further development and inventions. Perhaps http3 can drop some of them? What do you think is still lacking in http? - + @@ -187662,36 +187688,36 @@ - Googles QUIC21 + Googles QUIC21 https://www.chromium.org/quic (Quick UDP Internet Connections) protocol is a very interesting experiment, performed much in the same style and spirit as they did with SPDY. QUIC is a TCP + TLS + SPDY replacement implemented using UDP. - QUIC allows the creation of connections with much less latency, it solves packet loss to only block individual streams instead of all of them like it does for HTTP/2 and it makes connections possible to be done over different network interfaces easily - thus also covering areas MPTCP is meant to solve. - QUIC is so far only implemented by Google in Chrome and their server ends and that code is not easily re-used elsewhere, even if there's a libquic22 - https://github.com/devsisters/libquic effort trying exactly that. The specification is still vague and changes rapidly. They've mentioned taking it through standardization and IETF but it has not happened yet. - + QUIC allows the creation of connections with much less latency, it solves packet loss to only block individual streams instead of all of them like it does for HTTP/2 and it makes connections possible to be done over different network interfaces easily - thus also covering areas MPTCP is meant to solve. + QUIC is so far only implemented by Google in Chrome and their server ends and that code is not easily re-used elsewhere, even if there's a libquic22 + https://github.com/devsisters/libquic effort trying exactly that. The specification is still vague and changes rapidly. They've mentioned taking it through standardization and IETF but it has not happened yet. 
+ - Further reading + Further reading If you think this document was a bit light on content or technical details, here are additional resources to help you satisfy your curiosity: - The HTTPbis mailing list and its archives: http://lists.w3.org/Archives/Public/ietf-http-wg/ + The HTTPbis mailing list and its archives: http://lists.w3.org/Archives/Public/ietf-http-wg/ - The actual http2 specification drafts and associated documents from the HTTPbis group: http://datatracker.ietf.org/wg/httpbis/ + The actual http2 specification drafts and associated documents from the HTTPbis group: http://datatracker.ietf.org/wg/httpbis/ - Firefox http2 networking details: https://wiki.mozilla.org/Networking/http2 + Firefox http2 networking details: https://wiki.mozilla.org/Networking/http2 - curl http2 implementation details: http://curl.haxx.se/dev/readme-http2.html + curl http2 implementation details: http://curl.haxx.se/dev/readme-http2.html - The http2 web site: http://http2.github.io/ and perhaps in particular the FAQ: http://http2.github.io/faq/ - + The http2 web site: http://http2.github.io/ and perhaps in particular the FAQ: http://http2.github.io/faq/ + - + Thanks