Thank you! If you get through this wall of text, there's a beverage of your choice waiting for you at the bottom next time we meet ;)
Please use the github commenting functionality to provide feedback (if possible with quotes from the text) and maybe read other people's comments as well.
Feedback that I'm looking for:
Let out your inner pedant! Bad feedback is also feedback! I have thick skin! I'd rather go cry in a corner than make false claims!
QUIC and HTTP/3 : Too big to fail?!
The new QUIC and HTTP/3 protocols are coming and they are the bee's knees!
1. End-to-end encrypted UDP you say?
One of the big selling points of QUIC is its end-to-end encryption. Where in TCP much of the transport-specific information is out in the open and only the data is encrypted, QUIC encrypts almost everything and applies integrity protection (see Figure X). This leads to improved privacy and security and prevents middleboxes in the network from tampering with the protocol. This last aspect is one of the main reasons for the move to UDP: evolving TCP was too difficult in practice because of all the disparate implementations and parsers.
Network operators and the spin bit
The downside is that network operators now have much less to work with when trying to optimize and manage their network. They no longer know if a packet is an acknowledgment or a re-transmit, cannot self-terminate a connection, and have no other way of impacting congestion control/send rate than to drop packets. It is also more difficult to assess, for example, the round-trip-time (RTT) of a given connection (which, if rising, is often a sign of congestion or bufferbloat).
There has been much discussion about adding some of these signals back into a visible-on-the-wire part of the QUIC header (or using other means), but the end result is that just a single bit will be exposed for RTT measurement: the "spin" bit. The concept is that this bit will change value about once every round trip, allowing middleboxes to watch for the changes and estimate RTTs that way, see Figure Y (more bits could lead to added resolution etc., read this excellent paper). While this helps a bit, it still limits the operators considerably, especially with initial signals being that Chrome and Firefox will not support the spin bit. The only other option QUIC will support is "Explicit Congestion Notification", which uses flags at the IP-level to signal congestion.
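To make the spin bit mechanism concrete, here is a minimal Python sketch of how a passive observer could estimate RTT from spin bit flips. The packet feed and timings are invented for illustration; a real middlebox would parse the bit out of the first byte of QUIC short-header packets:

```python
class SpinBitObserver:
    """Hypothetical passive RTT estimator, as a middlebox might implement it.

    Receives (timestamp_ms, spin_bit) pairs observed in one direction of a
    connection; the spin bit flips roughly once per round trip, so the time
    between two flips approximates the end-to-end RTT.
    """

    def __init__(self):
        self.last_spin = None
        self.last_flip_time = None
        self.rtt_samples = []

    def on_packet(self, timestamp_ms, spin_bit):
        if self.last_spin is None:
            # First packet: just record the starting state.
            self.last_spin = spin_bit
            self.last_flip_time = timestamp_ms
            return
        if spin_bit != self.last_spin:
            # A flip: the elapsed time since the previous flip is one RTT sample.
            self.rtt_samples.append(timestamp_ms - self.last_flip_time)
            self.last_spin = spin_bit
            self.last_flip_time = timestamp_ms

# Simulated packet timestamps (ms) from a connection with a ~50 ms RTT:
obs = SpinBitObserver()
for t_ms in [0, 10, 50, 60, 100, 150]:
    spin = (t_ms // 50) % 2  # the bit flips every 50 ms in this simulation
    obs.on_packet(t_ms, spin)
print(obs.rtt_samples)  # → [50, 50, 50]
```

Note that the observer only ever sees flips, not packet semantics, which is exactly why a single bit gives RTT estimates but nothing like the loss or reordering visibility operators had with TCP.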
UDP blocking and alt-svc with fallbacks
I don't know about you, but if I were a network operator (or nefarious dictatorship) doing any type of TCP optimization or using special security measures, I would be sorely tempted to just block QUIC wholesale. It wouldn't even be that difficult for web-browsing: nothing else runs on UDP:443 (whereas blocking TCP:443 would lead to much mayhem). While deploying QUIC, Google actually looked at this, to know how many networks already blocked UDP/QUIC. They (and other researchers) found that 3-5% of networks currently do not allow QUIC to pass. That seems fine, but these figures (probably) don't include a lot of corporate networks, and the real question is: will it remain that way? If QUIC gets bigger, will (some) networks not start actively blocking it, at least until they update their firewalls and other tools to better deal with it? "Fun" anecdote: while testing our own QUIC implementation's public server (based in Belgium) with the excellent quic-tracker conformance testing tool, most of the tests suddenly started failing when the tool moved to a server in Canada. Further testing confirmed that some IP-paths are actively blocking QUIC traffic, causing the test failures.
The thing is that blocking QUIC (e.g., in a company's firewall) wouldn't even break anything for web-browsing end users; sites will still load. As browsers (and servers!) have to deal with blocked UDP anyway, they will always include a TCP-based fallback (in practice, Chrome currently even races TCP and QUIC connections instead of waiting for a QUIC timeout). Servers will use the alt-svc mechanism to signal QUIC support, but browsers can only trust that to a certain extent, because a change of network might suddenly mean QUIC becomes blocked. QUIC-blocking company network administrators won't get angry phone calls from their users and will still have good control over their setup: what's not to like? They also won't need to run and maintain a separate QUIC/H3 stack next to their existing HTTP(/2) setup.
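The connection-racing fallback described above can be sketched in a few lines of asyncio. This is an invented stand-in, not Chrome's actual logic: the two `connect_*` coroutines simulate handshakes with fixed delays, and the client simply keeps whichever transport finishes first:

```python
import asyncio

async def connect_quic(host):
    # Stand-in for a real QUIC handshake; on a UDP-blocking network this
    # would stall until a timeout instead of completing quickly.
    await asyncio.sleep(0.30)
    return ("quic", host)

async def connect_tcp(host):
    # Stand-in for a TCP+TLS handshake.
    await asyncio.sleep(0.05)
    return ("tcp", host)

async def race_connect(host):
    """Start both handshakes in parallel and use whichever completes first."""
    tasks = [asyncio.create_task(connect_quic(host)),
             asyncio.create_task(connect_tcp(host))]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # abandon the slower transport
    return done.pop().result()

transport, host = asyncio.run(race_connect("example.org"))
print(transport)  # TCP wins here because the simulated QUIC path is slow
```

The upside of racing is that a blocked or slow UDP path never delays page load; the downside is extra connection attempts on every navigation.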
Finally, one might ask: why would a big player such as Google then want to deploy QUIC on their network if they lose flexibility? In my assessment, Google (and other large players) are mostly in full control of (most of) their network, from servers to links to edge points-of-presence, and have contracts in place with other network operators. They know more or less exactly what's going on and can mitigate network problems by tweaking load balancers, routes or servers themselves. They can also do other shenanigans, such as encode information in one of the few non-encrypted fields in QUIC: the connection-ID. This field was explicitly allowed to be up to 18 bytes long to allow encoding (load-balancing) information inside. They could also conceivably add additional headers to their packets, stripping them off as soon as traffic leaves the corporate network. As such, the big players lose a bit, but not much. The smaller players, or operators of either only servers or only the intermediate networks, stand to lose more.
2. CPU issues
As of yet, QUIC is fully implemented in user-space (as opposed to TCP, which typically lives in kernel-space). This allows fast and easy experimentation, as users don't need to upgrade their kernels with each version, but also introduces severe performance overheads (mainly due to user-to-kernel-space communication) and potential security issues.
In their seminal paper, Google mentions their server-side QUIC implementation uses about 2x as much CPU as the equivalent TCP+TLS stack. This is already after some optimizations, but not full kernel bypass (e.g., with DPDK or netmap). Let me put that another way: they would need roughly twice the server hardware to serve the same amount of traffic! They also mention diminished performance on mobile devices, but don't give numbers. Luckily, another paper describes similar mobile tests and finds that QUIC is mostly still faster than TCP but "QUIC's advantages diminish across the board", see Figure Z. This is mainly because QUIC's congestion control is "application limited" 58% of the time (vs 7% on the desktop), meaning the CPU simply cannot cope with the large amount of incoming packets.
IoT and TypeScript
One of the oft-touted use cases for QUIC is in Internet-of-Things (IoT) devices, as they often need intermittent (cellular) network access and low-latency connection setup; 0-RTT and better loss resilience are quite interesting in those cases. However, those devices often also have quite slow CPUs. There are many issues where QUIC's designers mention the IoT use case and how a certain decision might impact it, though as far as I know no stack has been tested on such hardware yet. Similarly, many issues mention taking into account a hardware QUIC implementation, but at my experience level it's unclear whether this is wishful thinking and handwaving or a realistic prospect.
I am a co-author of a NodeJS QUIC implementation in TypeScript, called Quicker. This seems weird given the above, and indeed, most other stacks are in C/C++, Rust or Go. We chose TypeScript specifically to help assess the overhead and feasibility of QUIC in a scripting language and, while it's still very early, it's not looking too good for now, see Figure A.
3. 0-RTT usefulness in practice
Another major QUIC marketing feature (though it's actually from TLS 1.3) is 0-RTT connection setup: your initial (HTTP) request can be bundled with the first packet of the handshake and you can get data back with the first reply, superfast!
However, there is a "but" immediately: this only works with a server that we've previously connected to with a normal, 1-RTT setup. 0-RTT data in the second connection is encrypted with something called a "pre-shared secret" (contained in a "new session ticket"), which you obtain from the first connection. The server also needs to know this secret, so you can only 0-RTT connect to that same server, not, say, a server in the same cluster (unless you start sharing secrets or tickets etc.). This means, again, that load balancers should be smart in routing requests to the correct servers. In their original QUIC deployment, Google got this working in 87% (desktop) - 67% (mobile) of resumed connections, which is quite impressive, especially since they also required users to keep their original IP addresses.
There are other downsides as well: 0-RTT data can suffer from "replay attacks", where the attacker copies the initial packet and sends it again (several times). Due to integrity protection, the contents cannot be changed, but depending on what the application-level request carries, this can lead to unwanted behaviour if the request is processed multiple times (e.g., POST bank.com?addToAccount=1000). Thus, only what they call "idempotent" data can be sent in 0-RTT (meaning it should not permanently change state, e.g., HTTP REST GET but not PUT). Depending on the application, this can severely limit the usefulness of 0-RTT (e.g., a naive IoT sensor using 0-RTT to POST sensor data could, conceptually, be a bad idea).
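The replay constraint on 0-RTT essentially forces a policy decision into the server: which requests are safe to process before the handshake completes? A minimal sketch of such a gate, with an invented `accept_early_data` helper (real stacks expose this differently), could look like:

```python
# Methods that should not permanently change state; the article's usage of
# "idempotent" (HTTP GET yes, PUT/POST no). An attacker can replay a copied
# 0-RTT packet, so anything state-changing must wait for the full handshake.
IDEMPOTENT_METHODS = {"GET", "HEAD", "OPTIONS"}

def accept_early_data(method, handshake_complete):
    """Decide whether a request may be processed right now.

    Hypothetical server-side policy: 1-RTT data is always fine (replay
    protection is in place by then); 0-RTT early data is only processed
    if replaying it would be harmless.
    """
    if handshake_complete:
        return True
    return method in IDEMPOTENT_METHODS

assert accept_early_data("GET", handshake_complete=False)       # safe in 0-RTT
assert not accept_early_data("POST", handshake_complete=False)  # e.g., addToAccount
assert accept_early_data("POST", handshake_complete=True)       # fine after handshake
```

A server could also buffer the rejected request and process it once the handshake finishes, trading the 0-RTT latency win for safety on that one request.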
Lastly, there is the problem of IP address spoofing and the ensuing UDP amplification attacks. In this case, the attacker pretends to be the victim at IP a.b.c.d and sends a (small) UDP packet to the server. If the server replies with a (much) larger UDP packet to a.b.c.d, the attacker needs much less bandwidth to generate a large attack on the victim, see Figure B. To prevent this, QUIC adds two mitigations: the client's first packet needs to be at least 1200 bytes (max practical segment size is about 1460) and the server MUST NOT send more than three times that amount without receiving a packet from the client in response (thus "validating the path", proving the client is not the victim of an attack). So just 3600-4380 bytes, which also includes the TLS handshake and QUIC overhead, leaves little space for an (HTTP) response (if any). Will you send the HTML in those few remaining bytes?
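The two mitigations above amount to simple bookkeeping per unvalidated path. Here is a sketch of the accounting a server has to do; the class and method names are invented for illustration, but the 1200-byte minimum and 3x budget follow the draft:

```python
MIN_INITIAL_SIZE = 1200     # client's first packet must be padded to this
AMPLIFICATION_FACTOR = 3    # send budget per unvalidated path

class ServerPath:
    """Hypothetical per-path state on the server before address validation."""

    def __init__(self):
        self.bytes_received = 0
        self.bytes_sent = 0
        self.validated = False  # set once the client echoes back proof

    def on_initial_packet(self, size):
        if size < MIN_INITIAL_SIZE:
            return False  # undersized Initial: drop, don't become an amplifier
        self.bytes_received += size
        return True

    def may_send(self, size):
        if self.validated:
            return True
        # Unvalidated path: stay within 3x the bytes received so far.
        return self.bytes_sent + size <= AMPLIFICATION_FACTOR * self.bytes_received

    def on_send(self, size):
        self.bytes_sent += size

path = ServerPath()
assert not path.on_initial_packet(100)  # a spoofed 100-byte probe is rejected
assert path.on_initial_packet(1200)
assert path.may_send(3600)              # within the 3 * 1200 byte budget
path.on_send(3600)
assert not path.may_send(1)             # budget exhausted until the path validates
```

Every additional packet from the (real) client both validates the path and grows the budget, which is why the limit mostly bites on the very first flight.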
The final nail in QUIC's coffin is that TCP + TLS 1.3 (+ HTTP/2) can also use 0-RTT with the TCP "Fast Open" option (albeit with the same downsides). So picking QUIC just for this feature is (almost) a non-argument.
4. QUIC v1.2.3.4.5.69-Facebook
As opposed to TCP, QUIC integrates a full version negotiation setup, mainly so it can keep on evolving easily without breaking existing deployments. The client uses its most preferred supported version for its first handshake packet. If the server does not support that version, it sends back a Version Negotiation packet, listing supported versions. The client picks one of those (if possible) and retries the connection. This is needed because the binary encoding of the packet can change between versions.
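The negotiation flow just described can be condensed into a small sketch. The `negotiate` function and the version strings are invented; the point is to show where the extra round trip comes from:

```python
def negotiate(client_versions, server_versions):
    """Simplified QUIC version negotiation.

    The client leads with its most preferred version. If the server supports
    it, the handshake proceeds with no extra cost; otherwise the server sends
    a Version Negotiation packet (costing 1 RTT) and the client retries with
    a mutually supported version, if any. Returns (chosen_version, extra_rtts).
    """
    offered = client_versions[0]          # most preferred version first
    if offered in server_versions:
        return offered, 0
    # Server answers with its supported list; client picks its next match.
    for version in client_versions[1:]:
        if version in server_versions:
            return version, 1
    return None, 1                        # no common version: connection fails

assert negotiate(["v2", "v1"], ["v2", "v1"]) == ("v2", 0)       # happy path
assert negotiate(["v-custom", "v1"], ["v1"]) == ("v1", 1)       # 1 RTT penalty
assert negotiate(["v-custom"], ["v1"]) == (None, 1)             # no overlap
```

The second case is the one to watch: every non-standard "custom" version a client prefers turns into a guaranteed extra round trip against servers that don't speak it.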
Every RTT is one too many
As follows from the above, each version negotiation takes 1 extra RTT. This wouldn't be a problem if we had a limited set of versions, but the idea seems to be that there won't be just, for example, 1 official version per year, but a slew of different versions. One of the proposals was (is?) even to use different versions to indicate support for a single feature (the previously mentioned spin bit). Another goal is to have people use a new version when they start experimenting with different, non-standardized features. This all can (will?) lead to a wild-west situation, where every party starts running their own slightly different version of QUIC, which in turn will increase the number of instances in which version negotiation (and the 1 RTT overhead) occurs. Taking this further, we can imagine a dystopia where certain parties refuse to move to new standardized versions, since they consider their own custom versions superior. Finally, there is the case of drop-and-forget scenarios, for example in the Internet-of-Things use case, where updates to software might be few and far between.
A partial solution could potentially be found in the transport parameters. These values are exchanged as part of the handshake and could be used to enable/disable features. For example, there is already a parameter to toggle connection migration support. However, it's not yet clear whether implementers will lean towards versioning or towards adding transport parameters in practice (though I see more of the former).
It may seem strange to worry about an occasional 1-RTT version negotiation cost, but for a protocol that markets 0-RTT connection setup, it is rather contradictory. It is not inconceivable that clients/browsers will choose to always attempt the first connection at the lowest supported QUIC version to minimize the risk of the 1-RTT overhead.
5. Fairness in Congestion Control
The fact that QUIC is end-to-end encrypted, provides versioning and is implemented in user space provides a never-before seen amount of flexibility. This really shines when contemplating using different congestion control algorithms (CCAs). Up until now, CCAs were implemented in the kernel. You could conceivably switch which one you used, but only for your entire server at the same time. As such, most CCAs are quite general-purpose, as they need to deal with any type of incoming connection. With QUIC, you could potentially switch CCA on a per-connection basis (or do CC across connections!) or at least more easily experiment with different (new) CCAs. One of the things I want to look at is using the NetInfo API to get the type of incoming connection, and then change the CCA parameters based on that (e.g., if you're on a gigabit cable, my first flight will be 5MB instead of 14KB, because I know you can take it).
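To illustrate the kind of per-connection tuning user-space QUIC enables, here is a sketch of the NetInfo-based idea from the text. The network labels, window sizes and `initial_cwnd` helper are all assumptions for illustration; only the 14KB default (roughly the common 10-packet initial window) and the deliberately aggressive 5MB gigabit value come from the text:

```python
# Hypothetical per-connection initial congestion window selection, keyed on
# a connection-type hint such as one the NetInfo API could provide.
INITIAL_WINDOWS = {
    "cellular-2g": 14 * 1024,        # stay conservative on constrained links
    "cellular-4g": 64 * 1024,
    "wifi":        128 * 1024,
    "gigabit":     5 * 1024 * 1024,  # "I know you can take it"
}

DEFAULT_WINDOW = 14 * 1024           # fall back to the safe default

def initial_cwnd(network_hint):
    """Pick the first-flight size for this one connection."""
    return INITIAL_WINDOWS.get(network_hint, DEFAULT_WINDOW)

assert initial_cwnd("gigabit") == 5 * 1024 * 1024
assert initial_cwnd("unknown-network") == 14 * 1024
```

With kernel TCP, a table like this would apply server-wide at best; with user-space QUIC it can be evaluated per connection, which is exactly what makes both the experimentation and the fairness concerns below possible.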
The previous example clearly highlights the potential dangers: if anybody can just decide what to do and tweak their implementations (without even having to recompile the kernel: madness!), this opens up many avenues for abuse. After all, an important part of congestion control is making sure each connection gets a more or less equal share of the bandwidth, a principle called fairness. If some QUIC servers start deploying a much more aggressive CCA that grabs more than its equal share of bandwidth, this will slow down other, non-QUIC connections, as well as other QUIC connections that use a different CCA.
Nonsense, you say! Nobody would do that, the web is a place of gentlepeople! Well... Google's version of QUIC supports two congestion control algorithms: TCP-based CUBIC and BBR. There is some conflicting information, but at least some sources indicate their CCA implementations are severely unfair to "normal" TCP. One paper, for example, found that QUIC+CUBIC used twice the bandwidth of 4 normal TCP+CUBIC flows combined. Another blogpost shows that TCP+BBR could scoop up two-thirds of the available bandwidth, see Figure C. This is not to say that Google actively tries to slow down other (competing) flows, but it shows rather well the risks of letting people easily choose and tweak their own CCAs. Worst case, this can lead to an "arms race" where you have to catch up and deploy ever more aggressive algorithms yourself, or see your traffic drowned in a sea of QUIC packets. Yet another potential reason for network operators to block or severely hamper QUIC traffic.
Another option is of course that a (small) implementation error causes your CCA to perform suboptimally, slowing down your own traffic. Seeing as all these things have to be re-implemented from scratch, I guarantee these kinds of bugs will pop up. Since congestion control can be very tricky to debug, it might be a while before you notice. For example, when working on their original QUIC implementation, Google uncovered an old TCP CUBIC bug and saw major improvements for both TCP and QUIC after fixing it.
6. Too soon and too late
QUIC has been around for quite a long time: starting as a Google experiment in 2012 (gQUIC), it was passed on to the IETF for standardization (iQUIC) in 2015 after a decent live deployment at scale, proving its potential. However, even after 6 years of design and implementation, QUIC is far from (completely) ready. The IETF deadline for v1 had already been extended to November 2018 and has now been moved again to July 2019. While most large features have been locked down, even now changes are being made that lead to relatively major implementation iterations. There are over 15 independent implementations, but only a handful implement all advanced features at the transport layer. Even fewer (two at the moment) implement a working HTTP/3 mapping. Since there are major low-level differences between gQUIC and iQUIC, it is as of yet unclear if results from the former will hold true in the latter. This means the theoretical design is maybe almost finished, but implementations remain relatively unproven (though Facebook claims to already be testing QUIC+HTTP/3 for some internal traffic). There is also not a single (tested) browser-based implementation yet, though Apple, Microsoft, Google and Mozilla are working on IETF QUIC implementations and we ourselves have started a POC based on Chromium.
Too (much too) soon
This is problematic because interest in QUIC is rising, especially after the much talked-about name change from HTTP-over-QUIC to HTTP/3. People will want to try it out as soon as possible, potentially using buggy and incomplete implementations, in turn leading to sub-par performance, incomplete security and unexpected outages. People will then want to debug these issues, and find that there are barely any advanced tools or frameworks that can help with that. Most existing tools are tuned for TCP or don't even look at the transport layer, and QUIC's layer-spanning nature will make debugging cross-layer (e.g., combining 0-RTT with H3 server push) and complex (e.g., multipath, forward error correction, new congestion control) issues difficult. This is, in my opinion, an extensive issue; so extensive that I've written a full paper on it, which you can read here. In it, I advocate for a common logging format for QUIC which allows creating a set of reusable debugging and visualization tools, see Figure D.
As such, there is a risk that QUIC and its implementations will not be ready (enough) by the time people want to start using it, meaning the "Trough of Disillusionment" may come too early and broad deployment will be delayed years. In my opinion, this can also be seen in how CDNs are tackling QUIC: Akamai, for example, decided not to wait for iQUIC and instead has been testing and deploying gQUIC for a while. LiteSpeed burns the candle at both ends, supporting gQUIC and pioneering iQUIC. On the other hand though, Fastly and Cloudflare are betting everything on just iQUIC. Make of it what you will.
Too (little too) late
While QUIC v1 might be too early, v2 might come too late. Various advanced features (some of which were in gQUIC), such as forward error correction, multipath and (partial) unreliability, are intentionally kept out of v1 to lower the overall complexity. Similarly, major updates to HTTP/3, such as to how cookies work, are left out. In my opinion, H3 is a very conservative mapping of HTTP/2 on top of QUIC, with only minor changes. While there are good reasons for this, it means many opportunities for which we might want to use QUIC have to be postponed even longer.
The concept of separating QUIC and HTTP/3 is so that QUIC can be a general-purpose transport protocol, able to carry other application layer data. However, I always struggle to come up with concrete examples for this... WebRTC is often mentioned, and there was a concrete DNS-over-QUIC proposal, but are there any other projects ongoing? I wonder if there would be more happening in this space if some of the advanced features would be in v1. The fact that the DNS proposal was postponed to v2 surely seems to indicate so.
I think it will be difficult to sell QUIC to laymen without these types of new features. 0-RTT sounds nice, but is possibly not hugely impactful, and could be done over TCP. Less Head-of-Line blocking only helps if you have a lot of packet loss. Added security and privacy sound nice, but offer users little tangible value beyond the principle itself. Google touts 3-8% faster searches: is that enough to justify the extra server and setup costs? Does QUIC v1 pack enough of a punch?
If you've made it through all that: welcome to the end! Sit, have a drink!
I imagine there will be plenty of different feelings across readers at this point (besides exhaustion and dehydration) and that some QUIC collaborators might be fuming. However, keep in mind what I stated in the beginning: this is me trying to take a "Devil's Advocate" viewpoint, trying to weed out logical errors in arguments pro and con QUIC. Most (all?) of these issues are known to the people who are standardizing QUIC and all their decisions are made after (very) exhaustive discussion and argumentation. I probably even have some errors and false information in my text somewhere, as I'm not an expert on all subtopics (if so, please let me know!). That is exactly why the working groups are built up out of a selection of people from different backgrounds and companies: to try and take as many aspects into consideration as possible. Trade-offs are made, but always for good reasons.
That being said, I still think QUIC might fail. I don't think the chance is high, but it exists. Conversely, I also don't think there is a big chance it will succeed from the start and immediately gain a huge piece of the pie with a broader audience outside of the bigger companies. I think the chance is much higher that it fails to find a large uptake at the start, and that it instead has to gain a broad deployment share more slowly, over a few years. I think this will be slower than what we've seen with HTTP/2, but (hopefully) faster than IPv6.
I personally still believe strongly in QUIC (I should, I'm betting my PhD on it...). It's the first major proposed change on the transport layer that might actually work in practice (the arguments in this post are several times worse and more extensive for many previous options). I feel very grateful to have the chance to witness QUIC's standardization and deployment up close. As it is made to evolve, I think it has all the potential to survive a slower uptake, and remain relevant for decades. The bigger companies will deploy it, debug it, improve it, open source it, and in 5 years time more stuff will be running on QUIC than on TCP.
One potential article to include in your resources list at the start of the document is @Errata-Security's Some notes about HTTP/3. A better link for Cloudflare might be https://cloudflare-quic.com/ which points to the write-up you currently have plus additional resources.
Very interesting post! Enjoyed reading.
But since you asked for my inner pedant, here goes:
Give him his real name instead of his Twitter handle?
Also, is the TCP stack so optimized that a doubling is inconsequential except when dealing with huge volumes (like Google or a CDN)?
TLS adds a lot of computational complexity but it’s not really noticeable to most web servers and clients, so will this be?
Genuine questions as don’t know the answer here. Or should this be in the counter arguments section?
This should be “use case”.
On a similar note, I'm unsure about counterarguments versus counter arguments. Google says both are acceptable (and also counter-arguments) but I prefer counter arguments (as does iOS autocorrect, btw!).
Source? I am of the opinion that many IoT devices will likely stay with the simpler HTTP/1.1, but that’s just an opinion with no backing!
Is this section needed? It’s very Interesting but not QUIC specific. If looking to cut down the size of this post you could skip this completely. If happy to leave at this length then leave in as think it is interesting. It’s also kind of needed to set context for the next 1-RTT section.
Shouldn’t there be an additional one, that it’s in TLSv1.3 over TCP too? Or did I miss some QUIC specific stuff here?
Replace comma with dash?
Is the sub supposed to be in bold?
:-) Love this!
Bit strong. What about “users probably won’t notice most bugs”?
An excellent text with lots of good points!
A casual note that I also mentioned on twitter:
In Firefox, we simply gave up getting TFO enabled. It just fails too often and works too rarely and often causes delays (when some middlebox just throws away the SYN and similar). So "can just use TFO" is quite a simplification. I believe QUIC's early data has a much higher chance of actually working...
Some implementations of QUIC (such as gQUIC) support BBR from the sender's side which helps in mitigating bufferbloat.
As of yet, QUIC is fully implemented in user-space (as opposed to TCP, which typically lives in kernel-space). This allows fast and easy experimentation, as users don't need to upgrade their kernels with each version, but also introduces severe performance overheads (mainly due to user-to-kernel-space communication).
Aside from CPU issues, there are some security concerns about the implementations in user-space. One of these is that any server process relying on QUIC has the same user-id as the QUIC stack. OpenBSD developers such as Reyk Floeter have expressed concerns about the lack of privilege separation in the design of QUIC implementations. [See: https://twitter.com/reykfloeter/status/1064794652065361920].
Trivial point but, in erudite publications, it's standard practice to provide the expanded definition of abbreviations and acronyms on first occurrence. I, for one, couldn't recall what QUIC stands for.
Indeed, you've done it for round-trip-time (RTT) but not QUIC; the subject of your piece.
Not as easy as it sounds!
I find Figure A rather confusing.
DISCLAIMER: These are all thoughts and opinions from a QUIC novice that don't have anything in the way of evidence to support them. I'm also developing the POC HTTP/3 implementation in Chromium as my masters thesis together with Robin as my supervisor and mentor.
End-to-end encrypted UDP you say?
You say the only way to impact congestion control/send rate will be to drop packets but a little further you say:
Couldn't network operators also tweak ECN to impact congestion control/send rate? I'm under the impression that a middlebox could be tweaked in when it will set the ECN flag so that should be under the control of network operator.
I'm not entirely sure why network admins would need to run an HTTP/2 stack or a QUIC/H3 stack. Are those needed for the sites hosted by the network admins themselves? Everything HTTP/2 and QUIC is encrypted, so I can't really imagine what such a stack would be doing in the context of network administration (unless MITM is implied?).
I might be wrong but I feel this paragraph doesn't really belong under the
This feels like an incredibly general statement to make when the explanation provided by the paper for the mobile results is the following:
This is a very short explanation for an issue that could probably have an entire paper dedicated to it. I'd reduce the statement to saying that when using Google's current gQUIC implementation in userspace on mobile, the CPU can't handle the amount of incoming packets (are we even sure that's the cause, since Google mentioned encryption as another major source of overhead?).
I think this is a great point to make. Another possible issue with IoT devices is that, should QUIC be used on them, I suspect many of those devices won't ever support another QUIC version than the one they're deployed with, which raises more questions about the idea that QUIC doesn't have to get everything right now since it can just be fixed in a future version.
0-RTT usefulness in practice
You mention that smaller players might not have the luxury to deploy other measures against UDP amplification. One question I'm asking myself here is if most companies would even care that their server might be used to DDOS someone across the globe as long as it doesn't affect their server or their users. I could definitely see companies decide to ignore the amplification limit if the benefit gained is substantial enough (not caring about the fact their servers might be used in amplification attacks).
I went through the paper and didn't really find a comparison of HTTP/2 with a lower initial congestion window against HTTP/2 with a higher initial congestion window. They used a higher initial congestion window than normal but as far as I can see they didn't compare HTTP/2 against HTTP/2 with different initial congestion window sizes.
I'd add here that the extra RTT introduced by using a version unsupported by the server might even result in browsers sticking to an older version to avoid the 1-RTT overhead. Another option is that they continue as they've been doing for QUIC and TCP and start racing connections for multiple QUIC versions to the server to avoid the 1-RTT overhead (I'm not 100% sure if this is possible in QUIC). What's worse, as far as I can deduce from the spec, a server doesn't report its supported versions if the client picks a version the server does support.
Also, should the versions supported by browsers ever diverge, you could end up in situations where a website is "only supported on Chrome" because the server only works with the QUIC version supported by Chrome. Should Chrome and Firefox decide not to implement the spin bit (as mentioned) and Edge decide to implement it we'll already have an example of divergence between browsers (although to my knowledge it wouldn't be a difference in version yet).
All these situations are of course hypothetical and involve humans and companies which are unpredictable, so I have no idea how likely these scenarios are to happen in practice.
Fairness in Congestion Control
I'd consider the train of thought if we don't consider fairness a responsibility of the congestion control algorithm anymore. Why should the congestion control algorithm used by another connection be allowed to negatively impact my own congestion control? Instead, what would happen if we made middleboxes and the kernel responsible for fairly dividing the available bandwidth? If QUIC is abusing congestion control and limiting TCP connections, shouldn't the kernel or a middlebox stop it from doing so? This is more of a thought I had when reading this rather than an actual suggestion.
This point is very true. In Google's QUIC paper, they mention fixing a bug in CUBIC that drastically increased QUIC's performance and even TCP's performance after it was fixed in the Linux kernel as well. That might be an interesting example to mention here.
Too soon and too late
Do you have a source for the Google IETF QUIC implementation work? I know there's some references to IETF QUIC in the Chromium codebase but I have zero insight on their progress. I assume mozquic is the Mozilla implementation and Apple and Microsoft's are closed source?
I feel this is more because QUIC hasn't been standardized, let alone deployed anywhere yet. iQUIC still is relatively untested. I'd prefer for its design to be validated to some extent before we start designing every other protocol on top of it. As for examples, DNS over QUIC might be a good candidate since you'd get the privacy properties of DNS over TLS along with 0-RTT performance. In fact, 0-RTT might be a very good match with DNS since DNS responses are usually very small.
This sounds way too harsh in my opinion. @bagder already mentioned the difficulties with deploying TFO in Firefox and more options might be available if we start playing with the amplification limit. You even say yourself you're looking forward to investigate what's possible in the small amount of data allowed by 0-RTT so it seems out of character to dismiss it here so harshly.
If you're saying this I'd definitely include a reference saying that packet loss doesn't occur for the majority of connections. It's very easy for a casual reader to make the argument that lots of computing is moving to mobile where packet loss can be very common.
The post was very interesting to read. The subject and approach is also very appropriate as I think we're seeing a bit too much of "QUIC is going to solve everything" going around so a post focusing on QUIC's problems is sorely needed.
There was a proposal about having DNS over QUIC: https://datatracker.ietf.org/doc/draft-huitema-quic-dnsoquic/. However, it's unlikely to happen with QUIC v1.
The post has been sent for publication. Thanks to everyone here and on twitter for the additional comments; most of them have made it into the text.
I'm going to leave this issue open for possible additional comments on the published version.