quicwg / base-drafts Public
HTTP/QUIC without Alt-Svc? #253
Comments
|
I'm thinking that the answer is "no" here as well. Developing a way to provide supplementary annotations on links would be less reliable, but ultimately more likely to succeed. |
|
I agree with the opinions about new scheme(s). However, it does seem a shame to have this implicit dependency on Alt-Svc delivered by HTTP. By my understanding of Alt-Svc, the origin would be required to offer resources via HTTP(/TLS)/TCP in perpetuity. That seems a bit unfair to me. I recall some talk about TCP fallback (e.g. at the QUIC BoF at IETF 96). No text currently in WG docs seems to require this, though; the closest is in the HTTP/QUIC mapping. If TCP fallback is not actually required, and a solution can be found to directly open QUIC connections, then there is a route to deprecating HTTP(/TLS)/TCP. Similarly, constrained devices that want to operate without HTTP(/TLS)/TCP could do so, bearing the risk of an %N connection failure rate. Forgive this stupid question: is there text somewhere in an RFC that requires a client accessing resources with a Forgive this other stupid question: what about other application-layer protocols over QUIC? Do they also have an implicit dependency on Alt-Svc, or is that totally inappropriate? |
|
Other protocols have their own bootstrapping problems. Too much depends on context to be sure. For instance, migrating something like FTP might be tricky and something akin to Alt-Svc might be the most practical approach. On the other hand, migrating RTP probably won't have problems in this area because it uses a signaling protocol for setup (RTP would have a host of other problems, of course). |
|
On Tue, Jan 31, 2017 at 10:06 PM, Mike Bishop ***@***.***> wrote:
Should we mint new scheme(s) that allows direct reference to a resource
served exclusively over HTTP/QUIC?
I think the answer here is still no for scheme - we don't want 2 different
urls for resources that are supposed to be interchangeable (and then the
caching rules are impacted, etc..).
But something like an authenticated SRV is an obvious path to go down
eventually.
|
|
We already have different URLs for things that might/might not be interchangeable. When you use Alt-Svc between an http:// origin and an https:// endpoint, you're declaring that they're either interchangeable or you can properly process the distinction. I can envision several scenarios where either server or client won't want to carry a full HTTP/TLS/TCP stack simply for bootstrapping, when they already know both peers will support HTTP/QUIC. Maybe authenticated SRV is the path forward, but it seems like the simplest would be something like:
Note that I don't expect this to be used in browser-land anytime soon, if ever. httpq:// would be inaccessible to legacy browsers, and you'd be cutting off a substantial portion of the web from following the link. However, I think for non-browser scenarios and for testing, there should be a way to explicitly describe a QUIC endpoint. |
|
Oh, and @LPardue: RFC 2818 says in Section 2.3:
RFC 7230 updates this by saying:
|
|
Also in RFC 7230:
|
|
@MikeBishop thanks for these, really interesting food for thought |
|
On Wed, Feb 1, 2017 at 7:11 PM, Mike Bishop ***@***.***> wrote:
We already have different URLs for things that might/might not be
interchangeable. When you use Alt-Svc between an http:// origin and an
https:// endpoint, you're declaring that they're either interchangeable
or you can properly process the distinction.
Alt-Svc does not contemplate scheme or origins - it deals with routing and
protocol. Alt-Svc cannot change something from http:// to https:// (or vice
versa) nor does it imply anything about whether the content of those urls
differs if only the scheme is different.
I would say HSTS does come closer to what you're describing - but not
quite. It does nicely illustrate the problem of determining equivalence
(sometimes they are, sometimes they aren't) and for me is a pretty good
reason to steer clear. Things like the white and black lists that HTTPS
Everywhere needs to deal with are another example.
I think quic would be much better off if it could just stick to the https://
train.
what if we added some 'prior knowledge' language here ala h2?
I can envision several scenarios where either server or client won't want
to carry a full HTTP/TLS/TCP stack simply for bootstrapping, when they
already know both peers will support HTTP/QUIC. Maybe authenticated SRV is
the path forward, but it seems like the simplest would be something like:
well, if they don't have a tcp stack, then they don't have to worry about
fallback.. so why not just try quic? I guess you don't know what versions
to try, but they are unlikely to be encoded in the scheme either..
|
|
On 2 Feb 2017 8:02 a.m., "Patrick McManus" wrote:
well, if they don't have a tcp stack, then they don't have to worry about fallback.. so why not just try quic? I guess you don't know what versions to try, but they are unlikely to be encoded in the scheme either..
But is that kind of behaviour prohibited by the sections of RFC 7230 that Mike quoted?
|
|
I'm thinking about this in terms of clients that don't have tcp support. If
we're really talking about origins that don't have tcp support instead,
then I think a new scheme makes more sense.
On Thu, Feb 2, 2017 at 10:47 AM, Lucas Pardue ***@***.***> wrote:
But is that kind of behaviour prohibited by the sections of RFC 7230 that
Mike quoted?
I think 7230 is defining what http and https schemes mean in terms of
namespaces and default reachability (which goes back to: an origin does
indeed need to be able to publish a tcp version in order to use an https
scheme, but it doesn't require all accesses to happen that way)
This is sort of self evident even ignoring quic, we've already got alt-svc
changing routes and proxies obscuring DNS and addressing, caches which
don't need e2e transport at all, etc. All of these things get data
identified by the same url via mechanisms that are bootstrapped (sometimes)
outside of the default interpretation..
I don't think a client that doesn't speak tcp is doing anything wrong by
just trying quic on an https url. A more conservative reading of 7230
might indicate that QUIC for https:// even via alt-svc was non-compliant
because it wasn't TCP, and I don't think any of us believe that we need
to update 7230 to allow it.
|
|
On Wed, Feb 1, 2017 at 10:19 AM, Mike Bishop ***@***.***> wrote:
Oh, and also in RFC 7230:
Although HTTP is independent of the transport protocol, the "http" scheme
is specific to TCP-based services because the name delegation process
depends on TCP for establishing authority. *An HTTP service based on some
other underlying connection protocol would presumably be identified using a
different URI scheme....*
I read that as requiring a different scheme when the name delegation
process was different. I don't see a different name delegation process as
probable for names served over QUIC (or at least I don't see it as required).
If you dipped your toes into the "special use dns names" discussion, you'll
probably also remember that one of the reasons TOR wanted .onion was so that
the signal that something should be resolved via TOR could be used within
the authority section of an HTTPS URL. That really did have a different
name delegation process for names below the .onion TLD, but that
consideration was ignored in favor of being able to pass "normal" (really
normal-looking) URLs around.
That experience hints to me that our minting a new scheme for this will
just be ignored in the common case, and I don't see a good reason to
generate the potential confusion as a result.
Just my take on it.
|
|
In the non-conservative case, this seems to me somewhat of an implementation choice. For a client that wants to retrieve https://www.example.org/example.txt:
1. Zero knowledge of hqm availability
2. Prior knowledge of hqm availability via Alt-Svc: a client that has received Alt-Svc indicating hqm for an alternative that is still fresh
3. Prior knowledge of hqm availability by other means, i.e. managed networks, whitelists etc.
|
|
@mcmanus, I'd agree that we don't need to update 7230 to allow QUIC. The authoritative endpoint for an https origin is a TCP port, and Alt-Svc allows that authoritative endpoint to delegate to different endpoints -- other hosts, other ports, other protocols. But the authoritative endpoint is always TCP to the port given in the URL. That's why we're able to use the same scheme and avoid all the branching of stuff underneath it; the origin hasn't changed. But when there's a service in which the authoritative endpoint is over QUIC -- a device-to-device REST API, or a device's configuration page -- then that requires a different way to express it. I'd be fine with something like https://www.example.com:q443/, except that RFC 2396 restricts the port number to digits only. (It's a little odd, in retrospect, that 2396 describes things in terms of IP and port, with no notion of ports being specific to their transport.) @hardie is right that we're not defining a different name-delegation process here. I'm leery of saying that clients should (or SHOULD, or even MAY) guess that an origin might be available elsewhere without a way to know that. That's a proposition we rejected when discussing how to find TLS-protected equivalents to http:// origins. Sure, we could put a checkbox in the UI or a parameter in the config file that says "'https' doesn't mean what you think it means," then couple that with "prior knowledge" language in the spec. But it just seems cleaner to designate a scheme that's semantically equivalent to 'https' except that the ports in the URI are relative to a different transport protocol. |
|
One example of such an "authoritative endpoint is over QUIC" is a QUIC proxy. That is to say, a semantically "HTTP proxy' which one speaks to via QUIC. Chrome today, for example, can be configured to speak QUIC to a proxy by using the "scheme" "quic" in the proxy.pac function. |
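For illustration, a minimal proxy.pac sketch along those lines (the proxy hostname is hypothetical; the "QUIC" proxy type is the Chrome-specific extension described above, not part of the Netscape PAC format):

```javascript
// Sketch of a PAC script using Chrome's non-standard "QUIC" proxy type.
// Chrome tries the entries left to right: QUIC to the proxy first, then
// the same proxy over TLS/TCP ("HTTPS"), then a direct connection.
function FindProxyForURL(url, host) {
  return "QUIC proxy.example.net:443; HTTPS proxy.example.net:443; DIRECT";
}
```

Other PAC consumers may not understand the QUIC token, which is part of why it is not user visible outside the pac file.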
|
Why don't you use ALTSVC at the proxy? It's not like proxy setup is a commonplace action. A new scheme isn't something one does lightly. |
|
On Mon, Feb 6, 2017 at 8:12 PM, Martin Thomson ***@***.***> wrote:
Why don't you use ALTSVC at the proxy? It's not like proxy setup is a
commonplace action. A new scheme isn't something one does lightly.
(Perhaps I used scheme in the wrong way?) We have support for "proxy",
"socks", "socks5" proxy schemes, so adding "quic" was quite straightforward
and is not user visible (though it is visible in the pac file, of course).
In any case, it's not clear to me that Alt-Svc applies to proxies. As I
read the spec, Alt-Svc defines a mechanism for an origin to specify a
different server. I don't think of a proxy as an origin, though I guess one
could? I'd be curious to hear more about this! That being said, in the
context of proxies it seems desirable for users to be able to deploy a QUIC
proxy without needing to also deploy an https proxy.
|
|
An ALTSVC frame should work for the proxy if the intent was to move the proxy. The ALTSVC frame is processed hop-by-hop. I don't know how well that use case has been tested, but it should be possible to advertise an alternative for the proxy origin. And as far as the proxy goes, don't you have to deploy a TCP variant for now and into the foreseeable future if you want to have it work? It's just like any other service, I'd imagine, and the h2 server stack isn't that much extra to have. |
|
I thought we had a discussion about Alt-Svc plus proxies which concluded that Alt-Svc is not for finding proxies: quoth mnot: "Yeah. Alt-Svc is for finding an origin, not for finding a proxy -- a proxy might use it, though." So I don't think Alt-Svc applies here. |
|
Is the QUIC proxy in question actually HTTP/QUIC proxy, or a bit more like
a SOCKS proxy that would tunnel other protocols over QUIC?
…On 8 Feb 2017 00:30, "Ryan Hamilton" ***@***.***> wrote:
I thought we had a discussion about Alt-Svc plus proxies which concluded
that Alt-Svc is not for finding proxies:
httpwg/http-extensions#62
<httpwg/http-extensions#62>
quoth mnot:
"Yeah. Alt-Svc is for finding an origin, not for finding a proxy -- a proxy
might use it, though.
This should all be clear based upon reading of RFC7230, but if not we
could add a sentence or two to clarify."
So I don't think Alt-Svc applies here.
|
|
It's an "HTTP proxy" that the client speaks to via QUIC, I guess you could say. This is similar to the "https" proxy scheme that chrome supports when it wants to talk to an "HTTP proxy" over a TLS connection (which may result in an HTTP/2 connection to the proxy as the result of ALPN) |
|
Ah, so I understand better that there's a slight dichotomy here. If a client
is configured to use an HTTPS proxy, and an origin advertises "hq", what
proxy should the client use? I don't think it's fair to assume that a
single proxy application has to support HTTPS and HTTP/QUIC.
If an HTTPS proxy were to offer an alt-svc itself that points to a
standalone HTTP/QUIC-only proxy, then does that upset things when the
client comes to try to access a new origin with an https scheme?
I think this is channeling some of McManus' earlier comments. If this is
repeating discussion of that old proxy thread then apologies, I'll do some
more background reading.
|
|
@LPardue, yeah, that's an old discussion. A client makes a decision to use the proxy first, which results in ignoring Alt-Svc. See RFC 7838. |
|
We've talked about four routes here:
1. Long live TCP. All HTTP sites will be dual-stack, and the authoritative endpoint will be TCP. Clients MUST have a way to find the secure delegation from that TCP endpoint to QUIC, though we might define alternatives to Alt-Svc headers which could be done without TCP. (Alt-Svc in DNS?)
2. Update RFC 3986. RFC 3986 explicitly states that "The type of port designated by the port number (e.g., TCP, UDP, SCTP) is defined by the URI scheme." We could update the URI to contain a protocol designator, whose default is defined by the URI scheme. As it would be omitted in all existing URIs, their interpretation remains unchanged. Then https://www.example.com:q443/ refers to QUIC on UDP 443.
3. Define a new scheme. See pull request.
4. What's it matter? Assume that HTTP/QUIC on port 443 is likely equivalent to HTTP/TCP on port 443 if the cert is valid and call it good.
We explicitly do not consider the same host on different ports equivalent authorities, even if they happen to be listening on both ports with the same cert. Why is TCP 443 vs. UDP 443 any different from TCP 443 vs. TCP 444? (4) seems like a security issue waiting to happen. I'm going to reverse myself and disagree that (3) is undeployable on the web in the near-term. App-to-app handoff on many (most?) platforms now uses custom URI schemes. Apps that encounter unknown URI schemes ask the OS; the OS is able to invoke appropriately-registered apps or tell the user they need to get a capable app. E.g. launching "nonsense://" produces this on Win10: Some cursory testing shows that browsers block navigations to URI schemes that don't have an OS-registered handler. But if you have two browsers, one QUIC-capable and one not, when you click an httpq:// link in the non-QUIC browser the OS will launch the QUIC-capable browser for you and you proceed on your merry way. This seems almost exactly what we'd want to have happen. (2) probably is undeployable, because legacy apps will attempt to parse the URI and declare it invalid. They're semi-used to seeing unknown schemes (xboxliveapp-1297287741://, anyone?), but changes that break the parsers would be seriously painful. |
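To make the parser point concrete, here is a quick check with the WHATWG URL parser (the URLs are hypothetical):

```javascript
// Option (3): an unknown scheme like httpq:// still parses cleanly; it is
// merely unrecognized, so the OS/app handoff described above can work.
const u = new URL("httpq://www.example.com:443/resource");
console.log(u.protocol, u.hostname, u.port); // httpq: www.example.com 443

// Option (2): a transport designator inside the port fails parsing
// outright, which is the "declare it invalid" failure mode.
try {
  new URL("https://www.example.com:q443/");
} catch (e) {
  console.log("rejected as invalid URL");
}
```

RFC 3986 likewise defines the port as digits only, so strict generic parsers would reject the second form as well.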
|
What about extending the HTTP URI syntax in a way to give connection hints? In the example above, the Origin (and thus Host header and SNI) would come from the part before the "#". On failure to connect, a TCP connect (i.e. without the hint) could also be used. Cache entries would not include the part following the "#" as part of the cache key. This is a variant of option 2, but with the difference that the Origin and object cache keys aren't impacted. This is only appropriate for secured connections (e.g. TLS / HTTPS). |
|
We can't reasonably change the semantics of any of the fields of the URI: we have to assume that any valid field is in use and that an invalid field would trigger rejection. (e.g., your example syntax would turn into |
|
There are a number of different scenarios and interactions that I think need to be considered in this discussion, I’ll try to capture the two main ones below.
For precision and clarity: I come from a CDN background, and so will define some terms and concepts I'm using below – they may or may not perfectly match how other parts of the community use the same terms, hence the brief description of how I'm using them here.
Client: The user agent which is presenting a URI in order to receive the content from the site that that URI is identifying.
Site: A hostname grouping together a set of paths which identify some resources sourced from one or more origin servers; we can assume the resources identified are the same regardless of delivery protocol (http, https, quic).
Origin Server: An authoritative source for the resources within a site. If being fronted by delivery nodes, the origin may not be directly accessible by clients and may not deliver content over the same protocols as the delivery nodes are delivering to the client.
Delivery Node: Client-accessible servers capable of serving resources from one or more sites. In a world-wide distributed CDN there could be hundreds or even thousands of delivery nodes, with request routing being used to direct clients to particular nodes. Different delivery nodes may be on different software versions or have different specializations.
Request routing: the process by which a client requesting the URI gets connected to an appropriate delivery node capable of delivering the requested resource. There are a number of different ways that request routing can be implemented:
DNS based Request Routing: the site hostname gets resolved into one or more IP addresses of the delivery nodes; dependent on where you are (and the state of the delivery nodes etc.) the list of IP addresses may differ.
HTTP 30x based request routing: The client initially connects to a request routing application, which returns a 302 redirect containing a URL pointing at a specific delivery node (by IP or hostname) and an updated path.
Resource based request routing: The results of an API call or the contents of a resource provide a URL directing subsequent requests to a specific delivery node. For instance, the 'base-url' element of a DASH manifest may contain one or more URLs to delivery nodes.
Anycast based request routing: All delivery nodes share the same address and the network routes the connection to the closest node.
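To make the resource-based case concrete, here is a hypothetical DASH manifest fragment (hostnames invented) in which the BaseURL elements perform the routing; note that the URL itself is the only channel available to signal which protocol a given node speaks:

```xml
<!-- Hypothetical DASH manifest fragment: the BaseURL elements direct
     subsequent segment requests to specific delivery nodes, so any
     protocol signal has to fit inside the URL itself. -->
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="dynamic">
  <BaseURL>https://node17.cdn.example.net/live/</BaseURL>
  <BaseURL>https://node42.cdn.example.net/live/</BaseURL>
</MPD>
```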
One particular implication of this definition is that whilst we could assume that all sites will be dual stack, this doesn't imply that every delivery node for that site would be capable of being dual stack. This could be because some nodes haven't been upgraded to be dual stack, or some nodes may be specialised just to do HTTPS or just to do QUIC.
The key scenarios that we need to ensure we have a solution covering:
1. Resource has no knowledge of client capabilities, or delivery node capabilities
This is the case most discussion has been around. A static webpage linking to another host doesn't know whether that site supports both https and quic, nor does it know if the client supports quic or not. The obvious solution is therefore to use an https uri for the link and expect the client to upgrade the connection to quic if it detects (or has remembered) that the server supports it. This approach is simple and effective (particularly for single-server sites).
There are some implications that this approach causes however (particularly when there are a large number of delivery nodes that can serve the site):
- Every delivery node must support both https and quic
- Much of the work on minimizing round trips at connection start is irrelevant if the client first has to make a TCP connection
- The delivery node is placed under higher load, having to perform additional TCP session establishment and TLS negotiations on that temporary TCP connection. For repeat visits to a single-server site the client can remember the QUIC support and potentially optimize the TCP connection away; for a multi-server site the client may get a different delivery node every time and so have to do the capability discovery every time
2. Server has knowledge of client capabilities and wishes to direct the client to another delivery node for which it knows the capabilities
When either 30x or Resource based request routing is being performed the server can know if the client is quic capable (e.g. it is communicating over quic already) and also know if the target delivery node is quic capable. In fact for high throughput I would expect that it is desirable to have delivery nodes which are specialized to only do quic and not be dual stack - the two protocols require different code to access the different networking apis or may be able to take advantage of specific hardware acceleration etc.
With a 30x response there may be the option of returning an alt-svc header and hoping the client immediately switches to quic, however with resource based direction then the mechanism would need to work just through what can be learnt via the URL.
You’ve outlined four potential ways in which, given just a URL, QUIC could be used directly:
1. Rely on alt-svc and the assumption that all delivery nodes are dual stack. This limits the flexibility and optimizations that could be done with a node only delivering quic packets.
2. Alt-svc in DNS. This seems a technically viable approach, although I can’t judge what complexities there are in introducing this and allowing applications to access the information. A human reading the URL also can’t tell what protocol it is for. For a browser, same-origin policies may not be tripped when the client switches between protocols (which is probably a good thing).
3. Adding an indicator into the port number (:q443). As others have mentioned, this is almost certain to break existing URL parsers and APIs, which typically use an integer datatype.
4. Define httpq as an explicit scheme. This makes it clear what protocol the urls are for, although it may have issues around same-origin semantics and may sometimes cause issues if the scheme isn’t recognised by the OS/programming language.
It may be that one solution doesn’t fit all the use cases, and the flexibility of having multiple of them is what we end up needing:
1. may be suitable (and possibly even best) when used with web browsers/human shared links and where the browser is expected to use https initially and rely on alt-svc
2. could be used to inform clients that quic is supported and to immediately use quic.
4. could be used for m2m interactions (manifest files, request routing, api responses etc) when the capabilities of the client can be assumed/mandated and explicit control over the protocol is desired.
Thomas
From: Mike Bishop [mailto:notifications@github.com]
Sent: 28 March 2017 06:54
To: quicwg/base-drafts <base-drafts@noreply.github.com>
Cc: Subscribed <subscribed@noreply.github.com>
Subject: Re: [quicwg/base-drafts] HTTP/QUIC without Alt-Svc? (#253)
We've talked about four routes here:
1. Long live TCP. All HTTP sites will be dual-stack, and the authoritative endpoint will be TCP. Clients MUST have a way to find the secure delegation from that TCP endpoint to QUIC, though we might define alternatives to Alt-Svc headers which could be done without TCP. (Alt-Svc in DNS?)
2. Update RFC3986. RFC 3986 explicitly states that "The type of port designated by the port number (e.g., TCP, UDP, SCTP) is defined by the URI scheme." We could update the URI to contain a protocol designator, whose default is defined by the URI scheme. As it would be omitted in all existing URIs, their interpretation remains unchanged. Then https://www.example.com:q443/ refers to QUIC on UDP 443.
3. Define a new scheme. See pull request.
4. What's it matter? Assume that HTTP/QUIC on port 443 is likely equivalent to HTTP/TCP on port 443 if the cert is valid and call it good.
We explicitly do not consider the same host on different ports equivalent authorities, even if they happen to be listening on both ports with the same cert. Why is TCP 443 vs. UDP 443 any different from TCP 443 vs. TCP 444? (4) seems like a security issue waiting to happen.
I'm going to reverse myself and disagree that (3) is undeployable on the web in the near-term. App-to-app handoff on many (most?) platforms now uses custom URI schemes. Apps that encounter unknown URI schemes ask the OS; the OS is able to invoke appropriately-registered apps or tell the user they need to get a capable app. E.g. launching "nonsense://" produces this on Win10:
[image]<https://cloud.githubusercontent.com/assets/4273797/24390519/18b217aa-134f-11e7-8c13-22a24805081b.png>
Some cursory testing shows that browsers block navigations to URI schemes that don't have an OS-registered handler. But if you have two browsers, one QUIC-capable and one not, when you click an httpq:// link in the non-QUIC browser the OS will launch the QUIC-capable browser for you and you proceed on your merry way. This seems almost exactly what we'd want to have happen.
(2) probably is undeployable, because legacy apps will attempt to parse the URI and declare it invalid. They're semi-used to seeing unknown schemes (xboxliveapp-1297287741://, anyone?), but changes that break the parsers would be seriously painful.
|
|
FYI, I just opened httpwg/http-core#194 |
|
Discussed in Tokyo; this needs to be resolved by the HTTP WG, and we need to incorporate their resolution into the doc. |
|
Discussed in London; still waiting for text from httpwg/http-core#194. |
|
so are we waiting for something else now? |
|
httpwg/http-core#194 was split into separate issues; the piece we need is now httpwg/http-core#237. |
|
Discussed in ZRH. Waiting for HTTP changes to materialize. |
|
Could you please describe in a couple of sentences what you decided in ZRH? Just for people who are curious what is happening with these new promising web standards. IMHO, the best possible solution is to introduce an HTTP-specific DNS record which will describe the available HTTP versions as well as the preferred one. It even could be used to make an actual HTTP/3 connection when a user opens an |
|
Short version: The http-core draft is contemplating changes that describe the authoritative endpoint as being any endpoint that has the right certificate. Alt-Svc and other mechanisms can give you additional ways to discover an authoritative endpoint to send requests to. As to a DNS record, please see the SVCB and HTTPS records (naming still TBD) which have been adopted by DNSop, which support exposing whether an endpoint supports TLS/TCP or QUIC. |
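For reference, such a record might look roughly like this under those drafts (the owner name and TTL are placeholders, and the presentation syntax was still subject to change at the time):

```
; Sketch of an HTTPS record advertising QUIC (h3) support for an origin,
; letting a client attempt HTTP/3 without first receiving Alt-Svc over TCP.
; Priority 1, target "." (the owner name itself), with an alpn hint.
example.com.   3600  IN  HTTPS  1 .  alpn="h3,h2"
```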
|
http-core PR is merged |
|
As a user, I want to have a white list like HSTS. |
|
The HTTP/3 specification is complete and is awaiting publication by the RFC Editor. Feature requests can be sent to the HTTP WG, which will be resuming ownership and maintenance of HTTP/3. |
While HTTP/QUIC doesn't formally require a client to implement Alt-Svc, there's no discovery mechanism other than Alt-Svc provided, so you're not going to get very far without it. That makes a full HTTP(/TLS)/TCP stack mandatory simply to open the connection in the first place. For various reasons (e.g. an embedded device in a controlled environment), a client might want to dispense with TCP altogether when it knows that the endpoint will support QUIC. It might also be useful in testing to be able to directly reference a QUIC endpoint.
Should we mint new scheme(s) that allows direct reference to a resource served exclusively over HTTP/QUIC?
(Note, in HTTP/2 the answer was "no," because HTTP/2 could be negotiated using the same TCP connection. QUIC doesn't have that luxury.)
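For context, the Alt-Svc discovery described above looks roughly like the following exchange over the initial TCP connection (the "hq" token and lifetime are illustrative; "hq" was the draft ALPN token for HTTP/QUIC):

```
HTTP/1.1 200 OK
Alt-Svc: hq=":443"; ma=2592000
```

Only after receiving and caching this header can the client attempt QUIC to UDP port 443 for subsequent requests to the origin.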