Clarify multiple authorization header behaviour or concatenation method #180
Comments
Multiple fields with the same name will be merged by message processors at any point in the chain of recipients. It is very rare for them not to be merged before application code can see them. This is a fact regardless of how we might change the spec. Merged Authorization fields can be parsed: the "scheme SP (anything other than comma)" is the indicator. Nevertheless, this is rarely used in practice because each implementation typically only expects one scheme to be used even if multiple are offered. But it will work if the entire stack agrees. |
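The splitting heuristic described here ("scheme SP" after a comma marks the start of the next credential) could be sketched roughly like this. This is an illustration, not anything from the spec, and the naive version shown breaks on commas inside quoted strings:

```python
import re

# Rough sketch of the heuristic: a new credential begins at a comma followed
# by "scheme SP", i.e. a token that is NOT immediately followed by "="
# (which would instead be an auth-param name).
TOKEN = r"[!#$%&'*+.^_`|~0-9A-Za-z-]+"

def split_credentials(merged: str) -> list[str]:
    # split on commas whose next token is followed by whitespace, not "="
    boundary = re.compile(r",\s*(?=" + TOKEN + r"\s+[^=\s])")
    return [part.strip() for part in boundary.split(merged)]

# Caveat: this also splits on commas inside quoted strings.
print(split_credentials('Basic dGVzdDoxMjM=, Bearer mF_9.B5f-4.1JqM, realm="x"'))
# ['Basic dGVzdDoxMjM=', 'Bearer mF_9.B5f-4.1JqM, realm="x"']
```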
AFAIU, none of these statements are true. |
@royfielding what I'm finding is that CDNs, LBs, and proxies all do it differently. Many simply reject the request because the Authorization header is not strictly well-formed after the comma merge. Others strip the field once parsing fails partway. @reschke given the following:
the merge is therefore invalid because of the space in the second auth-scheme, and the auth-param is invalid:
removing the auth-schemes to make it compliant results in a likewise ambiguous header because of the duplicate
finally, even if the ambiguity can be overcome, this is likewise invalid because it is either token68 or auth-param; it is ambiguous because token68 can be suffixed with "="
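To illustrate the kind of breakage (with made-up token values, not the thread's stripped examples): merging two Bearer credentials per generic RFC 7230 rules yields a value that no credentials parser can make sense of:

```python
# Two hypothetical Authorization fields (made-up tokens), each a Bearer
# credential in token68 form:
headers = [
    ("Authorization", "Bearer mF_9.B5f-4.1JqM="),
    ("Authorization", "Bearer kXy.7Qw-2.8PzR="),
]

# Generic RFC 7230 merging joins same-named fields with a comma:
merged = ", ".join(value for _, value in headers)
print(merged)  # Bearer mF_9.B5f-4.1JqM=, Bearer kXy.7Qw-2.8PzR=

# The merged value cannot be parsed as one credential:
#  - after the comma, "Bearer kXy..." would have to be an auth-param,
#    but it contains a space and no "=" between a name and a value;
#  - the trailing "=" is legal for token68 yet illegal for auth-param,
#    so a parser cannot even tell which production it is looking at.
```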
|
This
is malformed already because the parameter value is missing. Recombination into a single field value just goes from "broken" to "broken". Note:
That is incorrect: auth-param requires a value, so it can't end in "=". |
@colinbendell does this mean we have another instance of a header field that contains commas and is supposed to exist in multiple occurrences? We previously had only set-cookie doing this! Adding exceptions to generic parsers is a real pain :-( Are these schemes using commas already deployed, or is it still possible to change the delimiter or to require quotes around the field value? |
Using commas is not a problem, as long as auth-param syntax is followed. |
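As a rough illustration of that rule, here is a simplified checker for the rfc7235 credentials grammar. This is a sketch, not a conformant parser: quoted-string handling is naive and the list rules are approximated:

```python
import re

# Simplified rfc7235 credentials grammar (sketch only).
TOKEN   = r"[!#$%&'*+.^_`|~0-9A-Za-z-]+"
TOKEN68 = r"[A-Za-z0-9._~+/-]+=*"
PARAM   = TOKEN + r"\s*=\s*(?:" + TOKEN + r'|"[^"]*")'
CREDENTIALS = re.compile(
    TOKEN + r"(?:\s+(?:" + TOKEN68 + r"|" + PARAM + r"(?:\s*,\s*" + PARAM + r")*))?"
)

def is_valid_credentials(value: str) -> bool:
    return CREDENTIALS.fullmatch(value) is not None

print(is_valid_credentials("Basic dGVzdDoxMjM="))           # True: token68
print(is_valid_credentials('Digest realm="x", nonce="y"'))  # True: auth-params
print(is_valid_credentials("Bearer a=, Basic b="))          # False: a merge
```

Commas are fine in the auth-param form, as the `Digest` example shows; it is the merged two-scheme value that fails.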
FWIW, I confused
Furthermore: I agree it's unfortunate that the type of brokenness is not exactly the same, but that's how things are, and I don't think we can do anything about it. Also, coming back to:
That's not how HTTP authentication works. The server can send multiple challenges, but the client needs to pick one of these. |
No. From the ABNF: These are valid Authorization values:
The latter is an example of
@reschke, I think you are confusing the legacy |
Not sure I follow what you mean by "it doesn't support list syntax". [aside: s/Authenticate/Authorization/g ] |
This is worth discussing. While HTTP authorization is intended for humans to validate access to the destination origin, the definition of origin has become opaque. Is the origin the service worker in the browser? Is it the network stack in the OS (in examples like NSURLSession, where the client application hands off the HTTP work to the OS)? Is it the first surrogate-proxy or CDN that the client's device connects to? Is it the surrogate-proxy shield that this CDN subsequently relays to? Is it the application load balancer that the CDN connects to? Or the front-end web service that the load balancer connects to? HTTP proxies are abundant. It is not just the human at the end that needs authorizing; it is each layer in the transaction.

I would submit that, broadly, the objective is to ensure that an HTTP receiver can authorize the HTTP sender. When there are many senders and receivers in the chain of a request, it is not unreasonable to require each layer to add its own authorization. This is to safeguard against tampering and unauthorized manipulation. It is also to ensure that security infrastructure is not sidestepped. An origin application should only accept requests that come from an authorized human and have gone through authorized HTTP proxies.

MITM attacks are a very real and nebulous topic. There are examples of service workers hijacking payload requests from the browser, antivirus applications manipulating TLS requests, ISPs tampering with requests, unauthorized CDNs being used, etc. [aside: TLS doesn't solve this; MITM is a real thing, and I'm tracking 3-7% of TLS traffic with evidence that it has been MITM'ed.]

Here is a very real example of how many proxies are commonly seen:
Each of those proxies has full control of the request and response stream. A good origin application is aware of its infrastructure and can expect demonstrated Authorization from each leg of the communication, including the human's bearer. If any are missing, the request should be rejected because the safeguards provided by the caching infrastructure are being circumvented. [Layer 5 TLS mutual authentication only allows you to verify the last leg of the chain and that the sender is valid.] In short, what I want to resolve is the inconsistencies with |
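The rejection rule described in this comment could be sketched as follows; the expected scheme names ("AppSig", "CDN-Key") are hypothetical:

```python
# Sketch: the origin knows which layers of its infrastructure should have
# demonstrated Authorization, and rejects the request unless every leg is
# present. Scheme names here are made up for illustration.
EXPECTED_SCHEMES = {"Bearer", "AppSig", "CDN-Key"}

def authorize_chain(authorization_values: list[str]) -> bool:
    # take the auth-scheme (the token before the first space) of each value
    presented = {value.split(" ", 1)[0] for value in authorization_values}
    return EXPECTED_SCHEMES <= presented

print(authorize_chain(["Bearer t0k", "AppSig k=1", "CDN-Key 9f8e"]))  # True
print(authorize_chain(["Bearer t0k"]))  # False: app and CDN legs missing
```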
@wtarreau, Short answer: It's a mess. The implementations in the wild are all over the place, since these are old RFCs dating back to the 90s. I see examples of |
No, you just did :-) I said "auth-param" does not allow trailing "=", and then you pointed to token68 (which indeed does). So there's no ambiguity. |
The header field is not defined to use list syntax, see https://greenbytes.de/tech/webdav/rfc7235.html#header.authorization and https://greenbytes.de/tech/webdav/rfc7235.html#challenge.and.response:
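For reference, the relevant ABNF from rfc7235 reads:

```
Authorization = credentials
credentials   = auth-scheme [ 1*SP ( token68 / #auth-param ) ]
auth-scheme   = token
auth-param    = token BWS "=" BWS ( token / quoted-string )
token68       = 1*( ALPHA / DIGIT / "-" / "." / "_" / "~" / "+" / "/" ) *"="
```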
|
FWIW, I would totally support activities to extend/fix/repair HTTP auth. So far this hasn't happened because it seemed that UA vendors aren't interested. If that would change, that would be great. But right now I don't see an issue with the actual spec. |
OK great, thanks Julian for checking! |
OK. I think we're both confusing ourselves. I thought you had said: Aggravating the situation is that there is language in rfc7235 that refers to "Authorization fields" (plural), which suggests that it should be permissible to have multiple
I'd like to make this happen. I'm increasingly concerned about the chain of trust and I'm currently building hacks around the spec that only work in the area for which I have control. What would the next steps be here? |
Colin, RFC6750 doesn't allow the use of
Are you seeing traffic in the real world that does this at scale, or are you trying to support a new use case? Regarding authorising other parties --
Thanks for that clarification. I wasn't aware that the
I am, though we are working around it. The first use case where I've encountered this is when we have APIs using the Authorization header as well as the end-user's Authorization header: user -> application -> third-party-api -> origin. In this case there is an end-user being authenticated to our origin, but the application also utilizes a third-party API that itself uses the Authorization header. The nuance here is that the third-party API acts as a pass-through to our origin, relaying the Authorization header. (The easy workaround was to a) not use the third-party broker, or b) switch to
While this example is somewhat limited, I am also pursuing this use case for our own work because API authorizations and surrogate-proxy authorizations are a critical, unsecured attack surface.
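A minimal sketch of the pass-through relay described above, under the assumption that the broker's own credential has to move to a separate, made-up header name, since only one Authorization field is safe to send today:

```python
# Sketch only: all names and token values here are hypothetical.
def relay_headers(incoming: dict) -> dict:
    return {
        # relay the end-user's credential to the origin untouched
        "Authorization": incoming["Authorization"],
        # the third-party broker's own credential, under a made-up header name
        "X-Broker-Authorization": "Bearer broker-service-token",
    }

print(relay_headers({"Authorization": "Bearer user-jwt"}))
```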
Perhaps, for simplicity and adoption, it is better to consider a new |
you could argue that today's notion of the 'user-agent' is, itself, ambiguous. is that the human, the javascript in the browser, the service worker in the browser, the browser itself, or the os? anecdotally, I once worked for a large satellite-music company many years ago. They had different revenue models based on the chain of access. Users acquired subscriptions, which would change depending on the application (car v. phone), which depended on the hardware (aftermarket v. car manufacturer). as such, hardware, software, and users all had to be authorised in the chain of any request, both to compensate recording artists and to bill the manufacturers or the users appropriately. IIRC, we ultimately ended up building our own http-headers because of the aforementioned limitations and ambiguity of the
I've flagged this for discussion in Montreal. |
Discussed in Montreal; close as out of scope. |
In rfc7235, the Authorization header has a single auth-scheme with comma-separated auth-params. Yet, per rfc7230, multiple headers with the same name should be merged with a comma, with few exceptions (Authorization is not one of them). There are many situations (see below) where multiple Authorization headers are needed. Concatenating the values into the auth-params section has three challenges:
However, there are legacy implications. HTTP engines like nginx enforce a single Authorization header and will reject any request with multiple headers, even if proxying. (This then requires clever applications to do pre- and post-processing of the Authorization header.) Surrogate proxies and their ilk will have implications with any changes or clarifications to the standards.

Use Cases
There are many situations where having multiple authorization headers is useful to form a chain of trust. Generally, the objective is to distinguish between the layers of authorization. For example, a request could be decorated with the user's specific authorization, which uses a JWT bearer token68, followed by the application-authorization signature from the iOS app; finally, the CDN's surrogate-proxy key is added to the authorization chain using a non-standard auth-scheme.

With this, each layer can be responsible for authorizing one or many links in the chain. The CDN could verify the application's authorization signature and ensure the payload is untampered, while the app can be left to authorize the user's token. Or, if the CDN's authorization is suspect or untrusted, then the application can evaluate the full chain of trust to ensure that the user is valid and that any proxy in the middle is also verified (assuming that shared secrets are increasingly available through the passage of the request).
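A sketch of what this chain could look like on the wire if multiple Authorization fields were explicitly allowed. Only "Bearer" is a registered scheme; "AppSig", "CDN-Key", and all token values are made up for illustration:

```python
# One Authorization field per hop in the chain (user -> app -> CDN).
hops = [
    ("Authorization", "Bearer user-jwt-token68"),         # the human's credential
    ("Authorization", "AppSig keyId=ios-app, sig=abc12"), # app-layer signature
    ("Authorization", "CDN-Key 9f8e7d"),                  # surrogate-proxy key
]

request = "GET /resource HTTP/1.1\r\nHost: origin.example\r\n"
request += "".join(f"{name}: {value}\r\n" for name, value in hops) + "\r\n"
print(request)
```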
This is increasingly important when multiple applications, service workers, CDNs, and SaaS solutions are involved in the chain of the request. For fear of tampering and mutation, I would prefer to define Authorization headers as immutable: they should not be concatenated or mutated by proxies. Multiple Authorization headers should be explicitly allowed as an exception to rfc7230.