Refresh Tokens #18549
cc @erictune |
Here's a more concrete proposal:

OIDC Roles <--> Kubernetes Components Mapping

User Agent: …

Obtaining a Refresh Token

A new … The … After the user has successfully authenticated, they are redirected back to the … The …

Authenticating

Authentication happens in the usual way: the token stored in … The only difference is that …

Refreshing the ID Token

If an ID Token is expired, … The …

Note about browsers and the environments they (don't) live in

It's recognized people are often running …

Summary of Work To Be Done

Given all the above, the high level things that would need to be done to support this proposal are: … |
I'm hesitant to introduce a
We've gone through a lot of iterations in OpenShift for various auth methods, and have settled on an approach that we're pretty happy with. The highlights are:
There's the start of a proposal to upstream parts of that at #17440, and docs around OpenShift auth integrations. I also am not sure how I would want to bake auth endpoints directly into the API server. Even the authn bits we'd like to upstream would probably sit beside the API server... |
Some additional comments:
This is really non-trivial if we want to do this automatically. I don't want to see the browser making unsecured http calls to localhost to transfer this info...
I don't think I'd want to make tokens inspectable. I'd rather have behavior that could be triggered by a token being rejected (like using a refresh_token, or logging in again with a kerberos ticket or client cert, etc).
Doesn't that circumvent the intent of requiring client_secret on token refresh requests? That is in the spec for a reason... |
For the OIDC flow, the token that's stored is the JWT, which is inspectable (but immutable because it's signed). See #10957 |
For the server, sure, but I don't want the client inspecting its own bearer token |
Why not? Is the idea that kubectl should be completely agnostic to what kind of token it has, and what kind of authentication backend is being used? |
By default, yes. I would want mechanisms for obtaining API tokens (stored username/password, refresh token, client cert, kerberos ticket, etc) to be separate from the API token itself. If kubectl has an available mechanism for obtaining an API token, I would like that to be able to be triggered by an authentication error response from the server... not by kubectl trying to inspect a bearer token as a JWT and determining it is about to expire. By reacting to the server rejecting the token, we cover cases that would be missed otherwise (server/client clock skew, invalidated but unexpired token, etc) |
I think in this case, when the api-server starts, it needs to be given both the client-id and client-secret. |
I wasn't so sure about this one myself. This proposal could all work without this, but it's just a convenience thing. That being said, we could make
Very cool - we share a lot of the same goals and ideas in Dex - esp w/r/t an aggregation of authentication mechanisms tying back to a single user's identity.
These auth endpoints would only exist if someone were using the OIDC authn plugin. What I am striving to achieve here is an authentication solution that works with any compliant OIDC provider out-of-the-box, without having to set up a bunch of other pods. If these endpoints were hosted elsewhere, how would they get spun up? Would we require that Kubernetes deployers manually run and configure these binaries? |
Yes exactly |
Ok. That makes sense I think.
Well, I wouldn't want kubectl to try and parse all bearer tokens as JWTs - only in certain conditions, namely in the presence of an accompanying oidc-refresh-token in the config, and/or perhaps a flag or something. I would of course want the kubectl to also refresh in response to certain authentication errors from apiserver - I was just trying to avoid the extra trip when possible by checking preemptively. I don't think it's strictly necessary though. |
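A rough sketch of what that preemptive check could look like, assuming plain Go and treating the bearer token as a JWT only when an accompanying refresh token is configured; the function and values here are illustrative, not part of any existing kubectl code:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

// idTokenExpiringSoon decodes the (unverified) payload segment of a JWT and
// reports whether its exp claim falls within the given leeway. It is only a
// local heuristic; the API server remains the authority on token validity.
func idTokenExpiringSoon(rawToken string, leeway time.Duration) (bool, error) {
	parts := strings.Split(rawToken, ".")
	if len(parts) != 3 {
		return false, fmt.Errorf("not a JWT: expected 3 segments, got %d", len(parts))
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return false, fmt.Errorf("decoding JWT payload: %v", err)
	}
	var claims struct {
		Exp int64 `json:"exp"`
	}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return false, fmt.Errorf("parsing JWT claims: %v", err)
	}
	return time.Now().Add(leeway).After(time.Unix(claims.Exp, 0)), nil
}

func main() {
	// Build a fake token whose exp claim is 30 seconds from now; the header and
	// signature segments are dummies since nothing here verifies them.
	payload, _ := json.Marshal(map[string]int64{"exp": time.Now().Add(30 * time.Second).Unix()})
	fake := "eyJhbGciOiJSUzI1NiJ9." + base64.RawURLEncoding.EncodeToString(payload) + ".sig"

	soon, err := idTokenExpiringSoon(fake, time.Minute)
	fmt.Println(soon, err) // true <nil>
}
```

The failure modes mentioned above (clock skew, invalidated-but-unexpired tokens) are exactly what a local check like this cannot see, which is why reacting to server rejections would still be needed.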
From my understanding of the proposal, the api-server would get the ID token and refresh token, as its role is the oidc client here. So it will send the tokens back to kubectl to store them in kubeconfig. One option I can think of is to let the api-server accept a cert file that enables it to authenticate kubectl with a client certificate during the TLS handshake. |
We could also just have kubectl only hold the refresh token and no auth token. kubectl passes the refresh token along to the API server, which uses that token to request a JWT from the identity provider. Add some caching to prevent excessive round trips. In this scenario kubectl wouldn't need to update its config. |
I don't think that works… refresh tokens don't carry the same signature and introspection guarantees as ID tokens… how would the API server validate the refresh token? I also don't want to allow refresh tokens to be used to obtain access tokens from third party OIDC providers without possession of the client_secret. That's what I meant by circumventing the intent of the spec, which requires access to the client_secret to make use of a refresh token. |
The API server wouldn't introspect the refresh token. It would only use it to make a refresh request to acquire a JWT from the identity provider. The API server would then verify and inspect the JWT, not the refresh token, to authenticate kubectl.
For what I'm suggesting, the API server holds a client_id and client_secret and kubectl never needs to see the auth token. |
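For concreteness, here is roughly what that exchange looks like if the API server performs the refresh_token grant itself; this assumes the golang.org/x/oauth2 package and made-up endpoint and credential values, and is a sketch of the idea under discussion, not an endorsement of it:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2"
)

func main() {
	ctx := context.Background()

	// Hypothetical values: the API server's own client credentials and the
	// provider's token endpoint.
	conf := &oauth2.Config{
		ClientID:     "k8s-apiserver",
		ClientSecret: "apiserver-client-secret",
		Endpoint: oauth2.Endpoint{
			TokenURL: "https://accounts.example.com/token",
		},
	}

	// The refresh token forwarded by kubectl; the client_secret never leaves
	// the API server, which is the part of this flow being debated above.
	ts := conf.TokenSource(ctx, &oauth2.Token{RefreshToken: "refresh-token-from-kubectl"})

	tok, err := ts.Token() // performs the grant_type=refresh_token request
	if err != nil {
		log.Fatalf("refreshing: %v", err)
	}

	// OIDC providers return the new ID token in the id_token field.
	rawIDToken, _ := tok.Extra("id_token").(string)
	fmt.Println("new ID token:", rawIDToken)
}
```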
I don't think it is appropriate to treat a refresh token like an access_token or id_token |
Fair enough. My motivation for thinking of it this way is that we're talking about something stronger than the existing access_token authn plugin. Specifically:
But, I don't want to get too off topic from @bobbyrullo's original proposal. |
I see what you're saying @liggitt, though the attack vector is not clear to me. So then w/r/t refresh_tokens the alternatives I see are:
|
@bobbyrullo starts by laying out a basic premise: refreshing tokens requires a client ID and a secret, and that therefore it should not be done in kubectl. This premise assumes using … That implies using the code flow. But OpenID Connect also supports the Implicit flow, which does not require using a Client ID and secret, IIUC. Can the Implicit flow be used instead? It seems like the main reason not to use the Implicit Flow is concern about the tokens being delivered to the browser, which might be compromised. If the redirect URI is set to localhost (on a port kubectl is listening on), which is explicitly allowed for native clients, then the only risk is that kubectl is compromised. kubectl seems less likely than a browser to be compromised, since it is much less complex and generic. Thoughts? |
Hmm. Looks like Refresh Tokens can't be used with the implicit flow. |
It is a little confusing, but as I read this section of the spec, you can refresh Access Tokens but you cannot count on refreshing ID Tokens. But this proposal talks about getting a new ID Token. Not sure yet if that makes a significant difference. |
@cjcullen FYI |
It seems like the purpose of the … This seems like a different intent than how this proposal is using it. I'm just guessing here, but it seems like once the RP has seen the ID token, and checked the user's email, then after that the OP does not need to be involved. For browser-based interaction, the RP would typically use a cookie to identify the user for the rest of the session between RP and User. |
@liggitt why are you opposed to "unsecured http calls to localhost to transfer this info"? The OpenID Connect spec specifically permits it. |
That is only for the implicit flow, and only for native clients... I would be concerned about using it as part of a code flow with a browser client.
Interesting, not sure how widely that is supported by servers |
That's not my main concern with the implicit flow. My problem with this use of the implicit flow (or any situation where you have one or more clients per end-user) is that it will require a bunch more state in the API Server (or somewhere else in Kubernetes). The API Server needs to validate that an ID token is not just for a particular user but for a particular client as well. In the case where there is one client for all users (the API Server), it's easy - just start up the API Server with a flag telling it what client it is, and it will check claims accordingly. But in a world where there exists a diversity of clients, we need to validate that the ID token was intended to be used to access Kubernetes - otherwise someone authenticating against Google (chosen as an example because it is probably the most widely deployed OIDC IdP) could have their ID token used to access the Kubernetes cluster if some other client was compromised (or malicious). So that means Kubernetes would have to be in the business of keeping track of which clients belong to which users, which brings along a whole set of APIs for managing that state, registering clients to users, disabling clients, etc. |
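In the single-client case described above, the aud check is what an off-the-shelf verifier enforces. A minimal sketch, assuming the coreos/go-oidc library and hypothetical issuer and client-id values:

```go
package main

import (
	"context"
	"log"

	oidc "github.com/coreos/go-oidc/v3/oidc"
)

func main() {
	ctx := context.Background()

	// Hypothetical issuer and the single client ID the API server is started with.
	provider, err := oidc.NewProvider(ctx, "https://accounts.example.com")
	if err != nil {
		log.Fatal(err)
	}
	// The verifier rejects tokens whose aud claim doesn't contain this client ID,
	// in addition to checking the signature, issuer, and expiry.
	verifier := provider.Verifier(&oidc.Config{ClientID: "k8s-apiserver"})

	rawIDToken := "eyJ..." // bearer token presented by kubectl (placeholder)
	idToken, err := verifier.Verify(ctx, rawIDToken)
	if err != nil {
		log.Fatalf("rejecting token: %v", err)
	}
	log.Printf("authenticated subject %q for audience %v", idToken.Subject, idToken.Audience)
}
```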
I still disagree with the API server proxying authorization code / refresh token requests and adding in its confidential client_secret. That goes against the OAuth spec, and I think we should resolve that before moving forward on implementation. |
For people following along the discussion has moved to the PR: #23566 |
@erictune Based on the sig-auth call, will you be able to ask your contact about whether it's reasonable for a server to attach its own client secret to a refresh token submitted to it, so that the token submitter can get back an access token without having the client secret required to use his refresh token? This seems to do an end-run around the two pieces of knowledge that the oauth spec lays out as required for refresh tokens (refresh token and client secret). If he says it's questionable, it may be safer to create a separate token issuer that validates against an OIDC server, but doesn't attach its client secret to any requesting refresh token. |
@liggitt, @deads2k: would your respective concerns be alleviated if I did the following in the refresh token PR:
|
The way I look at it, the core issue is: Is it ok to let an end user (e.g. Resource Owner) have a refresh token? |
I spoke to someone at Google who seems to know a fair bit about OIDC. |
@erictune Are there more details on why that must be the case? |
Thanks for the update, @erictune. Was it understood that in this case the K8s API Server is the relying-party/client, not the CLI? |
I'll check on that. |
AFAIK, K8S API server would be the relying-party (
This would happen on |
Also, using signed JWT assertions the |
Response from my OIDC contact: I asked: Can the CLI and the K8s Server each have their own separate client IDs? The server needs something to use as the client_id for the "aud" claim. The CLI needs to do the refreshing directly with the Id provider, as you said earlier. He replied: Yes, they can both have client ids, and this is new. This is the thing that you and I discussed 2 quarters ago. It has been implemented (on Google). I tested it and it works. It typically goes as follows:
Here's an example of a decoded Id token from Google, with appropriate fields zeroed.
|
So, we need to have a client-id for kubectl (the Dex needs to make sure it supports the azp claim. |
He explained that it does not matter that the kubectl client id is not really a secret. He said "The secret is not really a secret but that does not matter too much since the security actually comes from the short lived authorization code that needs a user consent.". |
@erictune Thanks for the detailed responses and suggestions! What's interesting is that we are in the process of supporting the azp claim and the cross-client authentication stuff (see dexidp/dex#371, and the google implementation that inspired it here), but I didn't think to use it for this particular case (our plan was to use it so that our UI Console could make API requests with the user's identity).

What makes the Google cross-client stuff secure (i.e. mitigates attacks with spurious, malicious clients) is that all the clients are known to each other because they are in the same Google "Project"; we are implementing something similar by explicitly requiring that clients register what other clients they will allow to issue tokens on their behalf. In our implementation, as well as Google's (as far as I understand), this registering of clients is something that is done with client credentials, not user credentials.

So the trick is coming up with a way of creating clients initiated by a user interaction, and somehow, with the API server's and the user's consent, registering that client as one that can mint tokens for the API Server. This part is not part of any standard; in the Google world, this is done through the developer using the dev console to make the clients part of the same project. For us, maybe custom APIs. But this means that this will have to happen out of band of what is in upstream Kubernetes, which is unfortunate, because it means that you can't just plug in any OIDC IdP for K8s; you have to do this out-of-band setup.

Anyhow, there's probably something we can work with here; the rest of the team and I will let this digest. Thanks again for all your thoughtful comments. |
Is he implying that all CLIs for all users should get the same client? That makes me nervous because of the stuff I mentioned here. The attack is: someone malicious uses the not-so-secret client credentials to make a website that is sufficiently attractive (some game or something) that someone will authenticate with it. Now they can use the same credentials that the user thought they were handing over just to play Angry Birds or something to control their cluster. |
The kubectl client secret is not secret. The K8s API server client secret is per-cluster and is secret. |
The RP has to reject a token for the wrong |
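Building on the verifier sketch above, a hedged sketch of what that additional check might look like on the API server side; the azp claim name comes from the OIDC core spec, while the allowed-client set is a hypothetical API server setting, not an existing flag:

```go
package oidcauth

import (
	"fmt"

	oidc "github.com/coreos/go-oidc/v3/oidc"
)

// checkAuthorizedParty runs after the usual verification (signature, iss, exp,
// and aud == the API server's client ID) has succeeded. allowedAzp would come
// from configuration, e.g. a hypothetical API server flag listing the client
// IDs (such as kubectl's) trusted to obtain tokens on the API server's behalf.
func checkAuthorizedParty(idToken *oidc.IDToken, allowedAzp map[string]bool) error {
	var claims struct {
		Azp string `json:"azp"`
	}
	if err := idToken.Claims(&claims); err != nil {
		return fmt.Errorf("parsing claims: %v", err)
	}
	// azp is only set when the token was issued to a client other than the
	// audience; when present, it must name a client the API server trusts.
	if claims.Azp != "" && !allowedAzp[claims.Azp] {
		return fmt.Errorf("token issued to unknown client (azp=%q)", claims.Azp)
	}
	return nil
}
```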
@erictune Can your contact explain why this solution is unsafe or bad? I am OK changing direction with the azp stuff, but I want to internalize what was wrong with the original proposal. |
We talked about this in sig-auth meeting, and we finally have something we all pretty much agree on:
Notice that there's no mention of azp, cross-client stuff - this is intentional; not all IdPs support that and I don't think it's a good idea or necessary to enforce it. But here's the recommended deployment:
@liggitt @erictune - We're good with this approach yes? Note that this handles passing along the token and refreshing it - not initial refresh token population. I'm going to do |
That approach seems good to me.
Would this also check for an existing but expired ID token? What about an ID token the client still thinks is valid, but the server rejects?
When you get to that part, should probably include redirect_uri (some providers might support |
I'd sort of expect the OIDC provider to populate the |
Thanks for getting back, @liggitt
Yes on both counts.
Absolutely. |
Under the new AuthProvider framework, Providers don't have visibility into the entire config file; they can only see (and write to) their own stanza. @cjcullen and @deads2k both explicitly called that out in #23066 (eg, this comment), so I wouldn't want to change that without those folks buying in. |
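For context, a rough sketch of that plugin surface as it existed around this time (the package path, plugin name, and stanza keys here are approximate and from memory): a provider is constructed with only its own config map plus a persister scoped to that same stanza, which is where a refreshed id-token would be written back:

```go
package oidcprovider

import (
	"net/http"

	restclient "k8s.io/kubernetes/pkg/client/restclient"
)

func init() {
	// Each provider registers under a name; kubectl looks it up from the
	// auth-provider stanza of the user's kubeconfig entry.
	if err := restclient.RegisterAuthProviderPlugin("oidc", newOIDCAuthProvider); err != nil {
		panic(err)
	}
}

// newOIDCAuthProvider only ever sees the "config" map of its own stanza
// (e.g. client-id, client-secret, id-token, refresh-token) and a persister
// scoped to that same stanza -- not the rest of the kubeconfig.
func newOIDCAuthProvider(clusterAddress string, cfg map[string]string, persister restclient.AuthProviderConfigPersister) (restclient.AuthProvider, error) {
	return &oidcAuthProvider{cfg: cfg, persister: persister}, nil
}

type oidcAuthProvider struct {
	cfg       map[string]string
	persister restclient.AuthProviderConfigPersister
}

func (p *oidcAuthProvider) WrapTransport(rt http.RoundTripper) http.RoundTripper {
	// Wrap the transport to attach (and, when expired, refresh) the id-token,
	// persisting updated values back to the stanza via p.persister.Persist(p.cfg).
	return rt
}

func (p *oidcAuthProvider) Login() error { return nil }
```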
A little ways back, support was added to the APIServer for authenticating with JWTs obtained from OpenID Identity Providers ( #10957 ). However, these tokens tend to be short-lived, so we'd like to add support for refresh tokens.
The obvious place for this at first glance is kubectl, but that presents a number of problems: a refresh token request requires a client ID and secret; surely we don't want to distribute the API Server client secret to every user who wants to use the command line?

So should the APIServer instead be able to consume Refresh tokens? If that's the case, it would seem that it would need to have an endpoint where someone can go to obtain one in the first place (e.g., they navigate to some URL, do the OAuth2 dance with their OIDC IDP, and end up back at the APIServer on a page that displays the refresh token, which they can then embed in their kubectl config file). Does this approach make more sense?
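To sketch the "OAuth2 dance" half of that question (assuming golang.org/x/oauth2 and made-up endpoints and credentials), the hosted endpoint would run a standard authorization code flow and request offline access so that the provider returns a refresh token to show the user:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2"
)

func main() {
	ctx := context.Background()

	// Hypothetical values for the API Server acting as the OAuth2/OIDC client.
	conf := &oauth2.Config{
		ClientID:     "k8s-apiserver",
		ClientSecret: "apiserver-client-secret",
		RedirectURL:  "https://k8s.example.com/oidc/callback",
		Scopes:       []string{"openid", "email"},
		Endpoint: oauth2.Endpoint{
			AuthURL:  "https://accounts.example.com/auth",
			TokenURL: "https://accounts.example.com/token",
		},
	}

	// Step 1: send the user's browser here; AccessTypeOffline asks the provider
	// to include a refresh token in the token response.
	fmt.Println("visit:", conf.AuthCodeURL("random-state", oauth2.AccessTypeOffline))

	// Step 2: on the callback, exchange the returned code for tokens and show
	// the refresh token so the user can paste it into their kubectl config.
	code := "code-from-callback" // placeholder
	tok, err := conf.Exchange(ctx, code)
	if err != nil {
		log.Fatalf("exchanging code: %v", err)
	}
	fmt.Println("refresh token:", tok.RefreshToken)
}
```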
Thanks!
Bobby
cc: @ericchiang, @bcwaldon, @philips @yifan-gu