Refresh Tokens #18549

Closed · bobbyrullo opened this issue Dec 11, 2015 · 71 comments · Fixed by #25270

Labels: priority/backlog (Higher priority than priority/awaiting-more-evidence.)

@bobbyrullo
Contributor

A little ways back, support was added to the APIServer for authenticating with JWTs obtained from OpenID Identity Providers ( #10957 ). However, these tokens tend to be short-lived, so we'd like to add support for refresh tokens.

The obvious place for this at first glance is kubectl, but that presents a number of problems: a refresh token request requires a client ID and secret, and surely we don't want to distribute the API Server client secret to every user who wants to use the command line?

So should the APIServer instead be able to consume refresh tokens? If that's the case, it would seem that it would need an endpoint where someone can go to obtain one in the first place (e.g., they navigate to some URL, do the OAuth2 dance with their OIDC IdP, and end up back at the APIServer on a page that displays the refresh token, which they can then embed in their kubectl config file). Does this approach make more sense?

Thanks!

Bobby
cc: @ericchiang, @bcwaldon, @philips @yifan-gu

@yifan-gu
Contributor

cc @erictune

@davidopp added the priority/backlog label Dec 13, 2015
@bobbyrullo
Contributor Author

Here's a more concrete proposal:

OIDC Roles <--> Kubernetes Components Mapping

User Agent: kubectl
Relying Party (client): apiserver
Identity Provider: Any OpenID Connect IdP - Google, Dex, etc.

Obtaining a Refresh Token

A new login command is added to kubectl; when a user runs kubectl login, the user's browser is launched and navigated to a new endpoint on the apiserver - for the purposes of this document, call it "/oidc-login".

The /oidc-login endpoint will initiate an OpenID Connect auth flow with the IdP it has been configured with by redirecting the user to the appropriate authentication endpoint on the IdP; the redirect URL will have the "offline_access" scope added to it so that a refresh token can be obtained.

After the user has successfully authenticated, they are redirected back to the /oidc-login endpoint with the access token, which the apiserver can exchange with the IdP for a refresh token and an ID Token (i.e., a JWT).

The apiserver finally responds by writing the ID and refresh tokens in the body of the HTTP response; kubectl consumes this (TODO: how does kubectl get this from the browser it opened?) and stores the ID Token in the user/token field of the kubeconfig, and the refresh token in a new field (eg. "user/oidc_refresh_token")
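
To make the proposed flow concrete, here is a minimal, hypothetical sketch (not actual apiserver code) of what the /oidc-login callback step could look like, assuming a standard code flow where the redirect carries an authorization code and using the golang.org/x/oauth2 package; the handler name, URLs, and client values are illustrative placeholders only.

// Hypothetical sketch of the proposed /oidc-login callback, using golang.org/x/oauth2.
// All names and URLs below are placeholders, not real apiserver code.
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/oauth2"
)

var oidcConf = &oauth2.Config{
	ClientID:     "APISERVER_CLIENT_ID",     // the apiserver is the OIDC client in this proposal
	ClientSecret: "APISERVER_CLIENT_SECRET", // held only by the apiserver, never distributed to users
	RedirectURL:  "https://apiserver.example.com/oidc-login",
	Scopes:       []string{"openid", "email", "offline_access"}, // offline_access requests a refresh token
	Endpoint: oauth2.Endpoint{
		AuthURL:  "https://idp.example.com/auth",
		TokenURL: "https://idp.example.com/token",
	},
}

func oidcLoginCallback(w http.ResponseWriter, r *http.Request) {
	code := r.URL.Query().Get("code")
	tok, err := oidcConf.Exchange(r.Context(), code) // trade the authorization code for tokens
	if err != nil {
		http.Error(w, "token exchange failed", http.StatusBadGateway)
		return
	}
	idToken, _ := tok.Extra("id_token").(string)
	// The proposal has the apiserver hand these back so the user (or kubectl) can store them.
	fmt.Fprintf(w, "id_token: %s\nrefresh_token: %s\n", idToken, tok.RefreshToken)
}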

Authenticating

Authentication happens in the usual way: the token stored in user/token is passed along on requests to the APIServer, which does the usual OIDC validation.

The only difference is that kubectl first inspects the OIDC token to see if it has expired (or will expire very soon) and if so, initiates a refresh flow (see below) to obtain a new ID token.
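
Purely for illustration, a client-side expiry check like the one described could look roughly like the Go sketch below. This is not kubectl code; it simply decodes the JWT payload without verifying the signature (full validation still happens on the apiserver).

// Illustrative sketch: peek at a JWT's "exp" claim without verifying it.
package main

import (
	"encoding/base64"
	"encoding/json"
	"errors"
	"strings"
	"time"
)

func idTokenExpiresSoon(rawJWT string, leeway time.Duration) (bool, error) {
	parts := strings.Split(rawJWT, ".")
	if len(parts) != 3 {
		return false, errors.New("not a JWT")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return false, err
	}
	var claims struct {
		Exp int64 `json:"exp"` // expiry as a Unix timestamp
	}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return false, err
	}
	return time.Now().Add(leeway).Unix() >= claims.Exp, nil
}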

Refreshing the ID Token

If an ID Token is expired, kubectl will attempt to refresh it before making the request to the apiserver.

kubectl makes a request to the apiserver on a new /oidc-refresh endpoint (or we can somehow repurpose the /oidc-login), passing along the refresh token from the kubeconfig.

The apiserver in turn makes a refresh request to the IdP, and upon successful exchange of tokens, passes the new ID token back to kubectl.

kubectl then makes the request to the apiserver as per usual.
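
As a rough sketch of the refresh step (assuming golang.org/x/oauth2; the function name and wiring are hypothetical), the component holding the client credentials (the apiserver, under this proposal) could exchange the refresh token like this:

// Hypothetical sketch of the refresh exchange, using golang.org/x/oauth2.
package main

import (
	"context"
	"fmt"

	"golang.org/x/oauth2"
)

func refreshIDToken(ctx context.Context, conf *oauth2.Config, refreshToken string) (string, error) {
	// TokenSource performs the refresh grant against the IdP's token endpoint,
	// sending conf.ClientID and conf.ClientSecret along with the refresh token.
	ts := conf.TokenSource(ctx, &oauth2.Token{RefreshToken: refreshToken})
	tok, err := ts.Token()
	if err != nil {
		return "", err
	}
	idToken, ok := tok.Extra("id_token").(string)
	if !ok {
		return "", fmt.Errorf("IdP response did not include an id_token")
	}
	return idToken, nil
}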

Note about browsers and the environments they (don't) live in.

It's recognized that people are often running kubectl on remote machines where they aren't necessarily able to launch a browser; for those cases, users should be able to navigate to the apiserver's /oidc-login endpoint directly from their own machines and copy & paste the refresh token into their kubeconfig by editing it in a terminal.

Summary of Work To Be Done

Given all the above, the high level things that would need to be done to support this proposal are:

  1. Create the /oidc-login endpoint on the apiserver, which
    1. initiates OIDC Auth flow
    2. responds to callback from OIDC Auth Flow, does token exchange and publishes results to page.
  2. Add refresh_token to kubeconfig
  3. Create kubectl login command
  4. Make kubectl "OIDC Aware", meaning that it checks for token expiration before making requests, and knows how to refresh them.

@liggitt
Member

liggitt commented Dec 14, 2015

I'm hesitant to introduce a kubectl login command that only deals with one auth method. There are many authentication methods we'll want to allow (OIDC, x509 client cert, auth proxy, kerberos, etc). I would expect kubectl login to:

  • accept username/password (or prompt) for basic-auth logins
  • auto-login for SPNEGO/kerberos and client-cert logins
  • redirect to a web flow for OIDC-based logins

We've gone through a lot of iterations in OpenShift for various auth methods, and have settled on an approach that we're pretty happy with. The highlights are:

  1. Integrated OAuth server that funnels various auth integrations into an OAuth token flow:
    • authenticates identities via many different methods (auth proxy, basic auth, LDAP, OIDC)
      • can surface web-based login flow (or redirect to third party web-based login flow) for browser clients
      • can send WWW-Authenticate challenges to CLI/API clients
    • maps authenticated identity to user (allows different auth methods to identify the same user)
    • mints API token for user
  2. API access uses token auth

There's the start of a proposal to upstream parts of that at #17440 and doc around OpenShift auth integrations

I also am not sure how I would want to bake auth endpoints directly into the API server. Even the authn bits we'd like to upstream would probably sit beside the API server...

@liggitt
Member

liggitt commented Dec 14, 2015

Some additional comments:

The apiserver finally responds by writing the ID and refresh tokens in the body of the HTTP response, and kubectl consumes this (TODO: how does kubectl get this from the browser it opened?)

This is really non-trivial if we want to do this automatically. I don't want to see the browser making unsecured http calls to localhost to transfer this info...

The only difference is that kubectl first inspects the OIDC token to see if it has expired (or will expire very soon) and if so, initiates a refresh flow (see below) to obtain a new ID token.

I don't think I'd want to make tokens inspectable. I'd rather have behavior that could be triggered by a token being rejected (like using a refresh_token, or logging in again with a kerberos ticket or client cert, etc).

kubectl makes a request to the apiserver on a new /oidc-refresh endpoint (or we can somehow repurpose the /oidc-login), passing along the refresh token from the kubeconfig.
The apiserver in turn makes a refresh request to the IdP, and upon successful exchange of tokens, passes the new ID token back to kubectl.

Doesn't that circumvent the intent of requiring client_secret on token refresh requests? That is in the spec for a reason...

@bobbyrullo
Contributor Author

I don't think I'd want to make tokens inspectable. I'd rather have behavior that could be triggered by a token being rejected (like using a refresh_token, or logging in again with a kerberos ticket or client cert, etc).

For the OIDC flow, the token that's stored is the JWT, which is inspectable (but immutable, because it's signed); see #10957

@liggitt
Member

liggitt commented Dec 14, 2015

For the OIDC flow, the token that's stored is the JWT, which is inspectable (but immutable, because it's signed); see #10957

For the server, sure, but I don't want the client inspecting its own bearer token

@bobbyrullo
Contributor Author

For the server, sure, but I don't want the client inspecting its own bearer token

Why not? Is the idea that kubectl should be completely agnostic to what kind of token it has, and what kind of authentication backend is being used?

@liggitt
Member

liggitt commented Dec 14, 2015

Why not? Is the idea that kubectl should be completely agnostic to what kind of token it has, and what kind of authentication backend is being used?

By default, yes. I would want mechanisms for obtaining API tokens (stored username/password, refresh token, client cert, kerberos ticket, etc) to be separate from the API token itself. If kubectl has an available mechanism for obtaining an API token, I would like that to be able to be triggered by an authentication error response from the server... not by kubectl trying to inspect a bearer token as a JWT and determining it is about to expire. By reacting to the server rejecting the token, we cover cases that would be missed otherwise (server/client clock skew, invalidated but unexpired token, etc)
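
A minimal sketch of the behaviour being described here (reacting to a rejected token rather than inspecting it), with hypothetical names and no claim to match any actual client implementation:

// Hypothetical sketch: retry once on 401 by asking a pluggable refresher for a new token.
package main

import "net/http"

type retryOn401 struct {
	base      http.RoundTripper
	token     string
	refreshFn func() (string, error) // e.g. use a refresh token, re-login with kerberos/cert, etc.
}

// RoundTrip assumes body-less requests (GET, DELETE, ...) for simplicity.
func (t *retryOn401) RoundTrip(req *http.Request) (*http.Response, error) {
	attempt := req.Clone(req.Context())
	attempt.Header.Set("Authorization", "Bearer "+t.token)
	resp, err := t.base.RoundTrip(attempt)
	if err != nil || resp.StatusCode != http.StatusUnauthorized {
		return resp, err
	}
	resp.Body.Close()

	newTok, err := t.refreshFn() // only triggered by the server rejecting the old token
	if err != nil {
		return nil, err
	}
	t.token = newTok

	retry := req.Clone(req.Context())
	retry.Header.Set("Authorization", "Bearer "+t.token)
	return t.base.RoundTrip(retry)
}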

@yifan-gu
Contributor

Doesn't that circumvent the intent of requiring client_secret on token refresh requests? That is in the spec for a reason...

I think in this case, when the api-server starts, it needs to be given both the client-id and client-secret.

@bobbyrullo
Contributor Author

I'm hesitant to introduce a kubectl login command that only deals with one auth method. There are many authentication methods we'll want to allow (OIDC, x509 client cert, auth proxy, kerberos, etc).

I wasn't so sure about this one myself. This proposal could all work without it; it's just a convenience thing. That being said, we could make kubectl login work more generally (keeping all the different implementations you mention in mind) and have OIDC be just the very first implementation.

and have settled on an approach that we're pretty happy with. The highlights are:

Very cool - we share a lot of the same goals and ideas in Dex - esp w/r/t an aggregation of authentication mechanisms tying back to a single user's identity.

I also am not sure how I would want to bake auth endpoints directly into the API server. Even the authn bits we'd like to upstream would probably sit beside the API server...

These auth endpoints would only exist if someone were using the OIDC authn plugin.

What I am striving to achieve here is an authentication solution that works with any compliant OIDC provider out-of-the-box, without having to set up a bunch of other pods. If these endpoints were hosted elsewhere, how would they get spun up? Would we require that Kubernetes deployers manually run and configure these binaries?

@bobbyrullo
Contributor Author

Doesn't that circumvent the intent of requiring client_secret on token refresh requests? That is in the spec for a reason...

I think in this case, when the api-server starts, it needs to be given both the client-id and client-secret

Yes exactly

@bobbyrullo
Contributor Author

By default, yes. I would want mechanisms for obtaining API tokens (stored username/password, refresh token, client cert, kerberos ticket, etc) to be separate from the API token itself.

Ok. That makes sense I think.

If kubectl has an available mechanism for obtaining an API token, I would like that to be able to be triggered by an authentication error response from the server... not by kubectl trying to inspect a bearer token as a JWT and determining it is about to expire.

Well, I wouldn't want kubectl to try and parse all bearer tokens as JWTs - only in certain conditions, namely in the presence of an accompanying oidc-refresh-token in the config, and/or perhaps a flag or something.

I would of course want the kubectl to also refresh in response to certain authentication errors from apiserver - I was just trying to avoid the extra trip when possible by checking preemptively. I don't think it's strictly necessary though.

@yifan-gu
Contributor

@liggitt

This is really non-trivial if we want to do this automatically. I don't want to see the browser making unsecured http calls to localhost to transfer this info...

From my understanding of the proposal, the api-server would get the ID token and refresh token since its role is the OIDC client here. So it would send the tokens back to kubectl to store in kubeconfig. One option I can think of is to let the api-server accept a cert file that enables it to authenticate kubectl (via client certificates) during the TLS handshake.

@ericchiang
Contributor

We could also just have kubectl only hold the refresh token and no auth token.

kubectl passes the refresh token along to the API server, which uses that token to request a JWT from the identity provider. Add some caching to prevent excessive round trips.

In this scenario kubectl wouldn't need to update its config.

@liggitt
Member

liggitt commented Dec 16, 2015

I don't think that works… refresh tokens don't carry the same signature and introspection guarantees as ID tokens… how would the API server validate the refresh token?

I also don't want to allow refresh tokens to be used to obtain access tokens from third party OIDC providers without possession of the client_secret. That's what I meant by circumventing the intent of the spec, which requires access to the client_secret to make use of a refresh token.

@ericchiang
Contributor

I don't think that works… refresh tokens don't carry the same signature and introspection guarantees as ID tokens… how would the API server validate the refresh token?

The API server wouldn't introspect the refresh token. It would only use it to make a refresh request to acquire a JWT from the identity provider. The API server would then verify and inspect the JWT, not the refresh token, to authenticate kubectl.

I also don't want to allow refresh tokens to be used to obtain access tokens from third party OIDC providers without possession of the client_secret. That's what I meant by circumventing the intent of the spec, which requires access to the client_secret to make use of a refresh token.

For what I'm suggesting, the API server holds a client_id and client_secret and kubectl never needs to see the auth token.

@liggitt
Member

liggitt commented Dec 16, 2015

I don't think it is appropriate to treat a refresh token like an access_token or id_token

@ericchiang
Contributor

I don't think it is appropriate to treat a refresh token like an access_token or id_token

Fair enough.

My motivation for thinking of it this way is that we're talking about something stronger than the existing access_token authn plugin. Specifically:

  1. A refresh token demonstrates that an app has permission to continuously log in as a specific user.
  2. As a trade-off, the app has to check in with the identity provider to ensure the token hasn't been revoked.

But, I don't want to get too off topic from @bobbyrullo's original proposal.

@bobbyrullo
Contributor Author

I see what you're saying @liggitt, though the attack vector is not clear to me.

So then w/r/t refresh_tokens the alternatives I see are:

  1. Every kubectl user gets their own client id and secret. This seems like a pain to manage and is just as insecure (since all the secret bits are in one place).
  2. The APIServer or some other Kubernetes component stores the refresh token. How this flow would work is unclear, but presumably kubectl would have to interact with the refresh-token-holding component to get an ID token... but what credentials would it use? It can't be the JWT, since that has expired.

@erictune
Member

@bobbyrullo starts by laying out a basic premise: refreshing tokens in kubectl requires a client ID and a secret, and therefore it should not be done in kubectl.

This premise implies using the code flow. But OpenID Connect also supports the Implicit flow, which does not require using a Client ID and secret, IIUC. Can the Implicit flow be used instead?

It seems like the main reason not to use Implicit Flow is concern about the tokens being delivered to the browser, which might be compromised. If the redirect URI is set to localhost (on a port kubectl is listening on), which is explicitly allowed for native clients, then the only risk is that kubectl is compromised. Kubectl seems less likely than a browser to be compromised, since it is much less complex and generic.

Thoughts?

@erictune
Member

Hmm. Looks like Refresh Tokens can't be used with the implicit flow.

@erictune
Member

It is a little confusing, but as I read this section of the spec, you can refresh Access Tokens but you cannot count on refreshing ID Tokens. But this proposal talks about getting a new ID Token. Not sure yet if that makes a significant difference.

@erictune
Member

@cjcullen FYI

@erictune
Member

It seems like the purpose of the offline_access scope is to allow the RP to continue to use any Access Tokens it is given as part of the login flow long after the original login flow. For example, if I use OIDC to auth to a photo sharing site (the RP), and it also requests permission to see my facebook page (an Access Token), then without the offline_access scope, the RP can only look at it for a short time, but with the offline_access scope, it can continue to see my facebook page indefinitely after (by refreshing the Access Token).

This seems like a different intent than how this proposal is using it.

I'm just guessing here, but it seems like once the RP has seen the ID token and checked the user's email, the OP does not need to be involved after that. For browser-based interaction, the RP would typically use a cookie to identify the user for the rest of the session between RP and User.

@erictune
Member

@liggitt why are you opposed to "unsecured http calls to localhost to transfer this info"? The OpenID Connect spec specifically permits it.

@erictune
Member

@liggitt it looks like RFC 7636 explains the types of attack you were worried about, and offers a solution.

@liggitt
Member

liggitt commented Jan 20, 2016

@liggitt why are you opposed to "unsecured http calls to localhost to transfer this info"? The OpenID Connect spec specifically permits it.

That is only for the implicit flow, and only for native clients... I would be concerned about using it as part of a code flow with a browser client.

@liggitt it looks like RFC 7636 explains the types of attack you were worried about, and offers a solution.

Interesting, not sure how widely that is supported by servers

@bobbyrullo
Contributor Author

It seems like the main reason not to use Implicit Flow is concern about the tokens being delivered to the browser, which might be compromised

That's not my main concern with the implicit flow. My problem with this use of the implicit flow (or any situation where you have one or more clients per end user) is that it will require a bunch more state in the API Server (or somewhere else in Kubernetes).

The API Server needs to validate that an ID token is not just for a particular user but for a particular client as well. In the case where there is one client for all users (the API Server), it's easy - just start up the API Server with a flag telling it what client it is, and it will check claims accordingly.

But in a world where there exists a diversity of clients, we need to validate that the ID token was intended to be used to access Kubernetes - otherwise someone authenticating against Google (chosen as an example because it is probably the most widely deployed OIDC IdP) could have their ID token used to access the Kubernetes cluster if some other client was compromised (or malicious).

So that means Kubernetes would have to be in the business of keeping track of what clients belong to what users, which brings along a whole set of APIs for managing that state, registering clients to users, disabling clients etc etc etc.

@liggitt
Member

liggitt commented Apr 8, 2016

I still disagree with the API server proxying authorization code / refresh token requests and adding in its confidential client_secret. That goes against the OAuth spec, and I think we should resolve that before moving forward on implementation.

@philips
Contributor

philips commented Apr 18, 2016

For people following along the discussion has moved to the PR: #23566

@deads2k
Contributor

deads2k commented Apr 20, 2016

Okay, after talking to an OIDC expert at my company, I feel much better about this plan.

@erictune Based on the sig-auth call, will you be able to ask your contact whether it's reasonable for a server to attach its own client secret to a refresh token submitted to it, so that the token submitter can get back an access token without having the client secret required to use their refresh token? This seems to do an end-run around the two pieces of knowledge that the OAuth spec lays out as required for refresh tokens (refresh token and client secret).

If he says it's questionable, it may be safer to create a separate token issuer that validates against an OIDC server, but doesn't attach its client secret to any requesting refresh token.

@bobbyrullo
Contributor Author

@liggitt, @deads2k: would your respective concerns be alleviated if I did the following in the refresh token PR:

  • put refresh tokens behind a flag (eg. --oidc-enable-refresh-tokens)
  • in whatever documentation comes out of this, make clear the security implications. Something like:

By turning on the refresh tokens feature, you are giving users of your cluster the ability to have potentially infinitely long-lived credentials which, if compromised, could be used without any other secrets to gain access to the cluster. If you use this feature, be sure to use an IdP which has the ability to revoke refresh tokens, and instruct your users to protect their refresh tokens as if they were passwords (which, for all practical purposes, they are) and tell them what to do in case of a compromise.

@bobbyrullo
Contributor Author

...whether it's reasonable for a server to attach its own client secret to a refresh token submitted to it so that the token submitter can get back an access token...

The way I look at it, the core issue is:

Is it OK to let an end user (i.e. the Resource Owner) have a refresh token?

@erictune
Member

I spoke to someone at Google who seems to know a fair bit about OIDC.
I described the issue to him. His response was that the refresh token should only go from CLI to Identity provider, never to the K8S server. So, we should either do it differently, or agree upon and appeal to some higher authority (no ideas on that).

@philips
Contributor

philips commented Apr 26, 2016

@erictune Are there more details on why that must be the case?

@bobbyrullo
Contributor Author

Thanks for the update, @erictune. Was it understood that in this case the K8s API Server is the relying-party/client, not the CLI?

@erictune
Member

I'll check on that.

@pires
Contributor

pires commented Apr 27, 2016

AFAIK, the K8S API server would be the relying party (RP) and therefore, on access token invalidation/expiration, it would be the one responsible for either:

  • Redirecting the client to the OpenID Provider (OP) for re-authorization, or
  • Having a code for a certain user, requesting a new authorization_code

This would happen in the code flow.

@pires
Contributor

pires commented Apr 27, 2016

Also, using signed JWT assertions, the client secret shouldn't be needed.

@erictune
Member

Response from my OIDC contact:

I asked: Can the CLI and the K8s Server each have their own separate client IDs? The server needs something to use as the client_id for the "aud" claim. The CLI needs to do the refreshing directly with the Id provider, as you said earlier.

He replied:

Yes, they can both have client IDs, and this is new. This is the thing that you and I discussed 2 quarters ago. It has been implemented (on Google). I tested it and it works.

It typically goes as follows:

  1. The CLI asks for an authorization code by launching a browser / asking the user to copy/paste a URL. This URL contains the CLI client ID and a scope (email in this case). The URL points to a dialog box that asks the user to confirm that the CLI can view their email address and know who they are on Google.
    Here's an example: https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&prompt=select_account&response_type=code&client_id=289442766429-e2r34a3m97fg6evjbhedcb0en0g8qubo.apps.googleusercontent.com&access_type=offline&scope=email
  2. The user copy/pastes the authorization code into the CLI. Now the CLI is able to request both an ID token and a refresh token from the IdP (Google here). The parameters of this request are:
    • the CLI client_id
    • the CLI client_secret
    • an audience, which is the K8s API server client_id
    The IdP comes back with an ID token and a refresh token. The ID token has the client ID of the K8s API server as its audience and another field, "azp" (Authorized Presenter), that encodes the CLI client ID.
  3. When the ID token expires, the CLI can request a new one from the IdP using its refresh token, without going through the authorization/user consent screen again.

Here's an example of a decoded Id token from Google, with appropriate fields zeroed.

{
  "aud": "11111111111-382n23n28fj32ne8n2n2r.apps.googleusercontent.com",  // K8s API server client ID
  "iss": "accounts.google.com",
  "email_verified": true,
  "at_hash": "Ws3ergSs2sddcDW1xs",
  "exp": 1448866767,
  "azp": "22222222222-23kn423kjn23kj423lkj2323fcsdf.apps.googleusercontent.com",  // CLI client ID
  "iat": 1448863167,
  "email": "user@example.com",
  "hd": "example.com",
  "sub": "13423049823049823049"
}

@erictune
Member

So, we need to have a client-id for kubectl (the azp). And one for each kubernetes cluster (the aud).

Dex needs to make sure it supports the azp claim.

@erictune
Member

He explained that it does not matter that the kubectl client id is not really a secret. He said "The secret is not really a secret but that does not matter too much since the security actually comes from the short lived authorization code that needs a user consent.".

@bobbyrullo
Contributor Author

@erictune Thanks for the detailed responses and suggestions!

What's interesting is that we are in the process of supporting the azp claim and the cross-client authentication stuff (see dexidp/dex#371, and the Google implementation that inspired it here), but I didn't think to use it for this particular case (our plan was to use it so that our UI console could make API requests with the user's identity).

What makes the Google cross-client stuff secure (i.e., what mitigates attacks with spurious, malicious clients) is that all the clients are known to each other because they are in the same Google "project"; we are implementing something similar by explicitly requiring that clients register which other clients they will allow to issue tokens on their behalf. In our implementation, as well as Google's (as far as I understand), this registering of clients is something that is done with client credentials, not user credentials.

So the trick is coming up with a way of creating clients initiated by a user interaction, and somehow, with the API server's and the user's consent, registering that client as one that can mint tokens for the API Server. This part is not part of any standard; in the Google world, this is done by the developer using the dev console to make the clients part of the same project. For us, maybe custom APIs. But this means that this will have to happen out of band of what is in upstream Kubernetes, which is unfortunate, because it means that you can't just plug in any OIDC IdP for K8s; you have to do this out-of-band setup.

Anyhow, there's probably something we can work with here; the rest of the team and I will let this digest. Thanks again for all your thoughtful comments.

@bobbyrullo
Contributor Author

The secret is not really a secret but that does not matter too much since the security actually comes from the short lived authorization code that needs a user consent

Is he implying that all CLIs for all users should get the same client? That makes me nervous because of the stuff I mentioned here.

The attack is: someone malicious uses the not-so-secret client credentials to make a website that is sufficiently attractive (some game or something) that someone will authenticate with it. Now the attacker can use the same credentials that the user thought they were handing over just to play Angry Birds or something to control their cluster.

@erictune
Member

erictune commented Apr 28, 2016

The attack is: someone malicious uses the not-so-secret client credentials to make a website that is sufficiently attractive (some game or something) that someone will authenticate with it. Now the attacker can use the same credentials that the user thought they were handing over just to play Angry Birds or something to control their cluster.

The kubectl client secret is not secret. The K8s API server client secret is per-cluster and is secret.
The angry birds site could not get the K8s API server client secret. So, any token it gets the IdP to issue will have claim aud: $ANGRY_BIRDS, not aud: $REAL_K8S_APISERVER.

@erictune
Member

erictune commented Apr 28, 2016

The RP has to reject a token for the wrong aud.
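
For illustration, the check being described amounts to something like the sketch below (a hypothetical helper, not apiserver code); whether to additionally restrict azp to a set of known CLI clients is the policy question discussed above.

// Hypothetical sketch of the audience check: accept only tokens minted for this API server.
package main

import "fmt"

func checkAudience(claims map[string]interface{}, apiserverClientID string, trustedAZPs map[string]bool) error {
	aud, _ := claims["aud"].(string) // note: per OIDC, "aud" may also be an array of audiences
	if aud != apiserverClientID {
		return fmt.Errorf("rejecting token: aud %q is not this API server", aud)
	}
	if azp, ok := claims["azp"].(string); ok && len(trustedAZPs) > 0 && !trustedAZPs[azp] {
		return fmt.Errorf("rejecting token: unknown authorized party %q", azp)
	}
	return nil
}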

@erictune self-assigned this Apr 29, 2016
@philips
Contributor

philips commented Apr 29, 2016

@erictune Can your contact explain why this solution is unsafe or bad? I am OK changing direction with the azp stuff but I want to internalize what was wrong with the original proposal.

@bobbyrullo
Contributor Author

bobbyrullo commented May 5, 2016

We talked about this in the sig-auth meeting, and we finally have something we all pretty much agree on:

  • Make a new OIDC AuthProvider (introduced with Client auth provider plugin framework #23066)
  • Users will configure their kubeconfig with a client-id, client-secret, issuer URL, and optionally, some scopes
  • The transport wrapped by WrapTransport will add ID token (if present) as a bearer token (like how it works now using the legacy token field)
  • If there's no ID Token but there is a refresh token, the wrapped transport will do a refresh with it and get a new ID token (and, potentially a new refresh token depending on the provider)

Notice that there's no mention of azp, cross-client stuff - this is intentional; not all IdPs support that and I don't think it's a good idea or necessary to enforce it. But here's the recommended deployment:

  • Register an OIDC Web Client with your IdP for the API Server
  • Register a single OIDC Native client with your IdP for CLIs as one which is allowed to mint tokens for your API Server (in Google Cloud, this means they are part of the same project)
  • Configure your users' kubeconfig like so:
    • client-id, secret: the id and secret of the CLI
    • issuer-url: your IdP URL
    • scopes: add audience:server:client_id:$API_SERVER_CLIENT_ID. This is all that's necessary for the cross-client stuff to work

@liggitt @erictune - We're good with this approach, yes?

Note that this handles passing along the token and refreshing it - not initial refresh token population. I'm going to do kubectl login in another issue/PR.
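
A hedged sketch of the wrapped-transport behaviour described above (field names mirror the proposal text, not necessarily the final auth provider's config keys; the refresh is done with golang.org/x/oauth2):

// Hypothetical sketch of the OIDC auth provider behaviour: use the stored id-token as the
// bearer token, and refresh it via the IdP when it is missing or was rejected.
package main

import (
	"context"
	"fmt"

	"golang.org/x/oauth2"
)

type oidcAuth struct {
	TokenURL     string // token endpoint derived from the issuer URL in the kubeconfig
	ClientID     string // the CLI client registered with the IdP
	ClientSecret string
	IDToken      string // sent as the bearer token when present
	RefreshToken string
}

func (a *oidcAuth) bearerToken(ctx context.Context) (string, error) {
	if a.IDToken != "" {
		return a.IDToken, nil
	}
	if a.RefreshToken == "" {
		return "", fmt.Errorf("no id-token and no refresh-token configured")
	}
	conf := &oauth2.Config{
		ClientID:     a.ClientID,
		ClientSecret: a.ClientSecret,
		Endpoint:     oauth2.Endpoint{TokenURL: a.TokenURL},
	}
	tok, err := conf.TokenSource(ctx, &oauth2.Token{RefreshToken: a.RefreshToken}).Token()
	if err != nil {
		return "", err
	}
	if id, ok := tok.Extra("id_token").(string); ok {
		a.IDToken = id
	}
	if tok.RefreshToken != "" { // some providers rotate refresh tokens
		a.RefreshToken = tok.RefreshToken
	}
	return a.IDToken, nil
}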

@liggitt
Member

liggitt commented May 10, 2016

That approach seems good to me.

If there's no ID Token but there is a refresh token...

Would this also check for an existing but expired ID token? What about an ID token the client still thinks is valid, but the server rejects?

Configure your users' kubeconfig like so ... I'm going to do kubectl login in another issue/PR.

When you get to that part, should probably include redirect_uri (some providers might support urn:ietf:wg:oauth:2.0:oob, others might allow http://localhost, etc) or some way to indicate whether the localhost listener approach is desired, or just open a browser and expect the tokens copy/pasted back to the CLI
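
As a sketch of the "localhost listener" option (illustrative only; the names here are made up), the CLI could listen on a loopback port, use that address as the redirect_uri, and read the authorization code from the callback instead of asking the user to paste it:

// Hypothetical sketch: receive the OAuth2 redirect on a loopback listener.
package main

import (
	"fmt"
	"net"
	"net/http"
)

func authCodeViaLocalhost(openBrowser func(redirectURI string)) (string, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0") // pick any free port
	if err != nil {
		return "", err
	}
	redirectURI := fmt.Sprintf("http://%s/callback", ln.Addr().String())

	codeCh := make(chan string, 1)
	mux := http.NewServeMux()
	mux.HandleFunc("/callback", func(w http.ResponseWriter, r *http.Request) {
		codeCh <- r.URL.Query().Get("code")
		fmt.Fprintln(w, "You may close this window and return to the CLI.")
	})
	srv := &http.Server{Handler: mux}
	go srv.Serve(ln)
	defer srv.Close()

	openBrowser(redirectURI) // caller builds the IdP auth URL with this redirect_uri
	return <-codeCh, nil     // blocks until the IdP redirects the browser back
}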

@liggitt
Member

liggitt commented May 10, 2016

The transport wrapped by WrapTransport will add ID token (if present) as a bearer token (like how it works now using the legacy token field)

I'd sort of expect the OIDC provider to populate the BearerToken field in the kubectl with the ID token for maximum interoperability (along with optionally keeping the id_token/refresh_token/client_id/etc in its own auth provider stanza)

@bobbyrullo
Contributor Author

bobbyrullo commented May 10, 2016

Thanks for getting back, @liggitt

Would this also check for an existing but expired ID token? What about an ID token the client still thinks is valid, but the server rejects?

Yes on both counts.

When you get to that part, should probably include redirect_uri

Absolutely.

@bobbyrullo
Contributor Author

I'd sort of expect the OIDC provider to populate the BearerToken field in the kubectl with the ID token for maximum interoperability (along with optionally keeping the id_token/refresh_token/client_id/etc in its own auth provider stanza)

Under the new AuthProvider framework, providers don't have visibility into the entire config file; they can only see (and write to) their own stanza. @cjcullen and @deads2k both explicitly called that out in #23066 (e.g., this comment), so I wouldn't want to change that without those folks buying in.
