
Create "whoami" service #30784

Open
mlbiam opened this Issue Aug 17, 2016 · 46 comments

@mlbiam

mlbiam commented Aug 17, 2016

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): Looked at the API docs. Discussed on the kubernetes/sig-auth slack channel with @ericchiang and @whitlockjc


Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

Once OIDC is integrated, it would be helpful to have an endpoint that will reply with information about the logged in user, mainly the username, uid, groups and roles. Something similar to OpenShift's /oapi/v1/users/~ service. Ideally this would return json similar to:

{
  "username": "me",
  "uid": "myuid",
  "groups": [
    "admins",
    "devs",
    "etc"
  ],
  "roles": [
    "adminrole",
    "userrole",
    "etc"
  ]
}

That way someone can easily test that OIDC is setup correctly, the dashboard can show who the logged in user is and external tool developers can verify/display the logged in user.

What you expected to happen:

$ curl -H "Authorization: bearer ASDFV..." https://myapi.k8s.io/api/path/to/whoami
{
  "username": "me",
  "uid": "myuid",
  "groups": [
    "admins",
    "devs",
    "etc"
  ],
  "roles": [
    "adminrole",
    "userrole",
    "etc"
  ]
}
@adohe

Member

adohe commented Aug 18, 2016

@smarterclayton IIRC, OpenShift Origin has already implemented this feature.

@mlbiam

mlbiam commented Aug 18, 2016

@adohe Sort of. Origin doesn't have an endpoint that will tell you what groups a user belongs to (at least it didn't a couple of months ago) or the roles. The groups attribute that comes back from /oapi/v1/users/~ doesn't get populated anymore.

@kargakis

Member

kargakis commented Aug 18, 2016

cc: @deads2k

@ericchiang

Member

ericchiang commented Aug 18, 2016

Note that roles are a concept unique to RBAC. They aren't part of other authorizers and aren't appropriate for a general discovery API.

@mlbiam

mlbiam commented Aug 18, 2016

@ericchiang sure. I was thinking of kubernetes/dashboard#964 (comment). For instance, in OpenShift, if I'm not a member of the project's admin role I can't create new pods (at least OOTB). Not sure how that's implemented in OpenShift's console, but it looks like a comprehensive "this is what you have access to" view would be useful.

@deads2k

Contributor

deads2k commented Aug 18, 2016

@ericchiang sure. I was thinking of kubernetes/dashboard#964 (comment). For instance, in OpenShift, if I'm not a member of the project's admin role I can't create new pods (at least OOTB). Not sure how that's implemented in OpenShift's console, but it looks like a comprehensive "this is what you have access to" view would be useful.

I'm not against adding something like openshift's SelfSubjectRulesReview to the rbac group, but it would be specific to rbac, not in the generic authorization API.

I am opposed to exposing roles externally if there's any way to avoid it.

For details, see oc policy can-i --list --loglevel=8.

@deads2k

Contributor

deads2k commented Aug 18, 2016

Once OIDC is integrated, it would be helpful to have an endpoint that will reply with information about the logged in user, mainly the username, uid, groups and roles. Something similar to OpenShift's /oapi/v1/users/~ service. Ideally this would return json similar to:

It sounds like you want to hit the authentication.k8s.io/v1beta1/tokenreviews endpoint we added this release. It gives back the user and groups for a token, and I'd expect it to work for an OIDC token. It would not work for certificate-based users.

I'm not strictly against a whoami alternative to the tokenreviews endpoint, but I'd like to see how much mileage we get out of tokenreviews first.
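
For illustration, a minimal sketch of exercising that endpoint with curl; myapi.k8s.io, ADMIN_TOKEN, and the reviewed token are placeholders, the caller needs permission to create tokenreviews, and the exact response shape may differ by version:

$ cat tokenreview.json
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "TokenReview",
  "spec": { "token": "ASDFV..." }
}
$ curl -X POST \
    -H "Authorization: bearer ADMIN_TOKEN" \
    -H "Content-Type: application/json" \
    --data @tokenreview.json \
    https://myapi.k8s.io/apis/authentication.k8s.io/v1beta1/tokenreviews
{
  "kind": "TokenReview",
  "apiVersion": "authentication.k8s.io/v1beta1",
  "spec": { "token": "ASDFV..." },
  "status": {
    "authenticated": true,
    "user": {
      "username": "me",
      "uid": "myuid",
      "groups": ["admins", "devs", "system:authenticated"]
    }
  }
}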

@deads2k

Contributor

deads2k commented Aug 18, 2016

@kubernetes/sig-auth

@whitlockjc

Contributor

whitlockjc commented Aug 18, 2016

I'm not sure that a /whoami is the right place to do this. The equivalent resources for things like BitBucket, GitHub, OpenID Connect, etc. do not give the group/role details for a user; instead, there are separate APIs for retrieving those details. I do like the idea of a /whoami to be able to test that your cert details and/or OIDC configuration are right, but is this the place for more detailed information like groups/roles/...?

My two cents.

@erictune

Member

erictune commented Aug 19, 2016

Does #30829 satisfy this request? Just do any operation on the API (whether you have permission or not) with valid credentials and you get back your username in a header.

@mlbiam

This comment has been minimized.

mlbiam commented Aug 19, 2016

@erictune

Member

erictune commented Aug 19, 2016

I regret the decision to allow authenticators to provide group information.

On Fri, Aug 19, 2016 at 7:32 AM, Marc Boorshtein notifications@github.com wrote:

Not entirely. I'd also want to make sure the groups are being loaded properly, and #30829 only tells me the user. It's a start, but as @ericchiang pointed out, putting all the groups in a response header can get big quickly if you have a large environment.


@mlbiam

mlbiam commented Aug 19, 2016

I regret the decision to allow authenticators to provide group information.

Where else would it come from? k8s has no internal store for az rules, so the az information has to come from somewhere. I think it's a great model. It's one I've used on countless other applications over the years to great success. As an external tool builder, this also makes it much easier to create security tools for k8s. I don't have to implement a new connector to provision access; I can provide that access by storing the groups in either LDAP or a DB or pick-your-favorite-place.

@ericchiang

This comment has been minimized.

Member

ericchiang commented Aug 19, 2016

@deads2k

Contributor

deads2k commented Aug 19, 2016

@mlbiam #23720

While I wasn't against that issue, I do find having a clean break in user.Info after authentication quite nice. It makes reasoning about what's going to happen very easy.

Where else would it come from? k8s has no internal store for az rules, so the az information has to come from somewhere.

@mlbiam There is a reasonable claim that loading group membership can be expensive, particularly for large numbers of groups, and that an authorizer could optimize the lookup in some cases based on the likelihood of needing particular bits of information or on which bits of knowledge are locally available.

I don't think I'd personally want to try to do it (I think it makes it harder to reason about the overall system), but I'm not prepared to say that such a system is unreasonable.

@mlbiam

mlbiam commented Aug 19, 2016

Thanks @ericchiang for the context. In light of that, I think it's even more important to have something that reports "whoami" to the client, if only for debugging. If you're adding another layer to the process, then with that many moving parts you need to make sure that k8s is seeing your user the way you intended. You could potentially have an OIDC IdP for user information and an LDAP server for authorizations (as an example). That's a similar model to what we use in OpenUnison. It works great but can get complicated very easily, so having something like this would make the debugging process easier.

@deads2k

Contributor

deads2k commented Aug 19, 2016

@mlbiam determining group membership when authorization can actually change which groups are loaded becomes non-trivial. If that proposal is ever implemented, then there's no guarantee that the groups available in the user.Info are ever "complete".

We may not know the answer server-side to return to you.

@mlbiam

mlbiam commented Aug 19, 2016

@deads2k Thanks for the thoughts. I think both are a good idea. If an IdP can limit the groups it sends in the JWT, then use that. Otherwise I could see using an external az plugin.

As to whether it's "complete", understood, though I wouldn't say that's a reason not to have a "whoami". Different use cases would use this in different ways. I understand it can get very complicated on multiple fronts quickly, so the more tools to help debug, the better.

@spiffxp

Member

spiffxp commented Jun 23, 2017

/sig auth

@jimmycuadra

Member

jimmycuadra commented Jul 18, 2017

What is the current workaround for this? I'm trying to debug some auth issues for a developer on my team and I'm not sure how to get the user and groups as the server is seeing them.

@ericchiang

Member

ericchiang commented Jul 19, 2017

@jimmycuadra the RBAC debug logs or audit logs are the current way.
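
(As an illustration of the audit-log route: each audit event records the authenticated user and groups for the request. The entry below is abbreviated and made up; exact fields depend on the audit API version and policy in use.)

{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1beta1",
  "level": "Metadata",
  "verb": "list",
  "requestURI": "/api/v1/namespaces/default/pods",
  "user": {
    "username": "me",
    "groups": ["admins", "devs", "system:authenticated"]
  }
}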

@fejta-bot

fejta-bot commented Apr 3, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@fejta-bot

fejta-bot commented May 3, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@BenTheElder

Member

BenTheElder commented Jun 28, 2018

test-infra recently ran into this with the Prow setup guide, where we expect users to be creating a new cluster and needing to create RBAC rules (kubernetes/test-infra#8499). We're working around this by suggesting platform-specific answers in combination with "you can get your username from the error message" ...

Can we reconsider adding an actual kubectl whoami? It seems like a useful API to have, and in the meantime, as #30784 (comment) mentions, you can obtain the username from error output with other commands anyhow, which IMO presents the same information with a bad UX.
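
For reference, the error-message workaround looks roughly like this (the exact wording of the forbidden message varies by version, which is part of the problem); the quoted name after User is what ends up in the RBAC binding:

$ kubectl --token="$USER_TOKEN" get pods
Error from server (Forbidden): pods is forbidden: User "me" cannot list pods in the namespace "default"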

@BenTheElder

Member

BenTheElder commented Jun 28, 2018

/reopen

@BenTheElder

Member

BenTheElder commented Jun 28, 2018

/remove-lifecycle rotten

@munnerz

Member

munnerz commented Jun 29, 2018

Given that kubectl now has kubectl auth can-i, and given that we already return the username of a request in error messages, it seems odd that we don't already have a generic whoami returning at least the username.

Above, it seems the concerns were more focused on the can-i behaviour, but now that that is supported, whoami seems like a natural extension.
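
(For comparison, a minimal can-i session; it answers permission questions but never reports who the server thinks you are:)

$ kubectl auth can-i create deployments --namespace dev
yes
$ kubectl auth can-i list secrets --namespace kube-system
no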

Is there any substantive reason/potential attack that we are trying to mitigate by not adding this? If not, it'd be awesome to get an idea of the next steps required (i.e. a proposal for the new API endpoint) so we can get started on this!

If there are valid security concerns, then I think this issue should be addressed sooner rather than later, as developers are already beginning to depend on these error messages in order to detect usernames, and the longer a 'feature' exists, the harder it'll be to remove it 😄

@BenTheElder

Member

BenTheElder commented Jul 3, 2018

@mikedanese @liggitt any thoughts? I'd love to help push this forward again if there are no current objections.
As @munnerz says it seems like we should do this now that we have kubectl auth can-i

@liggitt

Member

liggitt commented Jul 3, 2018

it seems like we should do this now that we have kubectl auth can-i

I don't see the connection between the two... that API does not expose everything a whoami service would...

I'm on the fence about committing to exposing this info via an API. The human-readable forbidden message encountered as part of a denial is subject to change, and could be masked if deemed too sensitive a response. For cluster-admin debugging purposes, the audit log is preferable to a user-facing API.

@BenTheElder

Member

BenTheElder commented Jul 3, 2018

@liggitt Fair enough...

This is not my usual domain, admittedly, but could we place this API behind a policy then? I'd guess that for most deployments usernames are not very concerning compared to scanning stolen credentials with can-i, or even just trying to deploy something nasty without checking for access...

For cluster-admin debugging purposes, the audit log is preferable to a user-facing API.

I think a lot of Kubernetes users are "cluster admins" permission-wise but are not quite experienced enough to configure audit logging and go look through the audit logs, and obtaining the logs seems like it is always going to be less portable than kubectl auth whoami.

@SEJeff

Contributor

SEJeff commented Jul 3, 2018

I'm on the fence about committing to exposing this info via an API. The human-readable forbidden message encountered as part of a denial is subject to change, and could be masked if deemed too sensitive a response. For cluster-admin debugging purposes, the audit log is preferable to a user-facing API.

@liggitt Could said API not be wrapped with RBAC and a sensible default policy shipping upstream? This is a problem that most anyone who creates roles, bindings, or service accounts is going to deal with, and not all who create roles or bindings (or service accounts for their namespaces) have or want cluster-admin privileges. It generally isn't considered a security hole in 'nix for local users to use whoami or various incantations of getent passwd / getent group. In fact, it is required for any 'nix system to function normally. Having an API for something virtually everyone is going to want is certainly preferable to scraping error messages.

How is Kubernetes different from a security perspective compared to Unix when the user just wants to know their already-authenticated username/groups? Unauthenticated users would just get something akin to system:unauthenticated, or nothing at all, depending on cluster RBAC policy. Your apprehension about something that could be security-sensitive is good, but I'm not sure it is warranted for this specific issue.

What do you think?
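
As a rough sketch of what such a default policy could look like, assuming a hypothetical /whoami non-resource endpoint (nothing like this exists today, so the names and path are illustrative):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: whoami-reader
rules:
# hypothetical endpoint; illustrative only
- nonResourceURLs: ["/whoami"]
  verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: whoami-all-authenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: whoami-reader
subjects:
# grant to every authenticated user, per the "sensible default" idea above
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated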

@liggitt

Member

liggitt commented Jul 3, 2018

Could said API not be wrapped with RBAC and a sensible default policy shipping upstream?

Unless the default policy exposed it to all users, it’s unlikely to be broadly useful, right?

This is a problem most anyone who creates roles, bindings, or service accounts is going to deal with

Can you clarify what you mean by that? The circumstances in which someone 1) needs to grant themselves permissions, 2) has the ability to grant themselves permissions, and 3) doesn’t know their own username seem very narrow. I also am not seeing how this would help a service account creator.

@BenTheElder

Member

BenTheElder commented Jul 3, 2018

Unless the default policy exposed it to all users, it’s unlikely to be broadly useful, right?

Owners/admins that don't want to use audit logs can still use this, and other providers/admins can choose to restrict it. It's also easier to tell someone how to do, and it's portable across providers (that haven't restricted it), especially for less experienced users.
I think the average cluster admin is not concerned with credential <-> username obfuscation, but something like whoami has simple debug value. For my clusters at least, I'd prefer access monitoring / protecting the credentials themselves; the usernames can be obtained/guessed via other means.

Can you clarify what you mean by that? The circumstances in which someone 1) needs to grant themselves permissions, 2) has the ability to grant themselves permissions, and 3) doesn’t know their own username seem very narrow. I also am not seeing how this would help a service account creator.

Not @SEJeff, but: 1, 2, and 3 happen especially for anyone using a "push button" cluster without knowing the details of the identity integration. The upstream docs on identity/auth give plenty of detail for users familiar with authentication, but not much that's helpful for the Kubernetes "consumer"; for that, everyone has to point to identity-integration-specific docs, which is difficult for provider-agnostic tools.

For those users I could provide tooling to automate their setup including RBAC rules etc., but right now they'll have to find their username first on their own.

As multi-"machine" (=cluster) and multi-user as kubectl context support is, I was quite surprised kubectl whoami didn't already exist when I had a use for it. (Helping a user set up RBAC rules for deploying some services to their cluster without access to said cluster).

@yue9944882

Member

yue9944882 commented Jul 6, 2018

😃 Didn't realize this feature request was already raised years ago.

For those users I could provide tooling to automate their setup including RBAC rules etc., but right now they'll have to find their username first on their own.

This is the very pain point when I'm trying to set up RBAC rules for myself. Currently, information about a user's identity is not very sensitive if we can only get our own. Also, client identity is mostly stored in unsecured places, like the x509 public key. Worrying about security here might be unnecessary IMHO.

@redbaron

Contributor

redbaron commented Aug 2, 2018

Does it need to be a server endpoint? I'd be fine if kubectl auth whoami just extracted user information from the credentials it is configured to send requests with and then presented it on stdout, let's say in the same form as RoleBinding.subjects array elements.
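
For the client-certificate case, something along these lines is already possible client-side with standard tooling, since the x509 authenticator takes the username from the cert's CN and groups from its O entries (the "myuser" entry name and the subject output below are illustrative):

$ kubectl config view --raw \
    -o jsonpath='{.users[?(@.name=="myuser")].user.client-certificate-data}' \
    | base64 --decode | openssl x509 -noout -subject
subject= /O=admins/O=devs/CN=me

It would not help for token-based credentials, which is part of why a server-side endpoint keeps coming up.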

@yue9944882

Member

yue9944882 commented Aug 2, 2018

Does it need to be a server endpoint ?

@redbaron I'm afraid so, because we don't know how the apiserver extracts user info from the authn context. Even though all the information can currently be found in the "CN" and "O" fields of the x509 client cert, it doesn't mean we will always follow this track. Am I right?

xref #66033

@fejta-bot

fejta-bot commented Oct 31, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@redbaron

Contributor

redbaron commented Oct 31, 2018

/remove-lifecycle stale
