Proposal: Authentication and Authorization support for Tiller #1918

Closed
tamalsaha opened this Issue Feb 6, 2017 · 43 comments


tamalsaha (Contributor) commented Feb 6, 2017

Goal

For Tiller to execute all of its API operations "as" the user, so that (a) Tiller is not a "superuser" and (b) users' own IDs, etc. are attached to operations.

Architecture

The general idea is that the Helm CLI will forward the user's auth info from kubeconfig to the Tiller gRPC server via HTTP headers. The Tiller gRPC server will call the Kubernetes authN/authZ APIs to perform authentication and authorization in an interceptor before applying any mutation. The following diagram shows the flow:

[Diagram: tiller-auth-flow]
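The interceptor's first job is deciding which credential type the client forwarded. A minimal sketch of that dispatch (hypothetical helper name `classifyAuth`; gRPC metadata is modeled here as a plain map of lowercase keys to value slices, which is how it arrives on the wire):

```go
package main

import (
	"fmt"
	"strings"
)

// AuthMethod identifies which credential type the client forwarded.
type AuthMethod int

const (
	AuthNone AuthMethod = iota // no recognized credential: anonymous, which the proposal rejects
	AuthBasic
	AuthBearer
	AuthClientCert
)

// classifyAuth inspects the forwarded headers and picks the auth method,
// checking client certs first, then the standard authorization header.
func classifyAuth(md map[string][]string) AuthMethod {
	if v := md["k8s-client-certificate"]; len(v) > 0 && v[0] != "" {
		return AuthClientCert
	}
	if v := md["authorization"]; len(v) > 0 {
		switch {
		case strings.HasPrefix(v[0], "Basic "):
			return AuthBasic
		case strings.HasPrefix(v[0], "Bearer "):
			return AuthBearer
		}
	}
	return AuthNone
}

func main() {
	md := map[string][]string{"authorization": {"Bearer some-token"}}
	fmt.Println(classifyAuth(md) == AuthBearer) // prints "true"
}
```

In a real interceptor this map would come from `metadata.FromIncomingContext(ctx)` in grpc-go, and an `AuthNone` result would cause the RPC to be rejected.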

Kube Client Authentication

The TokenReview API call will be made using the default service account. This is required because, if RBAC authorization is enabled in the cluster, the user may not have permission to perform the TokenReview API call themselves.

All other Kube API calls (including the SubjectAccessReview API) will be made using the user's auth info.
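In real code the TokenReview call would go through client-go's `AuthenticationV1` client, but the wire-level shape of the call is easy to sketch with the standard library (the `v1beta1` API group version matches what was current at the time; `tokenReviewRequest` is a hypothetical helper):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// tokenReviewRequest builds the request Tiller would POST, using its own
// service-account credentials, to resolve a user's bearer token into a
// username and group list.
func tokenReviewRequest(apiServer, userToken string) (*http.Request, error) {
	body, err := json.Marshal(map[string]interface{}{
		"apiVersion": "authentication.k8s.io/v1beta1",
		"kind":       "TokenReview",
		"spec":       map[string]string{"token": userToken},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST",
		apiServer+"/apis/authentication.k8s.io/v1beta1/tokenreviews",
		bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, _ := tokenReviewRequest("https://kube.example.com", "user-bearer-token")
	fmt.Println(req.Method, req.URL.Path)
	// prints "POST /apis/authentication.k8s.io/v1beta1/tokenreviews"
}
```

The response's `status.user` field carries the resolved username, groups, and extra info that later feed into SubjectAccessReview.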

User info

  • Client Certs: If a client certificate is presented and verified, the common name of the subject is used as the user name for the request.
  • Basic Auth: Username found in the Authorization header.
  • Bearer Token: Tiller calls the TokenReview API with the bearer token found in the authorization header. In response, the Kube API server returns the username and the list of groups for the user associated with the token.

Anonymous requests

From Kubernetes docs, "When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of system:anonymous and a group of system:unauthenticated." I propose that Tiller does not support anonymous requests.

Choice of Header Keys

The following keys (case-sensitive) will be used in golang context to forward auth info from kubeconfig to gRPC server:

  • authorization: This is the standard authorization header used to pass username:password or a bearer token.
  • k8s-server: This is the server field in kubeconfig.
  • k8s-certificate-authority: This is the CA certificate used by kubernetes api server.
  • k8s-client-certificate: This is the user’s client certificate.
  • k8s-client-key: This is the user’s client key.

The HTTP specification itself imposes no limit on header length, though individual servers often configure one. Forwarding the above fields should still be workable.
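One practical detail the list above implies: the certificate and key fields are multi-line PEM blobs, which cannot travel verbatim in HTTP/2 header values. A client-side sketch (hypothetical helper `authMetadata`; the base64 encoding is my assumption, since the proposal does not specify a transport encoding) could look like:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// authMetadata assembles the header map the Helm CLI would attach to each
// gRPC call. PEM blobs are base64-encoded so the values are plain ASCII,
// as required for ordinary (non "-bin") header values.
func authMetadata(authorization, server string, ca, clientCert, clientKey []byte) map[string]string {
	return map[string]string{
		"authorization":             authorization,
		"k8s-server":                server,
		"k8s-certificate-authority": base64.StdEncoding.EncodeToString(ca),
		"k8s-client-certificate":    base64.StdEncoding.EncodeToString(clientCert),
		"k8s-client-key":            base64.StdEncoding.EncodeToString(clientKey),
	}
}

func main() {
	md := authMetadata("Bearer tok", "https://10.0.0.1", []byte("CA"), nil, nil)
	fmt.Println(md["k8s-server"]) // prints "https://10.0.0.1"
}
```

With grpc-go, this map would be attached via `metadata.New(...)` and `metadata.NewOutgoingContext(...)` before issuing the call.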

Discovering Tiller Server

We would like to support running Tiller in Standalone mode or Hosted mode. In Standalone mode, Tiller manages the cluster it is running on. This is the current setup. In Hosted mode, Tiller will be run outside the cluster that it manages. To support auto discovery of Tiller server address, we propose the following change:

If tillerHost is empty, first search for a service with labels {"app": "helm", "name": "tiller"} in the default Tiller namespace (kube-system). If found, use the externalName in that service to connect to an externally hosted Tiller server. If not found, fall back to the existing port-forwarding mechanism to connect to the in-cluster Tiller.

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service",
        "namespace": "prod"
    },
    "spec": {
        "type": "ExternalName",
        "externalName": "my.tiller.example.com"
    }
}
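The lookup order above can be sketched as a small function. The `lookup` parameter is a stand-in for a real client-go `Services(namespace).List(...)` call with the label selector `app=helm,name=tiller`; 44134 is the port Tiller's gRPC server listens on:

```go
package main

import "fmt"

// tillerService is the minimal slice of a Kubernetes Service object
// the discovery logic cares about.
type tillerService struct {
	Type         string
	ExternalName string
}

// tillerAddress implements the proposed discovery order: a matching
// ExternalName service in kube-system wins (Hosted mode); otherwise we
// fall back to the existing kubectl port-forward tunnel (Standalone mode).
func tillerAddress(lookup func(namespace string) (*tillerService, bool)) string {
	if svc, ok := lookup("kube-system"); ok && svc.Type == "ExternalName" {
		return svc.ExternalName + ":44134" // hosted Tiller, reached directly
	}
	return "localhost:44134" // local end of the port-forward tunnel
}

func main() {
	hosted := func(ns string) (*tillerService, bool) {
		return &tillerService{Type: "ExternalName", ExternalName: "my.tiller.example.com"}, true
	}
	fmt.Println(tillerAddress(hosted)) // prints "my.tiller.example.com:44134"
}
```

This keeps the decision entirely client-side, so no new flags or config files are needed when tillerHost is unset.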

Misc Considerations

JSON API

If users are interested in accessing the Tiller API from a web page, they will have to run gRPC Gateway to provide a JSON API.

Release Authorization

An additional layer of authorization may be added to the Tiller gRPC server using a Third Party Resource called Release. Cluster admins can configure RBAC authorization for the Release resource. The Tiller API server will perform an extra SubjectAccessReview call to check whether the user has permission to release a chart:

{
  "apiVersion": "authorization.k8s.io/v1beta1",
  "kind": "SubjectAccessReview",
  "spec": {
    "resourceAttributes": {
      "namespace": "kittensandponies",
      "verb": "POST",
      "group": "helm.sh",
      "resource": "release"
    },
    "user": "jane",
    "group": [
      "group1",
      "group2"
    ]
  }
}

This can be added later, once the initial proposal above is implemented.

Storing Release Data

  • Standalone mode: Currently Tiller stores data in configmaps. I think we should store release data in a TPR to avoid confusion. The Release object used for authorization can also be used for storing release data. This is also in line with the discussions on API extension.

  • Hosted mode: A Storage Driver can be implemented to store data in conventional databases like Postgres. It should be straightforward to implement this using the existing Driver interface.

Both of these options can be implemented after the initial auth flow is implemented.
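To make the Hosted-mode point concrete, here is a hypothetical subset of a storage driver interface (the real one lives in Tiller's pkg/storage/driver package and has more methods; names and signatures here are illustrative only), together with the smallest possible conforming implementation. A Postgres driver would implement the same methods against SQL tables:

```go
package main

import (
	"errors"
	"fmt"
)

// Driver is a hypothetical subset of Tiller's storage driver interface.
type Driver interface {
	Name() string
	Create(key, release string) error
	Get(key string) (string, error)
	Delete(key string) (string, error)
}

var ErrReleaseNotFound = errors.New("release: not found")

// memoryDriver shows how small a conforming driver can be.
type memoryDriver struct{ cache map[string]string }

func newMemoryDriver() *memoryDriver { return &memoryDriver{cache: map[string]string{}} }

func (d *memoryDriver) Name() string { return "Memory" }

func (d *memoryDriver) Create(key, release string) error {
	d.cache[key] = release
	return nil
}

func (d *memoryDriver) Get(key string) (string, error) {
	r, ok := d.cache[key]
	if !ok {
		return "", ErrReleaseNotFound
	}
	return r, nil
}

func (d *memoryDriver) Delete(key string) (string, error) {
	r, err := d.Get(key)
	if err != nil {
		return "", err
	}
	delete(d.cache, key)
	return r, nil
}

func main() {
	var d Driver = newMemoryDriver()
	d.Create("myrelease.v1", "serialized-release-data")
	r, _ := d.Get("myrelease.v1")
	fmt.Println(r) // prints "serialized-release-data"
}
```

Because Tiller already talks to storage only through such an interface, swapping configmaps for a TPR or a database is a matter of adding one more implementation.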

Third Party Controller

A TillerC TPR controller was designed, prototyped, and ultimately abandoned because the Kubernetes authZ process can't be extended to fit Tiller's authorization requirements.

tamalsaha changed the title from "Proposal: Authenticator and Authorization support for Tiller" to "Proposal: Authentication and Authorization support for Tiller" Feb 6, 2017

technosophos (Member) commented Feb 8, 2017

I very much like this proposal.

I would imagine doing this as four separate PRs:

  • Adding the authentication mechanisms
  • Adding the SubjectReview
  • "Discovering Tiller Server" (could be done any time)
  • Adding the TPR for release (possibly delayed a bit)

Okay, a few other comments:

  • For the "Discovering Tiller Server" feature, we could also look into storing config data in a flat file in $HELM_HOME, if that closer hits the intended use case.
  • I totally agree on disallowing anonymous requests
  • I would prefer delaying the Release TPR until TPR comes out of Beta (or at least until TPRs can actually be updated). The system keeps changing, and that would add a big backwards compatibility burden.
  • What are the backward compatibility issues that we might hit by introducing authentication?
tamalsaha (Contributor, Author) commented Feb 9, 2017

For the "Discovering Tiller Server" feature, we could also look into storing config data in a flat file in $HELM_HOME, if that closer hits the intended use case.

I think supporting it via a service in Kube will be better, because users won't need to worry about one more extra config file.

I would prefer delaying the Release TPR until TPR comes out of Beta (or at least until TPRs can actually be updated). The system keeps changing, and that would add a big backwards compatibility burden.

I think there is a "security" benefit to using a TPR. As things work today, it is possible for someone to overwrite configmaps and then roll back to deploy a bad version. With a TPR, cluster admins can restrict regular users from writing the Tiller TPR but still allow them to edit configmaps. But this can be done as step 2.

What are the backward compatibility issues that we might hit by introducing authentication?
I don't see how to pass the modified ctx to the StreamHandler, which is used for the ListReleases API.

One thing that has not been discussed so far is how to store the user info. I was thinking about storing it as a label in the configmap and then displaying it. This will also require adding a username field to the proto.

technosophos (Member) commented Feb 10, 2017

Just want to record that I have been getting a lot of supportive feedback in Slack and in other places about this particular feature as defined here. This is something a wide variety of users are asking for in the 2.3.0 release.

technosophos added this to the 2.3.0 milestone Feb 14, 2017

technosophos (Member) commented Feb 14, 2017

I've added this to the 2.3 milestone. @tamalsaha I am assuming this is a feature you are planning on leading.

smarterclayton commented Feb 15, 2017

Are you expecting tiller to have a different set of client certificates than the kubernetes master? Or the same?

In this case tiller is acting as an authenticating proxy - would be good to make sure we have a checklist for the sorts of things that are "safe" and are audited. Such as never storing the token, never writing it to logs, etc. Some of the bearer tokens will be very powerful (cluster-admin or highly privileged users).

smarterclayton commented Feb 15, 2017

If you have the user's token, why do you need to do SubjectAccessReview for mutations? The Kube APIs will already do that.

@liggitt re: use of impersonation or not.

tamalsaha (Contributor, Author) commented Feb 15, 2017

Are you expecting tiller to have a different set of client certificates than the kubernetes master? Or the same?

I am expecting them to be the same.

In this case tiller is acting as an authenticating proxy - would be good to make sure we have a checklist for the sorts of things that are "safe" and are audited. Such as never storing the token, never writing it to logs, etc. Some of the bearer tokens will be very powerful (cluster-admin or highly privileged users).

Sounds great!

If you have the user's token, why do you need to do SubjectAccessReview for mutations? The Kube APIs will already do that.

Calling SubjectAccessReview before doing any mutation will avoid failing in the middle of a release. Also, if TPRs are used to store Release data (instead of configmaps), I think SubjectAccessReview will be needed to enforce authZ.

technosophos (Member) commented Feb 16, 2017

Assigned this to @tamalsaha as the feature owner.

liggitt commented Feb 17, 2017

Calling SubjectAccessReview before doing any mutation will avoid failing in the middle of a release

Not necessarily, there are plenty of other reasons requests could fail. You need to handle partial success already.

tamalsaha (Contributor, Author) commented Feb 17, 2017

Not necessarily, there are plenty of other reasons requests could fail. You need to handle partial success already.

Yes. Even the diff computation can fail. Is there any technical reason to avoid SubjectAccessReview calls if possible (say, too much traffic for the apiserver, etc.)?

liggitt commented Feb 19, 2017

Basic Auth: Username found in the Authorization header.

can't assume the login name is identical to the authenticated username. some auth systems allow multiple logins (bob, BOB, bob@domain) to map to a single username. Also, to impersonate, you need to obtain and impersonate all the user info: username, group membership, and "extra" data (could hold scope info, other info used by the authorizer, etc). I'm not sure how you plan to obtain that info from basic auth credentials.

k8s-client-key: This is the user’s client key.

is this proposing sending the content of the private key to the server so the server can replay as the user?

One thing, that is not discussed so far is how to store the user info. I was thinking about storing that as a Label in the configmap and them display it.

Is this referring to storing the resolved user information (username, groups, extra data, etc) or the user credentials themselves?

User info (especially if used for replay) should only be stored in a way that guarantees its integrity. Some ways to do that are:

  1. not allow API write access to the fields storing the info, but make the server responsible for stomping the data based on the authenticated user (not possible to change the configmap REST handler to do that)
  2. include additional information that can be used to verify the integrity of the data (e.g. a peer HMAC in the configmap that can be verified using privately held keys)

I think user info stored in mutable API fields is likely to be used without proper verification, allowing someone to modify it and perform escalation attacks.

This will also require adding username field to proto.

All aspects of the user info - username, groups, and "extra" fields (see the SubjectAccessReview API)
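Option 2 above (a keyed integrity check over the stored user info) can be sketched with the standard library. This is only an illustration of the HMAC idea, not an endorsed design; the key would be held privately by Tiller and never stored alongside the data:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// signUserInfo computes an HMAC-SHA256 over the stored username so that a
// later reader holding the private key can detect tampering with the
// configmap label.
func signUserInfo(key []byte, username string) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(username))
	return hex.EncodeToString(mac.Sum(nil))
}

// verifyUserInfo recomputes the MAC and compares in constant time.
func verifyUserInfo(key []byte, username, sig string) bool {
	expected, err := hex.DecodeString(sig)
	if err != nil {
		return false
	}
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(username))
	return hmac.Equal(mac.Sum(nil), expected)
}

func main() {
	key := []byte("tiller-private-key")
	sig := signUserInfo(key, "jane")
	fmt.Println(verifyUserInfo(key, "jane", sig)) // prints "true"
}
```

Note this only detects modification; as discussed below, it cannot prevent someone with write access to the underlying object from changing the data.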

tamalsaha (Contributor, Author) commented Feb 19, 2017

Great feedback, @liggitt ! The wip pr is here: https://github.com/kubernetes/helm/pull/1932/files#diff-8f4541a8ec0fb2b3143ca168f7083f17R171

Basic Auth: Username found in the Authorization header.
can't assume the login name is identical to the authenticated username. some auth systems allow multiple logins (bob, BOB, bob@domain) to map to a single username. Also, to impersonate, you need to obtain and impersonate all the user info: username, group membership, and "extra" data (could hold scope info, other info used by the authorizer, etc). I'm not sure how you plan to obtain that info from basic auth credentials.

This is new information to me. How does Kube api server handle it?

k8s-client-key: This is the user’s client key.
is this proposing sending the content of the private key to the server so the server can replay as the user?

Yes. Is there a better way when users are using client cert auth?

One thing, that is not discussed so far is how to store the user info. I was thinking about storing that as a Label in the configmap and them display it.
Is this referring to storing the resolved user information (username, groups, extra data, etc) or the user credentials themselves?

User info (especially if used for replay) should only be stored in a way that guarantees its integrity. Some ways to do that are:

not allow API write access to the fields storing the info, but make the server responsible for stomping the data based on the authenticated user (not possible to change the configmap REST handler to do that)
include additional information that can be used to verify the integrity of the data (e.g. a peer HMAC in the configmap that can be verified using privately held keys)
I think user info stored in mutable API fields is likely to be used without proper verification, allowing someone to modify it and perform escalation attacks.

The user info is not used for replay. It is just information stored to later display who performed a release. But data integrity is still important. We can use an HMAC, etc. to verify that the username has not changed. But what do we do if someone changes it? I don't see how we can stop that from happening. Currently everything is stored inside configmaps. We have considered storing it in a Helm-specific TPR. But either way, someone can change the underlying data by calling the Kube API server (assuming they have authorization).

This will also require adding username field to proto.
All aspects of the user info - username, groups, and "extra" fields (see the SubjectAccessReview API)

Yes. We are keeping everything related to UserInfo, so that it can be sent to SubjectAccessReview.

liggitt commented Feb 19, 2017

I'm not sure how you plan to obtain that info from basic auth credentials.

This is new information to me. How does Kube api server handle it?

The ways the kube api server accepts basic auth info:

  • the --basic-auth-file (where the presented login name is the username), and the file contains the user's groups
  • via an auth proxy, where the proxy resolves the user info using the presented credentials and passes username/groups/extra fields to kube via headers

Tiller would need to resolve user info from basic auth credentials using the same mechanisms, I suppose.

is this proposing sending the content of the private key to the server so the server can replay as the user?

Yes. Is there a better way when users are using client cert auth?

Request and verify a presented client certificate using the same CA as the API server and extract username/group info from the presented cert. It is not proper to send private key content to the server in a header.

The user info is not used for replay.

Then I'm missing where the user credentials are stored for later replay.

tamalsaha (Contributor, Author) commented Feb 19, 2017

I'm not sure how you plan to obtain that info from basic auth credentials.

This is new information to me. How does Kube api server handle it?

The ways the kube api server accepts basic auth info:
the --basic-auth-file (where the presented login name is the username), and the file contains the user's groups
via an auth proxy, where the proxy resolves the user info using the presented credentials and passes username/groups/extra fields to kube via headers
Tiller would need to resolve user info from basic auth credentials using the same mechanisms, I suppose.

I see. I was hoping that we could avoid reimplementing the full auth process of the Kube API server. Is there any easier way to achieve this, like using the Kube API server's auth process as a library so that we don't have to re-implement it?

is this proposing sending the content of the private key to the server so the server can replay as the user?

Yes. Is there a better way when users are using client cert auth?

Request and verify a presented client certificate using the same CA as the API server and extract username/group info from the presented cert. It is not proper to send private key content to the server in a header.

I agree with you. But without sending the client private key, how can we create an authenticated Kube client using the user's credentials in Tiller?

The user info is not used for replay.

Then I'm missing where the user credentials are stored for later replay.

So, the user info is used for the duration of a single API call to Tiller. One API call to Tiller will result in multiple calls to the Kube API using the user's credentials. At the end of a call, the username is stored in configmaps for record-keeping purposes. This username is not reused for replay in any later API calls to Tiller.

tamalsaha (Contributor, Author) commented Feb 20, 2017

I had another round of conversation with @liggitt . You can find the transcript here: https://docs.google.com/document/d/19Bxu1TMEOErP81E4kUt6zIoms0_3_O5lvPhEbzxYdQ8/edit?usp=sharing

Clarifications:

Token:
Tiller calls TokenReview as super user (auto-mounted service account) with the user's token. That returns the username, groups, and extra info. Then Tiller uses the user's token for SelfSubjectAccessReview and subsequent release operations.

Client Cert:
Tiller is configured with the same CA the Kube API server uses. Helm uses the client cert to authenticate (this does not require sending the client private key). Tiller then knows the user's username, groups, and extra info. Tiller creates a new kube client using super user credentials (default mounted service account) and calls SubjectAccessReview to check authz. If all mutations are authorized, Tiller creates another kube client using the super user credentials but impersonating the username, and performs the actual release operations using the impersonated kube client.

Basic Auth:
In this mode, the user sends you basic auth. Tiller makes SelfSubjectAccessReview and API calls using that credential. Tiller can determine authorization and perform the actions, but it can't know the username, because the username in the basic auth might not be the actual username. For all Tiller knows, basic auth isn't even enabled and the actions Tiller was performing are allowed for system:anonymous.

Auth Proxy
The authentication proxy option is not used by the Helm CLI. It requires setting flags on the Kube API server, which would break the existing UX for Tiller deployment. But according to @liggitt, for composability, letting Tiller also receive user info via an auth proxy would be ideal. (That's part of the example server at https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/server/options/recommended.go#L38-L47.) This lets Tiller play nicely in a world where other servers delegate to it. User info comes in via headers, verified by client cert. Handling would be identical to the client cert case; the user info just comes from a different source.
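For the client-cert flow above, "impersonating the username" concretely means the kube client setting the standard Kubernetes impersonation headers on every request (client-go does this when `rest.Config.Impersonate` is populated). A sketch of the headers that end up on the wire (hypothetical helper; note that Go's `http.Header` canonicalizes key casing, which Kubernetes accepts since header names are case-insensitive):

```go
package main

import (
	"fmt"
	"net/http"
)

// impersonationHeaders builds the headers a superuser client sends so the
// API server evaluates the request as the named user: Impersonate-User,
// one Impersonate-Group per group, and Impersonate-Extra-<key> per extra.
func impersonationHeaders(username string, groups []string, extra map[string][]string) http.Header {
	h := http.Header{}
	h.Set("Impersonate-User", username)
	for _, g := range groups {
		h.Add("Impersonate-Group", g)
	}
	for k, vs := range extra {
		for _, v := range vs {
			h.Add("Impersonate-Extra-"+k, v)
		}
	}
	return h
}

func main() {
	h := impersonationHeaders("jane", []string{"group1", "group2"}, nil)
	fmt.Println(h.Get("Impersonate-User")) // prints "jane"
}
```

RBAC must separately grant Tiller's service account the `impersonate` verb on users and groups for these headers to be honored.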

technosophos (Member) commented Feb 23, 2017

This issue seems to have diverged substantially from the original proposal. I think we'll need to re-assess whether this is feasible in the 2.3.0 timeframe.

Okay... I'll clear up a few details:

  1. Helm 2.3.0 is not considered backwards compatible with Kubernetes 1.4. It will be considered compatible with Kubernetes 1.5
  2. Any implemented solution still must provide users with the option to run Tiller with the authentication mode it uses today (basically inheriting Tiller's credentials).
  3. Tiller contains the kubectl client auth library already, so it is unlikely that things will need to be re-implemented from scratch unless the Go APIs are not exported.

I lost the thread on this one part of the argument. Maybe you can clarify it. We had initially discussed proxying credentials (regardless of type) through Tiller. So Tiller would merely accept the credentials from the client per request, and add those to the Kubernetes API client (e.g. via the existing kubectl library). No part of these credentials would be written to storage. Only a user name field (if present) would be allowed to be written to logs or stored in the release record, and this is merely for bookkeeping (never replays or verifications).

Now it seems that the proposal would include implementing a built-in authentication system inside of Tiller that would take responsibility for validating credentials. Why?

I would rather punt all verification (including SubjectAccessReview) to the Kube API server rather than turn Tiller into an authentication/authorization verification service.

So I think somewhere along the line I lost the train of the argument.

tamalsaha (Contributor, Author) commented Feb 24, 2017

This issue seems to have diverged substantially from the original proposal. I think we'll need to re-assess whether this is feasible in the 2.3.0 timeframe.

I have put what I understood from my discussions with @liggitt in the above comment. We are still working based on the original proposal. I think we might come up with some follow up tasks based on the above comment for future versions of Kubernetes. So, I think auth should be possible in time for 2.3 release.

Okay... I'll clear up a few details:

Helm 2.3.0 is not considered backwards compatible with Kubernetes 1.4. It will be considered compatible with Kubernetes 1.5

Great! Then there are no Kube api availability related issues as far as I see.

Any implemented solution still must provide users with the option to run Tiller with the authentication mode it uses today (basically inheriting Tiller's credentials).

Agree.

Tiller contains the kubectl client auth library already, so it is unlikely that things will need to be re-implemented from scratch unless the Go APIs are not exported.

I lost the thread on this one part of the argument. Maybe you can clarify it. We had initially discussed proxying credentials (regardless of type) through Tiller. So Tiller would merely accept the credentials from the client per request, and add those to the Kubernetes API client (e.g. via the existing kubectl library). No part of these credentials would be written to storage. Only a user name field (if present) would be allowed to be written to logs or stored in the release record, and this is merely for bookkeeping (never replays or verifications).

Now it seems that the proposal would include implementing a built-in authentication system inside of Tiller that would take responsibility for validating credentials. Why?

Basic and Bearer auth stays mostly as-is in the original proposal (both authN / authZ parts). The new questions are in the following scenarios:

  • Client Cert Auth: This is probably the biggest question, based on my conversation with @liggitt. @liggitt's strong recommendation is to not forward client secret keys to Tiller, but rather to give Tiller the CA cert used to issue the client cert and validate via TLS protocol checks. In that scenario, the deployment process will be something like this:

    • Helm reads the CA cert from kubeconfig, creates a secret/configmap with that ca cert and mounts that inside Tiller.
    • Helm can't use the port forwarding mechanism to connect to Tiller as it does today (I am not really sure). Tiller is exposed to the world directly using a Kube LoadBalancer-type service. We might also consider running Tiller as a hostNetwork: true pod, so that it can be accessed on a cluster that does not support LoadBalancer-type services.
    • Local development and CI tests will get complicated, since you have to configure TLS certs locally.

But we are also forwarding tokens and passwords in the bearer/basic auth modes. Granted that client certs are difficult to change, how much of an issue is this? This is where I think the Helm team needs to give their feedback.

  • CRL support: Depending on how client cert auth is implemented, Tiller will probably have to handle certificate revocation checks. This is a TODO for a future release.

  • Auth proxy: If Tiller wants to be a composable UAS, it needs to support the auth proxy option. Given that UAS is still not out, I think this is mostly a TODO that the Helm team can decide on at a later time.

I would rather punt all verification (including SubjectAccessReview) to the Kube API server rather than turn Tiller into an authentication/authorization verification service.

So I think somewhere along the line I lost the train of the argument.

All authZ is still handled by the Kube API server. Tiller calls Kube with the authenticated client to perform these checks.

technosophos (Member) commented Feb 27, 2017

How come proxying the SSL cert is frowned upon? Keep in mind that we already have SSL/TLS security between the Helm CLI and the Kube API, and we can also enable HTTP/2 (with TLS) on the gRPC connection.

liggitt commented Feb 27, 2017

How come proxying the SSL cert is frowned upon?

Requiring uploading a private key is not proxying, and it compromises the confidentiality of the credential in a way that runs counter to what most PKIs would expect.

We should allow each credential to be used against tiller in its natural form... for bearer tokens and basic auth, that allows tiller to replay that credential. For client certificates, it does not.

technosophos (Member) commented Feb 27, 2017

Yeah, I suppose I agree that this really is not the intended use case for a PKI architecture. But the proposed solution seems unduly complex, and still doesn't quite solve the problem.

The confusion here is that the client can authenticate to Tiller with an SSL certificate (via gRPC, though this appears to be disabled at the moment; I'll check on it). We don't need a second mechanism to achieve that. What we are after is establishing that Tiller is correctly operating against the Kubernetes API with the "identity" (or at least the same authz rules) of the remote user.

Ideally, the client who is configured with a client SSL cert could negotiate what amounts to a bearer token with the kube api server, and pass that token on to Tiller. Is that possible already, @liggitt ?

Barring that, my inclination is to not support client-side SSL certificates in version 2.3.0.

liggitt commented Feb 27, 2017

Ideally, the client who is configured with a client SSL cert could negotiate what amounts to a bearer token with the kube api server, and pass that token on to Tiller. Is that possible already, @liggitt ?

No, kube has declined to act as a token issuer for tokens representing end users.

seh (Collaborator) commented Aug 10, 2017

@technosophos, I can't tell whether any of the options considered involved using impersonation. We've discussed why that approach is challenging, and it's helpful to spell out why it's not an ideal solution to pursue.

Suppose that Tiller could just reuse all of the authentication machinery present in the Kubernetes API server. If it could, it could authenticate the client as a given user (such as the person invoking the helm client), and turn around and impersonate that user—together with his group membership and "extra" authentication details—as a Tiller-specific service account. That service account would need to be authorized to impersonate that user. Under such impersonation, Tiller could do no more than the impersonated user would be able to do.

The downsides to this approach are as follows:

  • An administrator would either have to configure Tiller's authentication subsystem just like the related cluster's API servers, or could deliberately choose to configure it differently, but would still need to worry about the integrity of two authentication configurations.
  • Tiller would take on the same authentication burden as the API server, including calling out to servers for Webhook token reviews and fetching fresh OIDC key sets.
  • Administrators would need to configure impersonation permissions for Tiller, which could fall out of step with the set of users, groups, and namespaces. In other words, it's another maintenance burden, even if it does give more precise control over who can use Tiller.
  • It doesn't address the problem of one user being able to delete another user's installed chart, even if those two users have no intersection in their mutative permissions within the cluster. Tiller still has to reckon with the difference between the right to do something to an installed chart release (e.g. delete a release) and the right to touch the Kubernetes objects "contained" in that release (e.g. delete a ConfigMap created by that release).

I apologize if this is restating one of the options above, just using different terms.

fejta-bot commented Jan 7, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

mumoshu (Contributor) commented Jan 10, 2018

@seh @technosophos Hi! Although it ended up as a non-ideal solution, please let me 👍 the one involving impersonation, and highlight this part of the discussion:

Tiller would take on the same authentication burden as the API server, including calling out to servers for Webhook token reviews and fetching fresh OIDC key sets.

I'm not really clear about that, but isn't it the apiserver, not tiller, that calls out to a webhook token authn server for a token review?

If that's the case, I suppose what Tiller would do, given a user's bearer token, is only to populate ImpersonateConfig, which is then passed to kubernetes/client-go for impersonation?

Anyway, webhook token review support would be beneficial for me.

The context is that I'm relying on heptio/authenticator, a webhook token authentication server that validates a user-provided bearer token with AWS IAM, so that any IAM user with an appropriate IAM role can authenticate against the K8s API.

If Tiller supported impersonation with a user-provided bearer token (perhaps via something like helm apply -f ... --token $mybearertoken?) while also supporting webhook token authn, we could be completely freed from managing PKI to secure helm<->tiller communication. That would be a huge win from a cluster admin's perspective.

Thanks!

Edit 1: Sorry, I had misunderstood how impersonation in k8s works. For impersonation, we would have to allow Tiller to impersonate any user/group/etc. via RBAC. That would be too much permission, plus it doesn't involve the user's bearer token at all :) "Impersonation" in this case should instead be done by Tiller simply replaying a user-provided bearer token, rather than using k8s's impersonation mechanism.

Edit 2: RBAC does seem to allow restricting who can impersonate whom.
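For reference, Kubernetes RBAC expresses this via the impersonate verb on the users/groups resources; a hypothetical ClusterRole scoping impersonation might look like:

```yaml
# Illustrative ClusterRole: lets its subjects (e.g. tiller's service
# account) impersonate only the user "jane" and the group "dev".
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: limited-impersonator
rules:
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
  resourceNames: ["jane"]
- apiGroups: [""]
  resources: ["groups"]
  verbs: ["impersonate"]
  resourceNames: ["dev"]
```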

@whereisaaron

whereisaaron commented Jan 11, 2018

OT: Interesting, @mumoshu! There are quite a few moving parts to configuring heptio/authenticator; are you able to deploy it with just a kube-aws cluster.yaml? (If so, I'd love to see an example!) Or do you need extra steps?

@mumoshu

mumoshu commented Jan 15, 2018

Hi @whereisaaron! Yes, it is definitely possible. I'll share it once it's ready 👍

@mumoshu

mumoshu commented Jan 15, 2018

For anyone interested, I'm working on a POC that delegates authn/authz (with k8s RBAC, via tokens) to kube-rbac-proxy.

How it works

  • The modified version of helm assumes you rely on a token saved in your kubeconfig to authenticate against the k8s API. helm with --experimental-rbac-proxy then reads the token from kubeconfig and passes it along with the tiller gRPC call (inside the metadata, i.e. the HTTP/2 request headers).

  • kube-rbac-proxy calls the SubjectAccessReview K8s API: authn with the token, authz against the "resource attributes" specified to kube-rbac-proxy.

  • Once authn/authz passes, kube-rbac-proxy modifies the original HTTP/2 request to include x-forwarded-user and x-forwarded-groups headers for tiller to use.

  • tiller with --experimental-rbac-proxy then trusts the forwarded headers and allows access to the requested tiller API.

Note that, for now, you MUST protect your tiller container from anything other than access via the kube-rbac-proxy sidecar: kubectl exec into the tiller pod should be prohibited, kubectl port-forward to the tiller gRPC port should be prohibited, etc.

If I manage to add TLS support to the kube-rbac-proxy mode, perhaps all of the above operations except kubectl exec could be permitted?

I'm not really an expert in this area but am willing to learn. Please let me know anything you have in mind about this!

Thanks.

@mumoshu

mumoshu commented Jan 16, 2018

I've put some time into investigating the possibility of supporting impersonation for tiller via the x-forwarded-user and/or x-forwarded-groups headers.

I wanted tiller --experimental-impersonate-user --experimental-rbac-proxy to pass the value of x-forwarded-user to helm's local release module, so that the release module could configure the k8s API client with an ImpersonateConfig that includes the passed username.

This results in the following changes to the current code base. Would these be acceptable changes to helm/tiller?

  • Add the flag to Tiller for enabling the impersonation feature.
  • Add fields for impersonation (user, groups) to Release.
    • ReleaseServer receives the parameters for a specific release via Release.
    • Unfortunately, Release doesn't allow arbitrary key-value pairs that could carry the values of x-forwarded-(user|groups), so we need to add the fields to it.
  • Improve ReleaseServer to use a one-time k8s client-go instance with a specific impersonation config.
    • ReleaseServer as of today has a single env containing a k8s API client that is used everywhere in tiller.
    • We need a different, separate k8s API client with an appropriate ImpersonateConfig per authenticated user requesting an operation on a release.

TBD:

  • How should we support RemoteReleaseModules?
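The "separate client per authenticated user" idea could be sketched as below. The types are minimal local stand-ins mirroring client-go's rest.Config and rest.ImpersonationConfig field names; this is illustrative, not tiller code:

```go
package main

import "fmt"

// Stand-ins for client-go's rest.ImpersonationConfig and rest.Config;
// only the fields relevant to the sketch are included.
type ImpersonationConfig struct {
	UserName string
	Groups   []string
}

type Config struct {
	Host        string
	Impersonate ImpersonationConfig
}

// configFor derives a per-request config from tiller's shared base config,
// carrying the identity taken from x-forwarded-user / x-forwarded-groups.
// The base config (and the single shared client built from it) is untouched.
func configFor(base Config, user string, groups []string) Config {
	cfg := base // copy; the slice below is freshly allocated
	cfg.Impersonate = ImpersonationConfig{
		UserName: user,
		Groups:   append([]string(nil), groups...),
	}
	return cfg
}

func main() {
	base := Config{Host: "https://kubernetes.default.svc"}
	cfg := configFor(base, "jane", []string{"dev"})
	fmt.Println(cfg.Impersonate.UserName, base.Impersonate.UserName == "") // jane true
}
```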
@fejta-bot

fejta-bot commented Feb 15, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@whereisaaron

whereisaaron commented Feb 17, 2018

/remove-lifecycle stale

This is still an important feature that should be considered for 3.x if an implementation can be identified. It would be good to get feedback on @mumoshu's excellent work on this.

@whereisaaron

whereisaaron commented Feb 17, 2018

/remove-lifecycle rotten

@docwhat

docwhat commented Apr 4, 2018

Is there something I could do to make this happen?

@bacongobbler

bacongobbler commented Apr 4, 2018

@docwhat there has been discussion in the community around this topic, and the general consensus is to close this in favour of the Helm 3 proposal, specifically its Security Considerations. We plan on moving Helm towards a client-only model, which means that a helm install acts on behalf of the user who invoked helm, rather than having to impersonate the user when tiller deploys the chart. This model is identical to kubectl's current security model, so it makes for a better fit overall in the ecosystem. No extra work to provide user impersonation would be required.

Given that the general consensus is to move forward with the proposal in https://github.com/kubernetes-helm/community, I feel it is safe to close this one out as superseded by the Helm 3 proposal.
