
Conversation

@mikedanese
Member

@kubernetes/sig-auth-api-reviews @kubernetes/sig-auth-feature-requests

@k8s-ci-robot k8s-ci-robot added sig/auth Categorizes an issue or PR as relevant to SIG Auth. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jan 13, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: mikedanese
We suggest the following additional approver: ericchiang

Assign the PR to them by writing /assign @ericchiang in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@mikedanese mikedanese force-pushed the validate-annotations branch 3 times, most recently from 3508b04 to 97ac230 Compare January 13, 2018 23:18
This constraint allows a security auditor to audit identity access with only the
delegation policy. Identity integrations can work around this by implementing
custom admission controllers but the issue is common and general enough to
warrent a consisntent API (and thus a requires a solution in core).

Member
consistent


Member
and thus requires


# Proposal

We can add a "validatedAnnotations" field on PodSpec and validate these

Member

I agree the most compelling use case is on the PodSpec. Are there any use cases for this beyond things that end up creating Pods? Could this be useful alongside the standard annotations in ObjectMeta?


Member Author

I'm not sure that would solve this use case. Annotations aren't passed to sub-objects.

I agree. There may be some CRDs that would benefit from this as well (e.g. Istio service authn/authz policies that need to be bound to the same service identity).
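
For concreteness, here is a minimal sketch of what the proposed field might look like. The quoted proposal text is truncated above, so the `map[string]string` shape and the semantics described in the comments are assumptions, not taken from the PR:

```go
// Hypothetical sketch of the proposed field; the exact shape is assumed,
// not quoted from the PR.
package core

type PodSpec struct {
	// ... existing PodSpec fields elided ...

	// ValidatedAnnotations would differ from ObjectMeta annotations in that
	// every write to them is checked against authorization/admission policy,
	// so an external system could treat them as trusted assertions about the
	// pod's identity rather than free-form metadata.
	// +optional
	ValidatedAnnotations map[string]string `json:"validatedAnnotations,omitempty"`
}
```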

reading from the Kubernetes API.

By serving an authorizer that translates the SubjectAccessReview into a
delegation check and only injection credentials into pods conditional on the

Member
"only injects credentials"

@jbeda
Contributor

jbeda commented Jan 26, 2018

We talked about this at length at the workload identity WG meeting. I have concerns about trying to lock this down at a sub-namespace level in the face of the proliferation of controllers and the confused deputy issues those introduce.

@nckturner
Contributor

We would love to have the ability to reason about who has access to service accounts/credentials without having to understand the implicit access given by exec, pod creation, image updates, etc. So if it's technically feasible, it would be great to see something like this.

@jboeuf

jboeuf commented Jan 27, 2018 via email

@jbeda
Contributor

jbeda commented Jan 28, 2018

@jboeuf @nckturner -- I understand what you'd like to have happen here. But I'm not convinced that this is advisable or even possible with the design of Kubernetes and the way the community is going.

Specifically, as more and more people write controllers, we introduce a whole pile of confused deputy issues. We can't view the set of controllers in kubernetes as a closed set.

Right now for all intents and purposes, from an API access point of view, the namespace is our only real hard security boundary. Any number of controllers have "root" for a namespace (and even across the cluster!) and there is an implicit assumption that anyone that can write a CRD/resource in a namespace for those controllers has permission to use anything in the namespace.

If you watched the call, you'll see that I drew the comparison to Google MapReduce and LOAS/MDB groups. MapReduce itself is a "controller" and will spawn new Borg Jobs. When it does so, it happens in the context of the owning MDB group. If the original user that launched the MR job is propagated at all, it is a best-effort type of thing, used for debugging/auditing and not for authz decisions. I haven't been at Google for 2+ years. Has the MapReduce/Borg/MDB/LOAS story moved forward in that time? What about shared MapReduce worker pools?

The closest analogy to a Google MDB group is a k8s namespace. While we do have sub-identities, those are really only useful to further scope permissions for workloads. There is very little anyone can do in k8s without "root" in a namespace.

And k8s is not going to be the last third party "opaque" system that people will want to trace permissions through. There are all sorts of systems that will store things like GCP SA JSON Keys. Those systems will have their own authz systems that may or may not relate back to Google identities.

Another option that I brought up in the meeting is to have the k8s authz systems support an optional "backtrace" that can ask what identities have access to write specific objects into a namespace.
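
No such API exists today, so purely as a hypothetical sketch of the "backtrace" idea — inverting the usual authorization question from "may subject S write object O?" to "which subjects may write O?" (all names below are invented for illustration):

```go
package authzsketch

// ObjectRef identifies the namespaced object the query is about.
type ObjectRef struct {
	Namespace string
	Resource  string
	Name      string
}

// BacktraceAuthorizer is a hypothetical optional extension an authorizer
// could implement to enumerate, rather than merely check, access: which
// users and groups does policy grant the given verb on the object?
type BacktraceAuthorizer interface {
	SubjectsWithAccess(verb string, obj ObjectRef) (users, groups []string, err error)
}
```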

@jbeda
Contributor

jbeda commented Jan 28, 2018

Also -- this proposal is scoped widely enough, and will clearly have impact across SIGs (SIG-Apps for workload controllers, API Machinery for propagation/tracing, SIG-Architecture for new API values, others?), that it might make sense to make a KEP out of this and get wide review.

@spikecurtis

The way I've been thinking about this stuff is that there is an easy way, and a hard way.

The easy way is to just have the namespace be the security boundary.

The hard way is trying to control access at a granularity smaller than the namespace, like access to particular pod execution environments. I think @jbeda is right that this is a lot of work, and touches many aspects of Kubernetes.

The really relevant question, then, is: does our user community actually need this functionality? How do we know? @mikedanese (or others), can you comment on why namespace granularity is insufficient for the use cases you're imagining? Can you comment on how we know end users care about it?

@nckturner would being able to reason about it at the namespace level be sufficient for your purposes?

@mikedanese
Member Author

mikedanese commented Jan 31, 2018

The problem is that there are too many ways to access credentials injected by an external system. I want a mechanism that allows an authorization system to apply policy on "A accessing credentials of B" at a single choke point, rather than having to reason about the growing list of (resource, verb) pairs that allow this today. The desired property we want to enforce:

An actor should be able to access credentials of an identity only if the actor
has explicit authority to act as that identity.

The sub-namespace division is an attribute of this specific proposed solution, which I don't want to rathole on; e.g. this might equivalently be solved with "validatedNamespaceAnnotations" if that is preferable.

Questions:

  • Is this a desirable property to be able to enforce? Is it generally useful to add a mechanism to enforce this?
  • If this property is enforced, can we piggyback on it to drive access to credentials of external identities?
  • What's the best way to enforce this?
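
Worth noting: Kubernetes already has vocabulary for "explicit authority to act as an identity" — the real RBAC `impersonate` verb on service accounts. A sketch of asking the API server that question through a real authorization.k8s.io/v1 SubjectAccessReview (this only queries the authority; the single choke point that would enforce it is what the proposal adds):

```go
package delegationcheck

import (
	"context"

	authzv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canActAs asks whether `actor` holds the `impersonate` verb on the
// service account `identity` in namespace `ns` — the closest existing
// expression of "explicit authority to act as that identity".
func canActAs(client kubernetes.Interface, actor, ns, identity string) (bool, error) {
	sar := &authzv1.SubjectAccessReview{
		Spec: authzv1.SubjectAccessReviewSpec{
			User: actor,
			ResourceAttributes: &authzv1.ResourceAttributes{
				Namespace: ns,
				Verb:      "impersonate",
				Resource:  "serviceaccounts",
				Name:      identity,
			},
		},
	}
	resp, err := client.AuthorizationV1().SubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}
```

Enforcing the property would mean every credential path (pod creation, exec, etc.) funnels through a check like this one.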

@spikecurtis

The "namespace as a security boundary" issue runs pretty deep, and is a shorthand for many issues that prevent things from working with the property you would like to enforce. I don't think you'd get any serious argument that the property you describe is desirable, in an absolute sense, but we have to ask whether it's worth the cost to implement.

Thinking about how Service Account credentials work today helps me to reason about the issues.

Nominally, the API actions that give access to Service Account credentials are finite (sketched as RBAC rules after the list):

  • Create/update a pod
  • Exec into a pod
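
As a sketch, those two nominal actions written as the real RBAC rules that grant them — which is what makes the surface look deceptively small:

```go
package rbacsketch

import rbacv1 "k8s.io/api/rbac/v1"

// The two nominal credential-granting actions from the list above.
var credentialAccessRules = []rbacv1.PolicyRule{
	// Create/update a pod, and thereby mount any secret or run as any
	// service account in the namespace.
	{APIGroups: []string{""}, Resources: []string{"pods"}, Verbs: []string{"create", "update"}},
	// Exec into a pod: modeled in RBAC as "create" on the pods/exec subresource.
	{APIGroups: []string{""}, Resources: []string{"pods/exec"}, Verbs: []string{"create"}},
}
```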

But users have a myriad of ways to effectively impersonate other machine identities, such as configuring controllers to act on their behalf. The problem of attributing those actions back to a human user is very hard!

Right now the advice we're forced to give is basically: unless the user has read-only access to the namespace, you should assume they have access to all credentials. It's pragmatic, but not 100% accurate (inasmuch as there exist some privileges you can grant a user that are not read-only, but still don't allow access to the credentials).

The easy way is to maintain that status quo, but for external credentials. Tell people, unless you've locked a user down to read-only (or less) access to a namespace, you should assume they can get any external credentials in that namespace.

Is that good enough for our users?

@nckturner
Contributor

"Namespace as a security boundary" seems to contradict the 1.9 docs about namespaces, which state:

Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide.

I.e., there is no mention of a security boundary. Extrapolating: given the current state of things, should the guidance be to have a namespace for every credentialed identity?

@spikecurtis @jbeda I think if implementation is too difficult at a sub-namespace granularity, then perhaps, like @mikedanese said, it's still useful at the namespace level. What we would care about is giving guidance to our customers about, e.g., storing IAM credentials as Kubernetes secrets to be used by in-cluster workloads, and being able to give some guarantee about who can access them (beyond just knowing who has access to the namespace). I.e., given a namespace and a secret in that namespace, only users with explicit access to the secret would be able to do things like exec into pods in the namespace. Whereas right now, a user's access to the namespace dictates their access to the secret. I see value in it, but I also agree with @spikecurtis that we should speak to more users.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 4, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 3, 2018
@pdhung

pdhung commented Jun 26, 2018

@spikecurtis it is difficult to say whether this is enough for our users or not, as we have no way of knowing the complete set of our customers, and customers will have different needs anyway.

For us, as an established corporation trying out cool new things in the OSS world, the ability to share a resource while keeping fine-grained control over it is something we need.

An example from our use case: we have multiple customers, each with their own namespace running a set of software exposed via ingress rules. We have a wildcard certificate as a TLS secret, and we want this certificate to be usable, but not readable, by the customers.

The current situation is:

  1. A Secret cannot be referenced from another namespace, which effectively forces us to deploy the secret in each customer's namespace.
  2. But then the customers can of course create a pod and use that pod to read our TLS secret.

One solution is to issue each customer their own certificate, so that there is no shared secret. But due to organizational limitations and legal restrictions, we have to issue certificates using our own CA, and issuing a certificate takes at least a week. The main reason we want to try out Kubernetes is to speed up our agility, so this is somewhat ironic for our use case.

Are we representative of the majority of k8s users? Definitely not.
But I think our use case is common to many other organizations as well, especially non-tech companies.

@jdumars
Contributor

jdumars commented Jun 26, 2018

I talked to @mikedanese about this proposal, and we're going to close it. The hope is that once there's bandwidth to iterate on a solution, we'll open a KEP for this work and provide more visibility.

