
Support Kubernetes Namespaces in Policies #2921

Closed
jakubdyszkiewicz opened this issue Oct 7, 2021 · 12 comments
Labels: kind/design (Design doc or related), triage/accepted (The issue was reviewed and is complete enough to start working on it)

@jakubdyszkiewicz (Contributor) commented Oct 7, 2021

Problem

Kuma was built on the assumption that it spans many workloads regardless of Kubernetes namespace. For example, workloads from two different namespaces can be placed in one Mesh. For this reason, among many others, we decided there was no good way to support Kubernetes Namespaces while respecting Universal at the same time.

The policies in the initial version of Kuma were namespace-scoped. For example, a TrafficRoute could be placed in namespace team-a, but the fact that it was placed in the team-a namespace meant nothing, because when we generated Envoy config we took TrafficRoutes from all namespaces.

Eventually, we figured out that this was confusing for users, so we changed our policy scope to Global.

The problem is that users are familiar with namespace separation, and it would be great to support that concept.

Solution

The incoming changes to policies described here bring clearer matching rules, which opens up new possibilities for solving this problem.

Step 1 - add a canonical k8s.kuma.io/namespace tag

Done in #3367

The first step is to introduce a new canonical k8s.kuma.io/namespace tag. This tag will be automatically added to the inbounds of Dataplane objects that are generated on Kubernetes. Example:

apiVersion: kuma.io/v1alpha1
kind: Dataplane
mesh: default
metadata:
  name: pod-1
  namespace: team-a
spec:
  networking:
    address: 192.168.0.1
    inbound:
      - port: 8080
        tags:
          k8s.kuma.io/namespace: team-a # <- tag that is automatically added
          kuma.io/service: example_demo_svc_80

Step 2 - restore Namespace scope of policies

Related #1366

All policies can again be placed in a namespace. You could place a TrafficRoute named route-1 in namespace team-a. In our core model this will be converted to the name team-a.route-1, just as Dataplane objects are at this moment (to support the same names in different namespaces); a sketch follows.
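As a minimal sketch of this naming conversion (reusing the hypothetical route-1 from above, with the spec elided):

apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: route-1
  namespace: team-a
spec: ...

becomes, in the core model,

type: TrafficRoute
name: team-a.route-1 # <- namespace prefix added during conversion
mesh: default
...

However...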

Step 3 - introduce kuma-global-policies namespace

Introduce an arbitrary namespace, configurable in the CP config, that holds resources not bound to any namespace.
For example, a TrafficRoute named route-2 in kuma-global-policies will be named just route-2 in our core model.
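Sketched in the same style (route-2 being the hypothetical name from the sentence above, spec elided):

apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: route-2
  namespace: kuma-global-policies
spec: ...

becomes

type: TrafficRoute
name: route-2 # <- no namespace prefix for the global policies namespace
mesh: default
...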

This could be just the kuma-system namespace.

This will also be the namespace that we use to sync resources from the Global CP.

A likely side effect of this change is that we would enable the use case of running both a Global and a Zone CP in one Kubernetes cluster.

Step 4 - take into account the namespace when converting policies to our core model

Here is where the interesting things happen. Given the upcoming changes to policy matching, we know that we ALWAYS apply a policy on the Envoy proxies selected by sources in a Connection Policy (like Timeout) and by selectors in a Dataplane Policy (like ProxyTemplate).

When converting a policy from Kubernetes to our core model, if the policy is placed in a namespace, we automatically add the namespace tag to its selectors, so a connection policy like this

apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: web-to-backend
  namespace: team-a
spec:
  sources:
    - match:
        kuma.io/service: web
  destinations:
    - match:
        kuma.io/service: backend
  conf:
    destination:
      kuma.io/service: backend
      version: 1

becomes

type: TrafficRoute
name: team-a.web-to-backend
mesh: default
sources:
  - match:
      kuma.io/service: web
      k8s.kuma.io/namespace: team-a # <- automatically added when converting resource to core model
destinations:
  - match:
      kuma.io/service: backend
conf:
  destination:
    kuma.io/service: backend
    version: 1

and a Dataplane Policy defined like this

apiVersion: kuma.io/v1alpha1
kind: ProxyTemplate
mesh: default
metadata:
  name: custom-template-1
  namespace: team-a
spec:
  selectors:
    - match:
        kuma.io/service: '*'
  conf: ...

becomes

type: ProxyTemplate
mesh: default
name: team-a.custom-template-1
selectors:
  - match:
      kuma.io/service: '*'
      k8s.kuma.io/namespace: team-a # <- automatically added when converting resource to core model
conf: ...

Policies defined in kuma-global-policies won't have this tag automatically added.
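For contrast, a sketch of the same conversion for a policy placed in kuma-global-policies (the name custom-template-2 is hypothetical):

apiVersion: kuma.io/v1alpha1
kind: ProxyTemplate
mesh: default
metadata:
  name: custom-template-2
  namespace: kuma-global-policies
spec:
  selectors:
    - match:
        kuma.io/service: '*'
  conf: ...

becomes

type: ProxyTemplate
mesh: default
name: custom-template-2
selectors:
  - match:
      kuma.io/service: '*' # <- no k8s.kuma.io/namespace tag added
conf: ...

so it can select data plane proxies in any namespace.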

This has a nice implication: owners of namespace team-a can only define policies that affect data plane proxies in namespace team-a, which is what Kubernetes users are used to.

From the implementation perspective, retrieving resources and matching stay exactly the same. Internally, we don't need to treat namespaces as a special case, since the namespace is just another tag: the team-a.web-to-backend route above matches only proxies whose inbounds carry k8s.kuma.io/namespace: team-a, such as pod-1 from step 1.

@jpeach (Contributor) commented Oct 7, 2021

The problem is that users are familiar with namespace separation, and it would be great to support that concept.

What problems would users be able to solve with namespace support that are difficult to solve now?

@jakubdyszkiewicz (Contributor, Author)

The main reasons for introducing it would be:

  1. Native Kubernetes access control. If you already have a system in place to restrict access per namespace, you can leverage it. Of course, this assumes a single cluster.
  2. More predictable behavior on Kubernetes. Kubernetes users are used to placing resources in a namespace and expect that those resources will apply only to the given namespace.

@jpeach (Contributor) commented Oct 17, 2021

(1) makes sense for simple deployments, though if Kuma has a parallel RBAC system, that calculus might not make sense any more.
(2) Yes and no. There are cases where same-namespace applicability for APIs makes sense, but others where users don't want this. I guess I'd be interested in specifically how people might be able to use this.

For (2), I think the idea of being able to delegate policy control to a service owner (defined as an actor with permission to deploy in a namespace) is pretty attractive, which I think is the original goal of this proposal 👍

This has a nice implication: owners of namespace team-a can only define policies that affect data plane proxies in namespace team-a, which is what Kubernetes users are used to.

cert-manager APIs might be an interesting point of comparison, since they allow both cluster-scoped and namespace-scoped uses (i.e. Issuer, ClusterIssuer, etc.).

@jakubdyszkiewicz (Contributor, Author)

  1. Then it's your choice. If we had an RBAC system, then you could:
    a) always use kuma-global-policies and rely on our RBAC
    b) use namespaces and not use our RBAC
    c) use our RBAC and namespaces at the same time. I don't see how the calculus would stop making sense. We are just adding one tag to either sources or selectors.

cert-manager APIs might be an interesting point of comparison, since they allow both cluster-scoped and namespace-scoped uses (i.e. Issuer, ClusterIssuer, etc.).

Yeah, I saw this pattern in the Kong Ingress Controller too. It might be OK when you have a couple of CRDs, but duplicating 10? 15? CRDs with a Cluster prefix may not be the best idea 🤔

@jakubdyszkiewicz (Contributor, Author)

Triage: we are making progress; we are on step 2 with the new policies.

@lahabana (Contributor)

It would be interesting to see what this would look like with targetRef policies.

@lahabana (Contributor)

This has been done now.
