
✨ Adding DNS network policies #2423

Merged

Conversation

lionelvillard
Contributor

@lionelvillard lionelvillard commented Nov 28, 2022

Summary

Restrict access to DNS pods to the workspace associated with them. Also, make sure DNS pods only have access to the CoreDNS pods (and to the API server, to watch their associated ConfigMap).

This is enforced by creating this NetworkPolicy, one per workspace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: Name
  namespace: Namespace
spec:
  podSelector:
    matchLabels:
      app: Name
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              internal.workload.kcp.io/cluster: Cluster # workspace key
      ports:
        - protocol: TCP
          port: 5353
        - protocol: UDP
          port: 5353
  egress:
    # Only give access to coredns in kube-system
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
        - podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
    # Give access to the API server to watch its associated ConfigMap
    - to:
        # one ipBlock per IP (dynamically filled)
        - ipBlock:
            cidr: APIServerIP/32
      # one ports entry per endpoint port (dynamically filled)
      ports:
        - protocol: TCP
          port: 6443

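For illustration, below is a minimal sketch (not the actual kcp implementation) of how such a per-workspace policy could be assembled programmatically with the upstream k8s.io/api/networking/v1 types. The function name and its inputs (dnsID, tenantNamespace, clusterName, apiServerIPs) are hypothetical; the label keys and ports mirror the YAML above. Note that the kube-system and CoreDNS selectors are combined into a single peer here to match the stated intent of only allowing egress to CoreDNS.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

// dnsNetworkPolicy builds a per-workspace policy equivalent to the YAML above.
// dnsID names the DNS deployment, tenantNamespace is where it runs, clusterName
// is the workspace key, and apiServerIPs are the API server endpoint IPs.
func dnsNetworkPolicy(dnsID, tenantNamespace, clusterName string, apiServerIPs []string) *networkingv1.NetworkPolicy {
	tcp, udp := corev1.ProtocolTCP, corev1.ProtocolUDP
	dnsPort := intstr.FromInt(5353)       // the DNS pod listens on 5353
	upstreamPort := intstr.FromInt(53)    // CoreDNS in kube-system
	apiServerPort := intstr.FromInt(6443) // API server endpoint port

	// One ipBlock per API server endpoint IP (dynamically filled).
	apiServerPeers := make([]networkingv1.NetworkPolicyPeer, 0, len(apiServerIPs))
	for _, ip := range apiServerIPs {
		apiServerPeers = append(apiServerPeers, networkingv1.NetworkPolicyPeer{
			IPBlock: &networkingv1.IPBlock{CIDR: ip + "/32"},
		})
	}

	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: dnsID, Namespace: tenantNamespace},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": dnsID}},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress, networkingv1.PolicyTypeEgress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				// Only pods in namespaces belonging to this workspace may reach the DNS pod.
				From: []networkingv1.NetworkPolicyPeer{{
					NamespaceSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"internal.workload.kcp.io/cluster": clusterName},
					},
				}},
				Ports: []networkingv1.NetworkPolicyPort{
					{Protocol: &tcp, Port: &dnsPort},
					{Protocol: &udp, Port: &dnsPort},
				},
			}},
			Egress: []networkingv1.NetworkPolicyEgressRule{
				{
					// Only give access to CoreDNS in kube-system.
					To: []networkingv1.NetworkPolicyPeer{{
						NamespaceSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{"kubernetes.io/metadata.name": "kube-system"},
						},
						PodSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{"k8s-app": "kube-dns"},
						},
					}},
					Ports: []networkingv1.NetworkPolicyPort{
						{Protocol: &tcp, Port: &upstreamPort},
						{Protocol: &udp, Port: &upstreamPort},
					},
				},
				{
					// Give access to the API server to watch the DNS ConfigMap.
					To:    apiServerPeers,
					Ports: []networkingv1.NetworkPolicyPort{{Protocol: &tcp, Port: &apiServerPort}},
				},
			},
		},
	}
}

func main() {
	// Hypothetical example values, just to render the generated policy.
	np := dnsNetworkPolicy("kcp-dns-synctarget-abc123", "kcp-dns-ns", "root-org-ws", []string{"10.96.0.1"})
	out, _ := yaml.Marshal(np)
	fmt.Println(string(out))
}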
Related issue(s)

Fix #1988

Related issue for cleaning up DNS-related resources: kcp-dev/contrib-tmc#80

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 28, 2022
@openshift-ci
Contributor

openshift-ci bot commented Nov 28, 2022

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@lionelvillard lionelvillard marked this pull request as ready for review November 29, 2022 13:34
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 29, 2022
@davidfestal
Member

Partial fix for #1988

Could you provide more details about why it is a partial fix? What is implemented and what is not?

@davidfestal davidfestal added the area/transparent-multi-cluster Related to scheduling of workloads into pclusters. label Dec 2, 2022
@lionelvillard lionelvillard force-pushed the dns-network-policies branch 4 times, most recently from a4f5b63 to 75610fc Compare January 11, 2023 17:49
@lionelvillard
Contributor Author

not sure if it's a flake:

goroutine 26251 [running]:
        k8s.io/apimachinery/pkg/util/runtime.logPanic({0x2c1d260?, 0x38ec8d0})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/runtime/runtime.go:75 +0x99
        k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00e49b990?})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/runtime/runtime.go:49 +0x75
        panic({0x2c1d260, 0x38ec8d0})
        	/usr/local/go/src/runtime/panic.go:884 +0x212
        reflect.Value.Index({0x2b87f20?, 0xc004693a40?, 0xa?}, 0x2cf0540?)
        	/usr/local/go/src/reflect/value.go:1412 +0x16d
        k8s.io/apimachinery/pkg/runtime.sliceToUnstructured({0x2b8bfa0?, 0xc00d680898?, 0xc008306fd8?}, {0x2cf0540?, 0xc003c986d0?, 0x2dd8d60?})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/runtime/converter.go:763 +0x785
        k8s.io/apimachinery/pkg/runtime.toUnstructured({0x2b8bfa0?, 0xc00d680898?, 0x2?}, {0x2cf0540?, 0xc003c986d0?, 0x98?})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/runtime/converter.go:688 +0x6cc
        k8s.io/apimachinery/pkg/runtime.structToUnstructured({0x30226e0?, 0xc00d680780?, 0xc00da9a5a0?}, {0x2d4c880?, 0xc00822af38?, 0x2b52b20?})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/runtime/converter.go:843 +0x905
        k8s.io/apimachinery/pkg/runtime.toUnstructured({0x30226e0?, 0xc00d680780?, 0xc00d680780?}, {0x2d4c880?, 0xc00822af38?, 0xc00da9a570?})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/runtime/converter.go:692 +0x7f4
        k8s.io/apimachinery/pkg/runtime.(*unstructuredConverter).ToUnstructured(0x54536d0, {0x31fc540?, 0xc00d680780})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/runtime/converter.go:586 +0x3ba
        github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.toUnstructured({0x31fc540, 0xc00d680780})
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_reconcile_unstructured.go:213 +0x6c
        github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.(*controller).reconcileObject(0xc000f5d300, {0x3924818, 0xc00da9a2a0}, {0xc00e4860a5?, 0x1?}, {{0x326d8a3, 0x19}, {0x3238f30, 0x2}, {0x3249a2d, ...}}, ...)
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_reconcile.go:177 +0x565
        github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.(*controller).reconcile(0xc000f5d300, {0x3924818, 0xc00da9a2a0}, {0xc00e486070, 0x70})
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_reconcile.go:97 +0xe1a
        github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.(*controller).processNextWorkItem(0xc000f5d300, {0x3924818, 0xc007d03320})
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_controller.go:193 +0x2b1
        github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.(*controller).startWorker(0xc00980ec00?, {0x3924818, 0xc007d03320})
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_controller.go:180 +0x39
        k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:188 +0x25
        k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x2?)
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:155 +0x3e
        k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x38f5900, 0xc00980ec00}, 0x1, 0xc005b2d2c0)
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:156 +0xb6
        k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0010147e0?, 0x3b9aca00, 0x0, 0x0?, 0xc002222000?)
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:133 +0x89
        k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x3924818, 0xc007d03320}, 0xc005af30e0, 0x1a7e305?, 0xc001bcce01?, 0x0?)
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:188 +0x99
        k8s.io/apimachinery/pkg/util/wait.UntilWithContext({0x3924818?, 0xc007d03320?}, 0xc0010147e0?, 0x0?)
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:99 +0x2b
        created by github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.(*controller).Start
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_controller.go:174 +0x345
        panic: reflect: slice index out of range [recovered]
        	panic: reflect: slice index out of range
        
        goroutine 26251 [running]:
        k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00e49b990?})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/runtime/runtime.go:56 +0xd7
        panic({0x2c1d260, 0x38ec8d0})
        	/usr/local/go/src/runtime/panic.go:884 +0x212
        reflect.Value.Index({0x2b87f20?, 0xc004693a40?, 0xa?}, 0x2cf0540?)
        	/usr/local/go/src/reflect/value.go:1412 +0x16d
        k8s.io/apimachinery/pkg/runtime.sliceToUnstructured({0x2b8bfa0?, 0xc00d680898?, 0xc008306fd8?}, {0x2cf0540?, 0xc003c986d0?, 0x2dd8d60?})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/runtime/converter.go:763 +0x785
        k8s.io/apimachinery/pkg/runtime.toUnstructured({0x2b8bfa0?, 0xc00d680898?, 0x2?}, {0x2cf0540?, 0xc003c986d0?, 0x98?})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/runtime/converter.go:688 +0x6cc
        k8s.io/apimachinery/pkg/runtime.structToUnstructured({0x30226e0?, 0xc00d680780?, 0xc00da9a5a0?}, {0x2d4c880?, 0xc00822af38?, 0x2b52b20?})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/runtime/converter.go:843 +0x905
        k8s.io/apimachinery/pkg/runtime.toUnstructured({0x30226e0?, 0xc00d680780?, 0xc00d680780?}, {0x2d4c880?, 0xc00822af38?, 0xc00da9a570?})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/runtime/converter.go:692 +0x7f4
        k8s.io/apimachinery/pkg/runtime.(*unstructuredConverter).ToUnstructured(0x54536d0, {0x31fc540?, 0xc00d680780})
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/runtime/converter.go:586 +0x3ba
        github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.toUnstructured({0x31fc540, 0xc00d680780})
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_reconcile_unstructured.go:213 +0x6c
        github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.(*controller).reconcileObject(0xc000f5d300, {0x3924818, 0xc00da9a2a0}, {0xc00e4860a5?, 0x1?}, {{0x326d8a3, 0x19}, {0x3238f30, 0x2}, {0x3249a2d, ...}}, ...)
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_reconcile.go:177 +0x565
        github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.(*controller).reconcile(0xc000f5d300, {0x3924818, 0xc00da9a2a0}, {0xc00e486070, 0x70})
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_reconcile.go:97 +0xe1a
        github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.(*controller).processNextWorkItem(0xc000f5d300, {0x3924818, 0xc007d03320})
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_controller.go:193 +0x2b1
        github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.(*controller).startWorker(0xc00980ec00?, {0x3924818, 0xc007d03320})
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_controller.go:180 +0x39
        k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:188 +0x25
        k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x2?)
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:155 +0x3e
        k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x38f5900, 0xc00980ec00}, 0x1, 0xc005b2d2c0)
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:156 +0xb6
        k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0010147e0?, 0x3b9aca00, 0x0, 0x0?, 0xc002222000?)
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:133 +0x89
        k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x3924818, 0xc007d03320}, 0xc005af30e0, 0x1a7e305?, 0xc001bcce01?, 0x0?)
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:188 +0x99
        k8s.io/apimachinery/pkg/util/wait.UntilWithContext({0x3924818?, 0xc007d03320?}, 0xc0010147e0?, 0x0?)
        	/go/pkg/mod/github.com/kcp-dev/kubernetes/staging/src/k8s.io/apimachinery@v0.0.0-20230109113100-c493866a854f/pkg/util/wait/wait.go:99 +0x2b
        created by github.com/kcp-dev/kcp/pkg/reconciler/cache/replication.(*controller).Start
        	/go/src/github.com/kcp-dev/kcp/pkg/reconciler/cache/replication/replication_controller.go:174 +0x345

@lionelvillard
Contributor Author

/test e2e
/test e2e-multiple-runs

1 similar comment
@lionelvillard
Contributor Author

/test e2e
/test e2e-multiple-runs

@lionelvillard lionelvillard changed the title ✨ Adding DNS network policies - Part 1 ✨ Adding DNS network policies Jan 12, 2023
@lionelvillard
Contributor Author

/hold

I'm adding e2e tests.

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 12, 2023
@lionelvillard lionelvillard force-pushed the dns-network-policies branch 7 times, most recently from d52f788 to 527ee46 Compare January 13, 2023 18:32
@lionelvillard
Contributor Author

/unhold

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 13, 2023
@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Feb 8, 2023
@davidfestal
Member

@sttts Do you want to approve ?

@sttts
Member

sttts commented Feb 8, 2023

/approve

@openshift-ci
Contributor

openshift-ci bot commented Feb 8, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sttts

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 8, 2023
@davidfestal
Member

/lgtm

@davidfestal
Member

/retest

@davidfestal
Member

/retest

1 similar comment
@davidfestal
Member

/retest

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Feb 8, 2023
@davidfestal
Member

/lgtm

1 similar comment
@davidfestal
Member

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Feb 9, 2023
@lionelvillard
Contributor Author

/test e2e-sharded

@openshift-merge-robot openshift-merge-robot merged commit ee81cfe into kcp-dev:main Feb 9, 2023
@kcp-ci-bot kcp-ci-bot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Nov 23, 2023