
Do not enable static pods on non-control plane nodes by default #1541

Closed
joshrosso opened this issue Apr 27, 2019 · 17 comments
joshrosso commented Apr 27, 2019

FEATURE REQUEST

Non-control plane nodes run their kubelets with static pods enabled. I think this is unnecessary and may even pose a security risk, as a process or workload that gains hostPath access can arbitrarily place manifests in /etc/kubernetes/manifests and run privileged workloads.

My suggestion is to not include the following in /var/lib/kubelet/config.yaml.

staticPodPath: /etc/kubernetes/manifests

This applies to all nodes that are not joined as control plane nodes.

Let me know if this makes sense. If it does, I would be happy to make the change.
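For reference, this is the shape of the fragment in question (a sketch; the `apiVersion` matches the kubelet's v1beta1 config API, other fields omitted):

```yaml
# /var/lib/kubelet/config.yaml (fragment)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# the proposal: omit this field when the node joins as a worker,
# so the kubelet never watches a manifest directory
staticPodPath: /etc/kubernetes/manifests
```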

Versions

kubeadm version:

kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version:

    Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
    
neolit123 (Member) commented Apr 27, 2019

static pods are still considered an alternative way to run a workload on a node.
the user might decide not to manage it via the API server and scheduler, so this does have some uncommon benefits.

staticPodPath: /etc/kubernetes/manifests

if we remove this capability now, we might be breaking existing workload scenarios, even if they might be rare.

As a process or workload that gains hostPath access can arbitrarily place manifests in /etc/kubernetes/manifests and run privileged workloads.

such a workload would need to be privileged to write to the default manifest path. so it feels like this is a decision in the hands of the operator - i.e. don't give questionable workloads a privileged run?

said workload writing manifests in the default path can indeed result in pods being spawned, but i think i would consider this a node-level compromise, and the only attack that can reach the API server would be a DDoS in terms of mirror pods.

but let's get more opinions on this.

@kubernetes/sig-cluster-lifecycle-pr-reviews
/assign @mauilion @timothysc
for evaluation.

and thanks for bringing this up @joshrosso

pytimer commented Apr 27, 2019

We run ovs-agent as a static pod, so the kubelet's default static pod path is useful to me. If we remove this capability, I must find another way to deploy ovs-agent.

If we remove this capability, kubeadm should support another deployment method in place of static pods.

joshrosso (Author) commented Apr 27, 2019

@pytimer: for my learning, why run ovs-agent as a static pod rather than a daemonset?

@neolit123:

such a workload would need to be privileged to write to the default manifest path. so it feels like this is a decision in the hands of the operator - i.e. don't give questionable workloads a privileged run? ..... but i think i would consider this a node-level compromise

I believe the exploitation scenario would only require the following.

  1. hostPath usage was not restricted for the workload.
  2. The workload's process is allowed to run as root inside the container.

Here is my testing to prove this out. If I am misinterpreting the results, let me know.

I have created a default pod security policy to replicate this.

NAME          PRIV    CAPS               SELINUX    RUNASUSER   FSGROUP     SUPGROUP    READONLYROOTFS   VOLUMES
default       false                      RunAsAny   RunAsAny    MustRunAs   MustRunAs   false            configMap,emptyDir,projected,secret,hostPath,downwardAPI,persistentVolumeClaim

Note that spec.privileged: false, but I have "accidentally" allowed containers to run their processes as root and leverage hostPath.

I have deployed the following workload.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: org-1
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        securityContext:
          runAsUser: 0
        image: joshrosso/test:1.0
        command:
          - sleep
          - "600000"
        ports:
        - containerPort: 80
        volumeMounts:
          - mountPath: /home/aaaa
            name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /etc/kubernetes/manifests
          # this field is optional
          type: DirectoryOrCreate
The pod was admitted under the default policy:

kubectl -n org-1 get po nginx-deployment-6b89bddbbf-67cp9 -o yaml | grep -i psp
kubernetes.io/psp: default

Due to the mapping above, /home/aaaa has been chown'd to root:root, but since I'm running as root in the container, I can exec into it and start writing to /home/aaaa (mapped to /etc/kubernetes/manifests).
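For illustration, a manifest like the one below (hypothetical name and image), written to /home/aaaa and therefore landing in /etc/kubernetes/manifests on the host, would be started by the kubelet as a privileged static pod, bypassing the PSP entirely:

```yaml
# written from inside the compromised container as /home/aaaa/pwn.yaml
# (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: pwn
spec:
  hostPID: true
  containers:
  - name: pwn
    image: busybox
    command: ["sleep", "600000"]
    securityContext:
      privileged: true
```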

My thought process as to why static pods should not be enabled by default is as follows.

  1. Don't run something that's generally not used.
  2. Static pods (in my experience) should usually be daemonsets. Turning them on by default can encourage bad behavior.
  3. The above exploit, albeit very specific and requiring bad configuration.

However, your point about it being a breaking change is very fair. In the field, I'll probably just have to explicitly disable this in future joins. Thanks for the input.

mauilion (Contributor) commented Apr 28, 2019

I also strongly believe that static pod manifests should not be on by default on non-control-plane nodes. This is more of a "security best practice" in my mind.

Josh brings up a good point above around the behavior of things like PodSecurityPolicy and static pods.

With the ability to mount hostPath on the workers (which currently can only be limited by object quota and pod security policy), a user can mount /etc/kubernetes/manifests and place a static pod manifest there.

Static pods are initialized and managed by the kubelet directly, without being constrained by admission control of any sort.

As an operator of a PSP-enabled cluster, you can define a PSP bound to the system:nodes group, yet static pods would still be created and managed by the kubelet, even if the PSP allowed to the node does not permit admission for the pod. In this case the user will "assume" the static pod was not created, since it is not registered with the cluster (refused admission by the PSP mechanism), but the pod will be started and managed by the kubelet regardless.

That we enable static pods by default on all nodes is probably not in keeping with kubeadm creating a cluster configured with best practices in mind.

pytimer commented Apr 28, 2019

@joshrosso In my environment, every node has two network interfaces: one with an internal IP and one with an external IP. OVS creates a br-ex interface and sets the external IP on br-ex. The kubelet is started with --node-ip=<external-ip>, because if I use the internal IP, services running in the cluster can't be reached from outside.

If I reboot the node when using a DaemonSet, the ovs-agent pod can't run: the external IP is not yet bound to the br-ex interface, so the kubelet can't connect to the kube-apiserver and the ovs-agent can't be started by the kubelet. That's why we use a static pod.

I don't know much about OVS, so I'm not sure whether this way of running it is correct.

neolit123 (Member) commented Apr 28, 2019

thanks for the comments. i will add an agenda item for the kubeadm office hours meeting on wednesday.

@timothysc timothysc added this to the v1.15 milestone Apr 29, 2019
timothysc (Member) commented Apr 29, 2019

@mauilion @joshrosso - Can you outline the security issue here, b/c that location is locked down.

I could also see a number of initial boot-strapping conditions where static manifests could be very useful.

joshrosso (Author) commented Apr 29, 2019

@timothysc : does the scenario at #1541 (comment) highlight it?

Can you elaborate on "locked down" ?

I could also see a number of initial boot-strapping conditions where static manifests could be very useful.

I'm interested in these. On the control-plane, prior to Kubernetes existing, i can see the value. But for workers, it is less clear to me.

Thanks in advance.

timothysc (Member) commented Apr 30, 2019

@joshrosso static manifests can handle a number of initial node boot-up conditions; data-integrity checks are a good example. There are a bunch of initial conditions that can be mitigated via static manifests before the node re-joins the cluster on startup.

PatrickLang commented Apr 30, 2019

If someone can run with /etc/kubernetes mapped and with privilege, they could have also chosen /etc/defaults or /etc/rc*.d or /root/.bashrc too. All those are reasonable startup vectors.

I'm considering using static pods for IoT scenarios on Linux where I may not have direct connectivity to the apiserver. Instead of relying on docker restart-always or another systemd entry, I'd rather just use a static pod. I would run a ssh tunnel, openvpn, or something like that in the static pod.
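A minimal sketch of what such a tunnel static pod could look like (image, paths, and args are all hypothetical):

```yaml
# /etc/kubernetes/manifests/edge-tunnel.yaml on the IoT node (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: edge-tunnel
spec:
  hostNetwork: true
  containers:
  - name: openvpn
    image: example.com/openvpn:latest   # hypothetical image
    args: ["--config", "/etc/openvpn/client.conf"]
    volumeMounts:
    - name: conf
      mountPath: /etc/openvpn
  volumes:
  - name: conf
    hostPath:
      path: /etc/openvpn
```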

joshrosso (Author) commented May 1, 2019

Absolutely. The larger issue would be that one was able to create a hostPath and run as root inside their container.

But I think adding another start-up vector on worker nodes is still not a good idea.

Thanks for the data on static pod usage. In full transparency, I feel these use cases are better served by config management doing validation or by systemd units managing lower-level services. Re-using static pods there feels hackish and messy, albeit more convenient than managing additional systemd units. Granted, that is just my view of the world and it may be flawed.

I think the crux here is whether this is a sensible default that embodies best practices. And balancing that with the potential to break compatibility.

I am happy to take on this ticket should we decide to do it. Completely understand if we do not.

timothysc (Member) commented May 1, 2019

@joshrosso we talked about it today, and there always exists the option to disable this via a kubelet configuration override, so folks who want to can disable it.

We chatted about the possible security issues but we're not overly concerned b/c if a user has access to root level locations all bets are off.

I'm not opposed to updating FAQs and guidance but given that multiple folks use the facility today, I think changing the default could be disruptive. I'll leave this open for a while and see if we can collect more data.
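For anyone wanting to opt out today, one possible override (a sketch, not an official kubeadm recipe; it operates on a sample copy of the config, and on a real worker the file is /var/lib/kubelet/config.yaml with a kubelet restart afterwards):

```shell
# sketch: drop the staticPodPath entry from a sample copy of the kubelet config
cat > /tmp/kubelet-config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests
EOF
sed -i '/^staticPodPath:/d' /tmp/kubelet-config.yaml
grep staticPodPath /tmp/kubelet-config.yaml || echo "staticPodPath removed"
# prints "staticPodPath removed"
# on a real node, follow with: systemctl restart kubelet
```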

joshrosso (Author) commented May 1, 2019

@timothysc: sounds good. Thanks for the consideration and follow-up.

neolit123 (Member) commented May 1, 2019

@timothysc
as promised during the meeting today, i checked the latest CIS report that @randomvariable shared with me and this is not there (yet).

fejta-bot commented Jul 30, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

neolit123 (Member) commented Jul 30, 2019

/lifecycle frozen

neolit123 (Member) commented Aug 3, 2019

i'm going to close this ticket as the discussion was mostly resolved.
please re-open if needed.

I'm not opposed to updating FAQs and guidance but given that multiple folks use the facility today, I think changing the default could be disruptive. I'll leave this open for a while and see if we can collect more data.

my main concern as well. changing the default breaks existing setups.
once we do the security audit of kubeadm we can plan on writing a best practices doc, where disabling static pods on workers can be included.

@neolit123 neolit123 closed this Aug 3, 2019