Do not enable static pods on non-control plane nodes by default #1541
static pods are still considered an alternative way to run a workload on a node.
if we remove this capability now, we might break existing workload scenarios, even if they are rare.
such a workload would need to be privileged to write to the default manifest path, so it feels like this is a decision in the hands of the operator - i.e. don't give questionable workloads a privileged run. said workload writing a manifest in the default path can indeed result in pods being spawned, but i would consider this a node-level compromise, and the only attack that could reach the API server would be a DDoS in terms of mirror pods. but let's get more opinions on this. @kubernetes/sig-cluster-lifecycle-pr-reviews and thanks for bringing this up @joshrosso
We run ovs-agent as a static pod, so the kubelet's default static pod path is useful to me. If we remove this capability, I must find another way to deploy ovs-agent, so kubeadm should support an alternative to static pods.
@pytimer: for my learning, why run ovs-agent as a static pod rather than a daemonset?
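For comparison, a hypothetical DaemonSet version of such an agent might look like the sketch below. The name, namespace, image, and privilege settings are my own illustration, not taken from this thread:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ovs-agent          # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: ovs-agent
  template:
    metadata:
      labels:
        app: ovs-agent
    spec:
      hostNetwork: true    # network agents typically need the host network
      containers:
      - name: ovs-agent
        image: example/ovs-agent:latest   # placeholder image
        securityContext:
          privileged: true                # assumed; depends on what the agent does
```

A DaemonSet runs one copy per node like a static pod does, but is visible to and managed by the API server, which is why it is usually the suggested alternative.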
I believe the exploitation scenario would only require the following. Here is my testing to prove this out. If I am misinterpreting the results, let me know.
Note that I have deployed the following workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: org-1
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        securityContext:
          runAsUser: 0
        image: joshrosso/test:1.0
        command:
        - sleep
        - "600000"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /home/aaaa
          name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /etc/kubernetes/manifests
          # this field is optional
          type: DirectoryOrCreate
```
Due to the mapping above, I can now exec into the container and start writing to the host's /etc/kubernetes/manifests. My thought process as to why static pods should not be enabled by default is as follows.
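To make the attack concrete, here is a minimal sketch of the kind of manifest an attacker could drop into the mounted path; the names and image are my own illustration, not from the test above. Anything written there is picked up by the kubelet as a static pod:

```yaml
# written to /home/aaaa/evil-pod.yaml inside the container,
# i.e. /etc/kubernetes/manifests/evil-pod.yaml on the host
apiVersion: v1
kind: Pod
metadata:
  name: evil-pod          # hypothetical name
spec:
  containers:
  - name: shell
    image: busybox        # placeholder image
    command: ["sleep", "3600"]
    securityContext:
      privileged: true    # no admission controller evaluates static pods
```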
However, your point about it being a breaking change is very fair. In the field, I'll probably just have to explicitly disable this in future joins. Thanks for the input.
I also strongly believe that static pod manifests should not be on by default on non-control-plane nodes. This is more a "security best practice" in my mind. Josh brings up a good point above around the behavior of things like PodSecurityPolicy and static pods. With the ability to mount hostPath on the workers (which currently can only be limited by object quota and pod security policy), a user can mount the manifest directory. Static pods are initialized and managed by the kubelet directly, without being constrained by admission control of any sort. As an operator of a PSP-enabled cluster you can define a PSP, but static pods are never evaluated against it. That we enable static pods by default on all nodes is probably not in keeping with kubeadm creating a cluster configured with best practices in mind.
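For reference, a PSP like the sketch below (policy name and allowed volume list are my own example) would normally block the hostPath mount used above for ordinary workloads; the point here is that the kubelet's static pods skip this check entirely:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-hostpath        # hypothetical policy name
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                 # hostPath deliberately omitted from the allowlist
  - configMap
  - emptyDir
  - secret
  - persistentVolumeClaim
```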
@joshrosso In my environment, every node has two network interfaces: one carries the internal IP and the other the external IP. I don't know much about ovs, so I don't know whether this way of running ovs is correct.
thanks for the comments. i will add an agenda item for the kubeadm office hours meeting on wednesday.
@mauilion @joshrosso - Can you outline the security issue here, b/c that location is locked down. I could also see a number of initial bootstrapping conditions where static manifests could be very useful.
@timothysc: does the scenario at #1541 (comment) highlight it? Can you elaborate on "locked down"?
I'm interested in these. On the control plane, prior to Kubernetes existing, I can see the value. But for workers, it is less clear to me. Thanks in advance.
@joshrosso static manifests could handle a bunch of initial node boot-up conditions; data-integrity checks are a good example. There are a bunch of initial conditions that can be mitigated via static manifests before the node re-joins the cluster on startup.
I'm considering using static pods for IoT scenarios on Linux where I may not have direct connectivity to the apiserver. Instead of relying on docker restart-always or another systemd entry, I'd rather just use a static pod. I would run an ssh tunnel, openvpn, or something like that in the static pod.
Absolutely. The larger issue would be that one was able to create a hostPath volume and run as root inside their container. But I think adding another start-up vector on worker nodes is still not a good idea. Thanks for the data on static pod usage. In full transparency, I feel like these use cases are better served by config management doing validation or by systemd units managing lower-level services. Re-using static pods there feels hackish and messy, albeit more convenient than managing additional systemd units. Granted, that is just my view of the world and it may be flawed. I think the crux here is whether this is a sensible default that embodies best practices, balanced against the potential to break compatibility. I am happy to take on this ticket should we decide to do it. Completely understand if we do not.
@joshrosso we talked about it today, and there always exists the option to disable this via a kubelet options override, so folks who want to can disable it. We chatted about the possible security issues, but we're not overly concerned b/c if a user has access to root-level locations all bets are off. I'm not opposed to updating FAQs and guidance, but given that multiple folks use the facility today, I think changing the default could be disruptive. I'll leave this open for a while and see if we can collect more data.
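As far as I understand it, the override mentioned here comes down to removing (or emptying) the static pod path in the node's kubelet configuration and restarting the kubelet; a sketch of the relevant excerpt:

```yaml
# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubeadm normally points this at /etc/kubernetes/manifests;
# leaving it unset or empty disables static pods on this node
staticPodPath: ""
```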
@timothysc: sounds good. Thanks for the consideration and follow-up.
@timothysc
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle frozen |
i'm going to close this ticket as the discussion was mostly resolved.
my main concern as well: changing the default breaks existing setups.
FEATURE REQUEST
Non-control plane nodes run their kubelets with static pods enabled. I think this is unnecessary and may even pose a security risk, as a process or workload that gains hostPath access can arbitrarily place manifests in /etc/kubernetes/manifests and run privileged workloads. My suggestion is to not include the following in /var/lib/kubelet/config.yaml. This applies to all nodes that are not joined as control plane nodes.
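The setting in question is presumably the static pod path that kubeadm writes into the kubelet config; my reconstruction, not recovered from the issue text:

```yaml
# /var/lib/kubelet/config.yaml (excerpt written by kubeadm)
staticPodPath: /etc/kubernetes/manifests
```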
Let me know if this makes sense. If it does, I would be happy to make the change.
Versions
kubeadm version:
Environment:
Kubernetes version: