Perform challenge callbacks into a node #15125
Conversation
/hold for discussion, I don't think we should merge this lightly :-)
One notable thing: people can run pods with hostNetwork, and then it's possible to impersonate a node. But as I see it: why should we accept calls at all from instances that are already members of the Kubernetes cluster? If we don't accept those, it's pretty easy to protect against normal pods. One thing that comes to mind: if a node's lifetime is long enough, the kubelet certificates may expire and re-registration is needed (should kops-controller accept that case?). That could be done like using ...
So I think we need this PR to prevent the node from re-registering. We need this so that we can have higher confidence when the node joins (particularly on clouds where we don't get strong node attestation, for example DigitalOcean). And we want to prevent the node from re-registering to avoid attacks where a pod tries to impersonate a node. We do want to allow some re-registration (as you've pointed out, when the node reboots, or when the cert expires) ... I am thinking we want a machine-key or similar, but I think we've agreed that we can iterate on that!
/test pull-kops-e2e-cni-cilium-ipv6
/retest Going to look into each of the failures, but they all appear to be unrelated.
Test failures matched kubernetes/kubernetes#117363 , i.e. a data race in k8s |
In order to verify that the caller is running on the specified node, we source the expected IP address from the cloud, and require that the node set up a simple challenge/response server to answer requests. Because the challenge server runs on a port outside of the nodePort range, this also makes it harder for pods to impersonate their host nodes - though we do combine this with TPM and similar functionality where it is available.
DigitalOcean (and others) will follow shortly. Also create a method on CloudProvider, so that we are more agnostic towards bootstrapping methods.
/retest
I made the changes here to only run this on Hetzner (and I'll rebase the DigitalOcean branch after this merges).
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: hakman. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Thanks for reviewing @hakman. /hold cancel
/test pull-kops-e2e-cni-cilium (It was the openapi data race again) |