Kubelet anonymous-auth documentation is lacking #3891
/cc @bradgeesaman What is the best-practice security setup?
@josselin-c What are the full command-line options that your kubelet is running with? Here https://github.com/bgeesaman/kubernetes-the-hard-way/blob/0aaf79ec93356f3afee534d67e17acca273c5d25/docs/09-bootstrapping-kubernetes-workers.md is a working kubelet command set for a prior release of Kubernetes the Hard Way, as a reference.
Kubelet needs to use the
So you should definitely have anonymousAuth=false; you can get up to a lot of mischief otherwise, assuming you are not blocking the local kubelet API port from your containers. In kops, if you set
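For reference, later kops releases expose this setting directly in the cluster spec (the wiring is what issue #1231 tracked). A hypothetical fragment, assuming a kops version where the field is supported:

```yaml
# Cluster-spec sketch -- the kubelet.anonymousAuth field was not yet wired
# through in kops 1.7 (see #1231), so this assumes a later release.
# The cluster name is a placeholder.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: mycluster.example.com
spec:
  kubelet:
    anonymousAuth: false
```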
Thanks for the pointers, I better understand what options I have to set now:
It doesn't seem possible to set |
correct ...
@josselin-c Can you manually edit that setting for your kubelet on one worker node (SSH in, edit, restart kubelet) and see if that node still functions correctly (pods schedule, you can ...)? Over the next few weeks, I'll be looking into these specifics myself, but this was the process I took when submitting the PR I linked above to Kubernetes the Hard Way to validate the configuration.
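For anyone following along, that manual test can be sketched roughly as below. The `/etc/sysconfig/kubelet` path and the `DAEMON_ARGS` variable are assumptions based on the kops node image layout; the sketch edits a temp file so it can be tried anywhere:

```shell
# Sketch of the manual change, assuming a kops-style /etc/sysconfig/kubelet
# that feeds a DAEMON_ARGS variable to the kubelet unit. We edit a temp copy
# here; on a real node you would edit the file in place over SSH.
KUBELET_DEFAULTS=$(mktemp)
cat > "$KUBELET_DEFAULTS" <<'EOF'
DAEMON_ARGS="--allow-privileged=true --cloud-provider=aws"
EOF

# Append --anonymous-auth=false to the existing DAEMON_ARGS line.
sed -i 's/^DAEMON_ARGS="\(.*\)"$/DAEMON_ARGS="\1 --anonymous-auth=false"/' "$KUBELET_DEFAULTS"
cat "$KUBELET_DEFAULTS"

# On the real node, follow with:
#   systemctl restart kubelet
```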
Okay, here is what I did:
Created the /var/lib/kubernetes/ca.pem file from the certificate-authority-data field of the /var/lib/kubelet/kubeconfig file. Made /etc/sysconfig/kubelet immutable so it isn't replaced on the next boot. Then rebooted. After that, the node is marked as
Maybe I set the wrong CA? /var/lib/kubernetes/ca.pem didn't exist before I created it.
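The ca.pem extraction described above can be sketched like this. The kubeconfig layout is the standard one; the sketch builds a stand-in file with fake data so it is runnable anywhere:

```shell
# Build a stand-in for /var/lib/kubelet/kubeconfig with a fake, base64-encoded
# CA so the extraction can be demonstrated without a real node.
KUBECONFIG_FILE=$(mktemp)
cat > "$KUBECONFIG_FILE" <<EOF
clusters:
- cluster:
    certificate-authority-data: $(printf 'FAKE-CA-PEM' | base64)
EOF

# certificate-authority-data holds base64-encoded PEM; decode it into ca.pem
# (here a temp file standing in for /var/lib/kubernetes/ca.pem).
CA_OUT=$(mktemp)
grep 'certificate-authority-data' "$KUBECONFIG_FILE" \
  | awk '{print $2}' | base64 -d > "$CA_OUT"
cat "$CA_OUT"
```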
Ok, I spun up a 1.7.1 cluster with everything as defaults and hand-edited things until they worked. This is not a final solution per se, but rather a map on how to get to a potentially working destination. NOTE: Do not deploy these changes to a cluster you care about. I enable RBAC here, and I guarantee that I'm breaking other services via missed RBAC policies. On the workers, go into
On the master, edit
Run
Run
You should now be able to perform exec and log actions again. So, I know this isn't an "add this option to kops in a YAML and go" solution, but it does outline some of the work needed to make this function as intended.
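One of the RBAC gaps alluded to above is the apiserver's own access to the kubelet API once anonymous auth is off (exec, logs, and port-forward all proxy through it). A sketch of the kind of binding involved; the subject name `kubelet-api` is an assumption and must match the CN of the client certificate the apiserver presents to kubelets:

```yaml
# Sketch only: binds the built-in system:kubelet-api-admin ClusterRole to the
# identity the apiserver uses when calling kubelets. The user name
# "kubelet-api" is an assumption -- match it to your apiserver's client cert.
apiVersion: rbac.authorization.k8s.io/v1beta1   # rbac/v1beta1 matches k8s 1.7
kind: ClusterRoleBinding
metadata:
  name: apiserver-to-kubelet
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-api
```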
Issue #1231 is what is causing the additional RBAC items to be necessary (instead of following the naming convention that gets the permissions automatically applied).
/area security
@josselin-c this issue should be covered by getting #1231 working in kops. Agreed? If so, can we close this as a duplicate?
Probably.
@josselin-c Clever! Are you using this successfully in a cluster running calico/weave/other?
On clusters with calico/weave I'd look into NetworkPolicies; I had to use a DaemonSet because flannel doesn't support them.
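The DaemonSet approach mentioned above might look roughly like this: a privileged, host-network pod on every node that inserts an iptables rule dropping pod-CIDR traffic to the kubelet port. The image and the 100.96.0.0/11 CIDR (kops' default pod range) are assumptions to adjust for your cluster:

```yaml
# Rough sketch, not a hardened manifest. extensions/v1beta1 is the DaemonSet
# API group in k8s 1.7.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: block-kubelet-from-pods
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: block-kubelet-from-pods
    spec:
      hostNetwork: true
      containers:
      - name: iptables
        image: alpine:3.6          # any image with iptables works
        securityContext:
          privileged: true
        command:
        - /bin/sh
        - -c
        - |
          apk add --no-cache iptables
          # 100.96.0.0/11 is kops' default pod CIDR -- adjust for your cluster
          iptables -C INPUT -s 100.96.0.0/11 -p tcp --dport 10250 -j DROP \
            || iptables -I INPUT -s 100.96.0.0/11 -p tcp --dport 10250 -j DROP
          sleep infinity
```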
Ah, that makes sense. IME, Calico monitors "foreign" iptables rules and removes them automatically. In cases where NetworkPolicy doesn't yet support egress filtering, rules like these can be a useful stopgap if they can be made to "stick".
I take that back. Kops 1.7.1 (k8s 1.7.10) with calico does not modify the added rules. This is the shortened output of a simple shell script that I run inside a pod to see what it can see/do:
Notice how #7 is blocked but #8 and #9 still succeed. This is what the traffic from the audit pod (100.97.190.131/32) on the worker (172.20.50.151) going to the master (172.20.57.132) looks like on its way out:
A crude but certainly workable stop-gap is to edit the
to be something like:
Now, the run looks like:
Of course, these workarounds aren't needed with 1.8.x+ and CNI plugins (like calico) that support egress.
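With 1.8.x+ and an egress-capable CNI, the same block can be expressed declaratively. A sketch, assuming 172.20.0.0/16 is the node/VPC CIDR (as in the trace above) and that cutting all pod egress to the node network is acceptable:

```yaml
# Sketch (k8s 1.8+, egress-capable CNI such as calico): allow all pod egress
# except to the node/VPC network, which blocks kubelet :10250 among others.
# The CIDR and namespace are assumptions -- adjust for your cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-node-access
  namespace: default
spec:
  podSelector: {}          # applies to all pods in this namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 172.20.0.0/16    # node/VPC CIDR
```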
Thanks for the review; indeed, it wasn't enough to block traffic from the pod CIDR.
Again, very clever!
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
What kops version are you running?
Version 1.7.1
What Kubernetes version are you running?
kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
What cloud provider are you using?
AWS
What commands did you run? What is the simplest way to reproduce this issue?
What happened after the commands executed?
The cluster isn't working. Kubectl fails connecting to the API Server:
kubectl get pods
returns an IO timeout.
What did you expect to happen?
My cluster is running and safe from https://github.com/kayrus/kubelet-exploit kind of attacks.
Please provide your cluster manifest.
I want to set up a cluster with kops where pods can't talk directly to the kubelet. I think I have to set
anonymousAuth: false
and it should work, but it doesn't. I looked in other issues and the documentation, and tried a few things, but nothing worked.
Maybe the procedure should be easier to find/more explicit.