This repository has been archived by the owner on Jan 11, 2023. It is now read-only.

add DenyEscalatingExec admission controller #1961

Merged
merged 1 commit on Dec 21, 2017

Conversation

pidah
Contributor

@pidah commented Dec 21, 2017

What this PR does / why we need it:
Currently there are a few pods running privileged containers like kube-proxy and calico-node. If these containers are compromised, an attacker can easily compromise the underlying host. Given that these particular pods require privileged access, this PR adds the DenyEscalatingExec admission controller flag which prevents attaching or exec'ing into privileged pods running in the cluster. More info here: https://kubernetes.io/docs/admin/admission-controllers/#denyescalatingexec

Before the flag is applied, exec'ing into a privileged pod succeeds:

kubectl exec  calico-node-06742  --namespace=kube-system 'ls'
Defaulting container name to calico-node.
Use 'kubectl describe pod/calico-node-06742' to see all of the containers in this pod.
bin
dev
etc
home
lib
lib64
licenses
media
mnt
proc
root
run
sbin
srv
startup.env
sys
tmp
usr
var

After the flag is applied, the same exec operation is denied:

kubectl exec  calico-node-06742  --namespace=kube-system 'ls'
Use 'kubectl describe pod/calico-node-06742' to see all of the containers in this pod.
Error from server (InternalError): Internal error occurred: [cannot exec into or attach to a privileged container, object does not implement the Object interfaces]
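
For reference, the controller is enabled by adding DenyEscalatingExec to the API server's --admission-control list; the exact list acs-engine templates in this PR may differ, so treat the line below as a sketch only:

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,DenyEscalatingExec,AlwaysPullImages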

@jackfrancis
Member

@slack @brendanburns @seanknox Is there any reason we wouldn't want to include DenyEscalatingExec as a required, static option for the API server?

@brendandburns
Member

Nope, makes sense to me.

jackfrancis previously approved these changes Dec 21, 2017
Member

@jackfrancis left a comment

lgtm

@seanknox
Contributor

Great addition. Thanks @pidah.

@jalberto

jalberto commented Apr 3, 2018

@jackfrancis this is a breaking compatibility change; now I am not able to run my CI system. How do I solve it without deploying a new cluster?

@jackfrancis
Member

@jalberto Current versions of acs-engine allow user-configurable --admission-control values via apiServerConfig. See here:

https://github.com/Azure/acs-engine/blob/master/docs/clusterdefinition.md#apiserverconfig

E.g., in your api model:

"kubernetesConfig": {
    "apiServerConfig": {
        "--admission-control": "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,AlwaysPullImages"
    }
}
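
Conversely, if you want to keep the check this PR introduces while customizing other controllers, DenyEscalatingExec can simply be appended to that same list. This is a sketch based on the example above, not an exact acs-engine default:

"kubernetesConfig": {
    "apiServerConfig": {
        "--admission-control": "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,DenyEscalatingExec,AlwaysPullImages"
    }
}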

@jalberto

jalberto commented Apr 5, 2018

@jackfrancis yes, I found it but:

  • how do I apply it to an existing cluster?
  • this is a breaking change that is not documented (two versions of acs-engine will create different configs)

@jackfrancis
Member

It is documented in the above link.

To change this manually on a cluster master node:

sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
sudo systemctl restart kubelet 
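
Inside that manifest, look for the apiserver's --admission-control argument and edit its comma-separated list (the exact manifest layout varies by acs-engine version, so this is only a sketch). Dropping DenyEscalatingExec, as in the api model example above, disables the check; the kubelet restart then picks up the edited static pod manifest:

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,AlwaysPullImages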

@jalberto

jalberto commented Apr 6, 2018

It is not documented as a breaking change. When a breaking change is introduced, it should be announced properly and an upgrade path should be provided.

@jackfrancis
Member

I agree with you. Again, thanks for your patience/stamina.


5 participants