
Pod Security Policy does not prevent pods from running as root #53063

Closed
definitelyuncertain opened this issue Sep 26, 2017 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/auth Categorizes an issue or PR as relevant to SIG Auth.

Comments

@definitelyuncertain

@kubernetes/sig-api-machinery-bugs

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
When I implement the basic PSP given here, and then run a container that explicitly asks for root privileges as given in the same link, it executes successfully. For reference, the PSP and the container are as follows:

apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-root
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'

Container:

kubectl run --image=bitnami/mariadb:10.1.24-r2 mymariadb --port=3306 --env="MARIADB_ROOT_PASSWORD=gue55m3"

What you expected to happen:
Container fails to run with the error

container has runAsNonRoot and image will run as root

As far as I can tell, PSP is available by default and I shouldn't have to do anything specific to enable it.

How to reproduce it (as minimally and precisely as possible):
On a cluster set up using kubeadm, join a node and apply the aforementioned PSP and then run a container that explicitly asks for root.

Anything else we need to know?:
I have also tested this with my own pods, and the result is the same: the pods are scheduled and run as root.

However, if I specify a UID for runAsUser in the pod description, the pod does run as that user.
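
(For reference, the working case looks like the sketch below; the pod name and UID are illustrative, not from the original report.)

```yaml
# Illustrative pod that sets an explicit non-root UID
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo
spec:
  securityContext:
    runAsUser: 1001        # pod-level UID; containers inherit it unless they override it
  containers:
  - name: app
    image: bitnami/mariadb:10.1.24-r2
```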

Also, kubectl get psp shows the PSP I have applied with correct information.

Usual Kubernetes functionality seems to be working fine, but I noticed that /etc/kubernetes/manifests/kube-apiserver.json is missing on the master.

Environment:

  • Kubernetes version (use kubectl version):
    1.7.5 on both master and worker nodes.

  • Cloud provider or hardware configuration:
    One master node (VM on laptop), one worker node (another physical machine).

  • OS (e.g. from /etc/os-release):
    Ubuntu 16.04.2 on master, Linux Mint 18.1 on worker.

  • Kernel (e.g. uname -a):
    4.8 on master, 4.4 on worker

  • Install tools:
    kubeadm to initialize

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Sep 26, 2017
@k8s-ci-robot
Contributor

@definitelyuncertain: Reiterating the mentions to trigger a notification:
@kubernetes/sig-api-machinery-bugs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ericchiang ericchiang added the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label Sep 26, 2017
@liggitt
Member

liggitt commented Sep 26, 2017

some questions:

  • are you enabling the PodSecurityPolicy admission plugin?
  • what does the pod manifest in the API look like when the pod is running?

@liggitt liggitt removed the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Sep 26, 2017
@definitelyuncertain
Author

are you enabling the PodSecurityPolicy admission plugin?

I looked around to see if I had to do anything and based on this and this, I tried

export KUBE_RUNTIME_CONFIG="extensions/v1beta1/podsecuritypolicy=true"

I guessed that if it needed to be enabled at all, this would do it, but it still doesn't work. Should I have enabled it elsewhere? If so, would you mind explaining how?

what does the pod manifest in the API look like when the pod is running?

Here's the manifest I grabbed by running kubectl edit pods <podname>

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"mymariadb-4112233114","uid":"7ed9a2b2-a2e1-11e7-ba31-080027b3a996","apiVersion":"extensions","resourceVersion":"12655"}}
  creationTimestamp: 2017-09-26T17:38:23Z
  generateName: mymariadb-4112233114-
  labels:
    pod-template-hash: "4112233114"
    run: mymariadb
  name: mymariadb-4112233114-78cgc
  namespace: default
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: mymariadb-4112233114
    uid: 7ed9a2b2-a2e1-11e7-ba31-080027b3a996
  resourceVersion: "12672"
  selfLink: /api/v1/namespaces/default/pods/mymariadb-4112233114-78cgc
  uid: 7ee300af-a2e1-11e7-ba31-080027b3a996
spec:
  containers:
  - env:
    - name: MARIADB_ROOT_PASSWORD
      value: gue55m3
    image: bitnami/mariadb:10.1.24-r2
    imagePullPolicy: IfNotPresent
    name: mymariadb
    ports:
    - containerPort: 3306
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-2fqrk
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: deep-thought
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.alpha.kubernetes.io/notReady
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.alpha.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-2fqrk
    secret:
      defaultMode: 420
      secretName: default-token-2fqrk
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2017-09-26T17:38:23Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2017-09-26T17:38:24Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2017-09-26T17:38:23Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://2552c0228e46f84893e2fe85e1137669b08ed9a3a67f89317af9b60ad7f31ba2
    image: bitnami/mariadb:10.1.24-r2
    imageID: docker://sha256:a18a3dacc761d4eb2450351b842411b27b8313c12089940257c8b16e1286d7d2
    lastState: {}
    name: mymariadb
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2017-09-26T17:38:24Z
  hostIP: 192.168.1.100
  phase: Running
  podIP: 10.44.0.9
  qosClass: BestEffort
  startTime: 2017-09-26T17:38:23Z

The securityContext field is empty, which is to be expected.

@liggitt
Member

liggitt commented Sep 26, 2017

I guessed that if it needed to be enabled at all, this would do it, but it still doesn't work. Should I have enabled it elsewhere? If so, would you mind explaining how?

That enables the API objects, but does not enable the admission plugin. You must set the apiserver --admission-control=...,PodSecurityPolicy,... flag.
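
On a kubeadm cluster that flag lives in the apiserver static-pod manifest; a minimal sketch, assuming a YAML manifest at the usual kubeadm path (the exact path, format, and plugin list vary by kubeadm version):

```yaml
# Sketch of /etc/kubernetes/manifests/kube-apiserver.yaml (path/format vary by version)
spec:
  containers:
  - command:
    - kube-apiserver
    - --admission-control=NamespaceLifecycle,ServiceAccount,PodSecurityPolicy,ResourceQuota
    # ...other flags unchanged; the kubelet restarts the apiserver when this file changes
```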

@definitelyuncertain
Author

If I understand correctly, I'm supposed to set this in /etc/kubernetes/manifests/kube-apiserver.json, but as I mentioned, that file isn't present. Is there any other way I can set it? kubeadm apparently doesn't accept this as an argument either.

@liggitt
Member

liggitt commented Sep 26, 2017

in the apiserver manifest kubeadm generates, there should be an arg like --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota

modifying that to include PodSecurityPolicy (probably just before ResourceQuota) should enable it.
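
The suggested edit can be sketched as a one-liner; a hedged example, assuming the default kubeadm flag value quoted above:

```shell
# Insert PodSecurityPolicy just before ResourceQuota in the admission-control flag
flag='--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota'
echo "$flag" | sed 's/,ResourceQuota/,PodSecurityPolicy,ResourceQuota/'
```

After saving the edited manifest, the kubelet restarts the apiserver static pod automatically; no separate restart command is needed.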
