
PodSecurityPolicy chooses the wrong one, making the pod unable to start #71787

Closed
nmiculinic opened this issue Dec 6, 2018 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one.

Comments

@nmiculinic
Contributor

What happened:
I start a deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  strategy: 
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx

The controllers do their thing and I end up with the following pod:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/nginx: runtime/default
    kubernetes.io/psp: restricted
    seccomp.security.alpha.kubernetes.io/pod: docker/default
  creationTimestamp: null
  generateName: nginx-55bd7c9fd-
  labels:
    app: nginx
    pod-template-hash: 55bd7c9fd
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-55bd7c9fd
    uid: 16c05d60-f94a-11e8-9110-023487ab05c0
  selfLink: /api/v1/namespaces/default/pods/nginx-55bd7c9fd-7kgt5
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      procMount: Default
      runAsNonRoot: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  nodeName: ip-10-88-25-201.eu-central-1.compute.internal
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1
    supplementalGroups:
    - 1
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
status:
  phase: Pending
  qosClass: BestEffort

and since nginx starts as the root user, it fails. Horribly.
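
The restricted policy itself isn't shown here, but judging from what was injected into the pod above (runAsNonRoot, dropped capabilities, default seccomp/AppArmor profiles, fsGroup/supplementalGroups of 1), it is presumably something along these lines (a reconstruction, not the actual object from the cluster):

# Hypothetical sketch of the 'restricted' PSP named in the kubernetes.io/psp
# annotation above; reconstructed from the mutations visible in the pod spec.
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: MustRunAsNonRoot   # this is what injects runAsNonRoot: true
  fsGroup:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  supplementalGroups:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  seLinux:
    rule: RunAsAny
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim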

However, there is another security policy, semi-restricted:

apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"PodSecurityPolicy","metadata":{"annotations":{"apparmor.security.beta.kubernetes.io/allowedProfileNames":"runtime/default","apparmor.security.beta.kubernetes.io/defaultProfileName":"runtime/default","seccomp.security.alpha.kubernetes.io/allowedProfileNames":"docker/default","seccomp.security.alpha.kubernetes.io/defaultProfileName":"docker/default"},"labels":{"kubernetes.io/cluster-service":"true"},"name":"semi-restricted"},"spec":{"allowPrivilegeEscalation":false,"allowedCapabilities":["CAP_NET_BIND_SERVICE"],"forbiddenSysctls":["*"],"fsGroup":{"rule":"RunAsAny"},"runAsUser":{"rule":"RunAsAny"},"seLinux":{"rule":"RunAsAny"},"supplementalGroups":{"rule":"RunAsAny"},"volumes":["configMap","emptyDir","projected","secret","downwardAPI","persistentVolumeClaim"]}}
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
  creationTimestamp: null
  labels:
    kubernetes.io/cluster-service: "true"
  name: semi-restricted
  selfLink: /apis/extensions/v1beta1/podsecuritypolicies/semi-restricted
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - CAP_NET_BIND_SERVICE
  forbiddenSysctls:
  - '*'
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim

which ought to be selected.

The service account has permission to use both policies:

 $ kubectl auth can-i --as system:serviceaccount:default:default use podsecuritypolicy/restricted
yes
 $ kubectl auth can-i --as system:serviceaccount:default:default use podsecuritypolicy/semi-restricted
yes
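
(For context, the use permission is typically granted through RBAC roughly like this; the role and binding names below are hypothetical placeholders, and the actual objects in the cluster aren't shown in this issue.)

# Hypothetical RBAC sketch granting the default service account 'use' on both
# policies; only the verbs/resources matter, the names are made up.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-users
rules:
- apiGroups:
  - policy
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - restricted
  - semi-restricted
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-users
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-users
subjects:
- kind: ServiceAccount
  name: default
  namespace: default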

What you expected to happen:
The pod wouldn't have its security context modified as much, and the admission controller would pick the semi-restricted security policy.

How to reproduce it (as minimally and precisely as possible):
It's described above.

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-28T15:20:58Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:46:57Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Ubuntu 18.04 LTS
  • Kernel (e.g. uname -a): Ubuntu 18.04 LTS, amd64
  • Install tools: kubespray
  • Others:

/kind bug

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Dec 6, 2018
@k8s-ci-robot
Contributor

@nmiculinic: There are no sig labels on this issue. Please add a sig label by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Dec 6, 2018
@nmiculinic
Contributor Author

There's also a workaround: if I rename my semi-restricted policy to, for example, 10-semi-restricted, it is chosen over the restricted one... not pretty though.

@liggitt
Member

liggitt commented Dec 6, 2018

The order in which policies are chosen is described in https://kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-order:

  1. If any policies successfully validate the pod without altering it, they are used.
  2. If it is a pod creation request, then the first valid policy in alphabetical order is used.
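
In this case neither policy validates the pod without mutating it (each injects default seccomp/AppArmor annotations at minimum), so rule 2 applies and the first valid policy in alphabetical order wins. Roughly, assuming only these two policies are usable by the service account:

 # Rule 1 never fires because every usable policy mutates the pod, so rule 2
 # picks the first valid name alphabetically: 'restricted' < 'semi-restricted'.
 $ kubectl get psp --no-headers -o custom-columns=NAME:.metadata.name | sort
restricted
semi-restricted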

/close

@k8s-ci-robot
Contributor

@liggitt: Closing this issue.


@nmiculinic
Contributor Author

  1. If any policies successfully validate the pod without altering it, they are used.

The restricted policy shouldn't have validated in the first place, since the pod has containers running as root, which is in contradiction with the pod security policy.

/open

@liggitt
Member

liggitt commented Dec 6, 2018

The restricted policy shouldn't have validated in the first place, since the pod has containers running as root, which is in contradiction with the pod security policy.

PodSecurityPolicy uses the information in the pod spec, and the pod spec did not indicate the container required running as root.

@nmiculinic
Contributor Author

So if I set pod.spec.securityContext.runAsUser to 0, the PSP will do the correct thing?

@liggitt
Member

liggitt commented Dec 6, 2018

So if I set pod.spec.securityContext.runAsUser to 0, the PSP will do the correct thing?

Yes
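
For completeness, a minimal sketch of that fix applied to the deployment from the top of the issue; only the pod-level securityContext is new. With runAsUser: 0 declared, the (presumably MustRunAsNonRoot) restricted policy can no longer validate the pod, so semi-restricted gets selected:

# Hypothetical fix sketch: declare the root UID explicitly in the pod template so
# that 'restricted' is no longer a valid match and 'semi-restricted'
# (runAsUser: RunAsAny) is used instead.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      securityContext:
        runAsUser: 0        # the stock nginx image starts as root
      containers:
      - image: nginx
        name: nginx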
