CVE-2019-11245: v1.14.2, v1.13.6: container uid changes to root after first restart or if image is already pulled to the node #78308

Closed
sherbang opened this issue May 24, 2019 · 16 comments

@sherbang
commented May 24, 2019

CVSS:3.0/AV:L/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:L, 4.9 (medium)

In kubelet v1.13.6 and v1.14.2, containers for pods that do not specify an explicit runAsUser attempt to run as uid 0 (root) on container restart, or if the image was previously pulled to the node. If the pod specified mustRunAsNonRoot: true, the kubelet will refuse to start the container as root. If the pod did not specify mustRunAsNonRoot: true, the kubelet will run the container as uid 0.

CVE-2019-11245 will be fixed in the following Kubernetes releases:

  • v1.13.7
  • v1.14.3

Fixed by #78261 in master

Affected components:

  • Kubelet

Affected versions:

  • Kubelet v1.13.6
  • Kubelet v1.14.2

Affected configurations:

Clusters with:

  • Kubelet versions v1.13.6 or v1.14.2 (a quick version check follows this list)
  • Pods that do not specify an explicit runAsUser: <uid> or mustRunAsNonRoot:true
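
A quick way to check whether nodes in a cluster run an affected kubelet (a minimal sketch; node names, ages, and versions below are illustrative, the VERSION column is the kubelet version):

$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   10m   v1.14.2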

Impact:

If a pod is run without any user controls specified in the pod spec (like runAsUser: <uid> or mustRunAsNonRoot:true), a container in that pod that would normally run as the USER specified in the container image manifest can sometimes be run as root instead (on container restart, or if the image was previously pulled to the node).

  • pods that specify an explicit runAsUser are unaffected and continue to work properly
  • podSecurityPolicies that force a runAsUser setting are unaffected and continue to work properly (see the policy sketch after this list)
  • for pods that specify mustRunAsNonRoot:true, the kubelet will refuse to start the container as uid 0, which can affect availability
  • pods that do not specify runAsUser or mustRunAsNonRoot:true will run as uid 0 on restart or if the image was previously pulled to the node
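
As a sketch of the podSecurityPolicy case above (the policy name and uid range here are illustrative, not part of the original advisory), a policy that forces a runAsUser setting looks roughly like this:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: force-uid            # hypothetical policy name
spec:
  privileged: false
  runAsUser:
    rule: MustRunAs          # forces container uids into the range below
    ranges:
    - min: 1000
      max: 1000
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'

Pods admitted under such a policy get runAsUser defaulted into the range, so they are unaffected by this issue.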

Mitigations:

This section lists possible mitigations to use prior to upgrading.

  • Specify runAsUser directives in pods to control the uid a container runs as (see the pod sketch after this list)
  • Specify mustRunAsNonRoot:true directives in pods to prevent starting as root (note this means the attempt to start the container will fail on affected kubelet versions)
  • Downgrade kubelets to v1.14.1 or v1.13.5 as instructed by your Kubernetes distribution.
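
A minimal sketch of the pod-level mitigations (reusing the memcache uid 11211 from the report below; in the pod securityContext the relevant fields are runAsUser and runAsNonRoot, adjust the uid to one that exists in your image):

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  securityContext:
    runAsUser: 11211      # run containers in this pod as this uid
    runAsNonRoot: true    # kubelet refuses to start a container as uid 0
  containers:
  - name: test
    image: memcached:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash"]
    args: ["-c", "id -u; sleep 30"]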

original issue description follows

What happened:

When I launch a pod from a docker image that specifies a USER in the Dockerfile, the container only runs as that user on its first launch. After that the container runs as UID=0.
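
A quick way to see which user an image is configured to run as, outside Kubernetes (a sketch, assuming Docker is available locally; for the memcached image used below this should be the memcache user, uid 11211):

$ docker pull memcached:latest
$ docker inspect --format '{{.Config.User}}' memcached:latest
memcache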

What you expected to happen:
I expect the container to behave consistently on every launch, presumably running as the USER specified in the container image.

How to reproduce it (as minimally and precisely as possible):
Testing with minikube (when the same test is run specifying v1.14.1, kubectl logs test always returns 11211):

$ minikube start --kubernetes-version v1.14.2
😄  minikube v1.1.0 on linux (amd64)
💿  Downloading Minikube ISO ...
 131.28 MB / 131.28 MB [============================================] 100.00% 0s
🔥  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
🐳  Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
💾  Downloading kubeadm v1.14.2
💾  Downloading kubelet v1.14.2
🚜  Pulling images ...
🚀  Launching Kubernetes ... 
⌛  Verifying: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"
$ cat test.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: memcached:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash"]
    args:
    - -c
    - 'id -u; sleep 30'
$ kubectl apply -f test.yaml 
pod/test created

# as soon as pod starts
$ kubectl logs test
11211
# Wait 30 seconds for container to restart
$ kubectl logs test
0
# Try deleting/recreating the pod
$ kubectl delete pod test
pod "test" deleted
$ kubectl apply -f test.yaml 
pod/test created
$ kubectl logs test
0

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): I get the results I expect in v1.13.5 and v1.14.1. The problem exists in v1.13.6 and v1.14.2
  • Cloud provider or hardware configuration: minikube v1.1.0 using VirtualBox
  • OS (e.g: cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:
@sherbang

Author

commented May 24, 2019

I'm guessing this is @kubernetes/sig-apps-bugs ?

k8s-ci-robot added sig/apps and removed needs-sig labels May 24, 2019

@k8s-ci-robot

Contributor

commented May 24, 2019

@sherbang: Reiterating the mentions to trigger a notification:
@kubernetes/sig-apps-bugs

In response to this:

I'm guessing this is @kubernetes/sig-apps-bugs ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tstromberg

commented May 24, 2019

Also repeatable using the older minikube ISO that uses Docker 18.06.3-ce:

minikube start --iso-url="https://storage.googleapis.com/minikube/iso/minikube-v1.0.1.iso" --vm-driver=kvm2

Evidently this could not be replicated using CRI-O as the container runtime. It's unclear to me at this time whether this is a Kubernetes/Docker integration issue or a minikube environmental issue.

@tarioch

commented May 24, 2019

I'm encountering the same issue on a 1-node Kubernetes cluster set up with kubeadm.

@tarioch

commented May 24, 2019

After downgrading to 1.14.1 (kubelet and control plane) it works again, so it looks like this is not tied to minikube.

@tallclair

Member

commented May 24, 2019

Might be fixed by #78261

@kow3ns

Member

commented May 28, 2019

/sig node

@kow3ns

Member

commented May 28, 2019

/remove-sig apps

k8s-ci-robot removed the sig/apps label May 28, 2019

poikilotherm added a commit to IQSS/dataverse-kubernetes that referenced this issue May 29, 2019

poikilotherm added a commit to IQSS/dataverse-kubernetes that referenced this issue May 29, 2019

poikilotherm added a commit to IQSS/dataverse-kubernetes that referenced this issue May 29, 2019

liggitt changed the title from "Container uid changes after first restart" to "CVE-2019-11245: v1.14.2, container uid changes after first restart" May 30, 2019

liggitt changed the title from "CVE-2019-11245: v1.14.2, container uid changes after first restart" to "CVE-2019-11245: container uid changes after first restart in v1.14.2, v1.13.6" May 30, 2019

@poikilotherm

commented May 30, 2019

Please be aware that this is not only happening for restarting containers, but also when deploying two containers from the same image. An example can be found at kubernetes/minikube#4369, where I had one container for the app and the same image used for a job, resulting in the job container running as uid=0.

I haven't tested what happens when scaling via controller or manually adding another pod.
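
A minimal sketch of that second scenario (names here are hypothetical; the point is that the Job's container starts from an image that is already pulled to the node):

# long-running pod pulls the image first
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: memcached:latest
    imagePullPolicy: IfNotPresent
---
# job using the same image; on affected kubelets its container can run as uid 0
apiVersion: batch/v1
kind: Job
metadata:
  name: app-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: app-job
        image: memcached:latest
        imagePullPolicy: IfNotPresent
        command: ["id", "-u"]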

liggitt changed the title from "CVE-2019-11245: container uid changes after first restart in v1.14.2, v1.13.6" to "CVE-2019-11245: container uid changes to root after first restart in v1.14.2, v1.13.6" May 30, 2019

@liggitt

Member

commented May 30, 2019

hoisted comment up to description

liggitt changed the title from "CVE-2019-11245: container uid changes to root after first restart in v1.14.2, v1.13.6" to "CVE-2019-11245: v1.14.2, v1.13.6: container uid changes to root after first restart or if image is already pulled to the node" May 30, 2019

jackfrancis added a commit to jackfrancis/aks-engine that referenced this issue May 30, 2019

jackfrancis added a commit to Azure/aks-engine that referenced this issue May 31, 2019

@erkanerol

commented May 31, 2019

  • pods that do not specify runAsUser or mustRunAsNonRoot:true will run as uid 0 on restart or if the image was previously pulled to the node

I am not sure about this statement. Could you please check this issue?
kubernetes/website#14574

@rtheis

commented May 31, 2019

I have not been able to replicate this problem with 1.13.6 or 1.14.2 using containerd as the container runtime.

@Random-Liu

Member

commented May 31, 2019

Here is a fix for the root cause in dockershim: #78603.

@yujuhong

Member

commented May 31, 2019

The only known runtime/shim that is affected by this is dockershim.
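
A quick way to check which runtime (and therefore which shim) each node uses (a sketch; the docker:// prefix indicates dockershim, output values are illustrative):

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
minikube	docker://18.9.6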

hiddeco added a commit to weaveworks/flux that referenced this issue Jun 4, 2019

Configure security context memcached pod
To keep our example manifests in line with the recent changes in the
chart (#2107), and to ensure (minikube) users do not experience issues
with memcached failing to start, due to CVE-2019-11245.

Ref: kubernetes/kubernetes#78308

hiddeco added a commit to weaveworks/flux that referenced this issue Jun 4, 2019

Configure security context memcached pod
To keep our example manifests in line with the recent changes in the
chart (#2107), and to ensure (minikube) users do not experience issues
with memcached failing to start, due to CVE-2019-11245.

Ref: kubernetes/kubernetes#78308

tariq1890 referenced this issue Jun 4, 2019

Merged

chore(glide): bump kubernetes to 1.14.2 #5820

@liggitt

Member

commented Jun 6, 2019

v1.13.7 and v1.14.3 have been released

/close

@k8s-ci-robot

Contributor

commented Jun 6, 2019

@liggitt: Closing this issue.

In response to this:

v1.13.7 and v1.14.3 have been released

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
