CVE-2019-11245: v1.14.2, v1.13.6: container uid changes to root after first restart or if image is already pulled to the node #78308
CVSS:3.0/AV:L/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:L, 4.9 (medium)
In kubelet v1.13.6 and v1.14.2, containers for pods that do not specify an explicit `runAsUser` attempt to run as uid 0 (root) on container restart, or if the image was previously pulled to the node.
CVE-2019-11245 will be fixed in the following Kubernetes releases:

- v1.13.7
- v1.14.3

Fixed by #78261 in master.
This section lists possible mitigations to use prior to upgrading.

- If a pod is run without any user controls specified in the pod spec (like `runAsUser` in the pod's `securityContext`), the `USER` set in the container image may not be honored after a container restart on affected kubelets. Explicitly setting `runAsUser` in the pod spec ensures the container keeps running as the intended uid; see the sketch after this list.
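A minimal sketch of that mitigation, with a placeholder pod name, image, and uid (none of these are from the report): pin the uid at the pod level so the container does not depend on the image's `USER` directive.

```yaml
# Hypothetical pod spec illustrating the mitigation: the uid is pinned in
# the pod's securityContext instead of relying on the image's USER directive.
apiVersion: v1
kind: Pod
metadata:
  name: runasuser-example          # placeholder name
spec:
  securityContext:
    runAsUser: 1000                # uid the containers should run as
    runAsGroup: 1000
  containers:
  - name: app
    image: example.com/myapp:1.0   # placeholder image
    command: ["sleep", "infinity"]
```

With `runAsUser` set explicitly, the uid no longer depends on how the kubelet resolves the image's user metadata across restarts.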
original issue description follows
When I launch a pod from a Docker image that specifies a `USER` in the Dockerfile, the container only runs as that user on its first launch. After that, the container runs as uid=0.
What you expected to happen:

The container should run as the `USER` specified in the Dockerfile on every launch, not only the first.
How to reproduce it (as minimally and precisely as possible):
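A minimal repro sketch under assumptions not taken from the original report: the image name below is a placeholder, assumed to declare a non-root `USER` in its Dockerfile and to ship a POSIX shell. The container prints its uid and then exits so the kubelet restarts it, letting you compare the uid across runs.

```yaml
# Hypothetical reproduction pod: the image is assumed to set a non-root
# user via a Dockerfile USER directive (e.g. "USER 1000").
apiVersion: v1
kind: Pod
metadata:
  name: uid-repro                             # placeholder name
spec:
  restartPolicy: Always
  containers:
  - name: app
    image: example.com/nonroot-image:latest   # placeholder image with USER set
    command: ["sh", "-c", "id; sleep 30"]     # prints uid, then exits to force a restart
# On an affected kubelet (e.g. v1.14.2 with the Docker runtime):
#   kubectl apply -f uid-repro.yaml
#   kubectl logs uid-repro   # first run: reports the image's USER
#   (wait ~30s for the container to exit and be restarted)
#   kubectl logs uid-repro   # restarted run: reports uid=0(root) on affected kubelets
```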
Anything else we need to know?:
Also repeatable using the older minikube ISO that uses Docker 18.06.3-ce.

Evidently this could not be replicated using CRI-O as the container runtime. It's unclear to me at this time whether this is a Kubernetes/Docker integration issue or a minikube environment issue.
Please be aware that this is not only happening for restarting containers, but also when deploying two containers from the same image. An example can be found at kubernetes/minikube#4369, where I had one container for the app and the same image used for a job, resulting in the job container running as uid=0; a sketch of that kind of setup follows below.
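A hedged sketch of that shape of setup, with placeholder names and a placeholder image rather than the actual manifests from kubernetes/minikube#4369: a Deployment and a Job referencing the same image, where the Job's container can come up as uid 0 on affected kubelets because the image is already pulled to the node.

```yaml
# Hypothetical manifests: the same image (assumed to set USER in its
# Dockerfile) is used by a Deployment and by a Job on the same node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                            # placeholder
spec:
  replicas: 1
  selector:
    matchLabels: {app: app}
  template:
    metadata:
      labels: {app: app}
    spec:
      containers:
      - name: app
        image: example.com/myapp:1.0   # placeholder image with USER set
---
apiVersion: batch/v1
kind: Job
metadata:
  name: app-job                        # placeholder
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: job
        image: example.com/myapp:1.0   # same image, already pulled to the node
        command: ["sh", "-c", "id"]    # may report uid=0(root) on affected kubelets
```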
I haven't tested what happens when scaling via a controller or manually adding another pod.