Init container security context extended to containers #65314
Are you sure that "docker-elasticsearch-0" is the name of your pod? Your containers should have names like […]
Yes, that's the name of the pod. (This pod is part of a […])
I have the same issue; I'll post the details when I get back to my computer. In this case it is the Istio sidecar that uses a privileged init container — a pod without the sidecar doesn't have this issue.
Here are the details. I'm running Kubernetes 1.10.3. My deployment looks like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
labels:
app: echo
name: echo-deployment
namespace: default
spec:
selector:
matchLabels:
app: echo
template:
metadata:
labels:
app: echo
spec:
containers:
- image: XXX/echo:2.0
imagePullPolicy: Always
name: echo-container
ports:
- containerPort: 80
protocol: TCP
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: simcorp-registry
nodeSelector:
beta.kubernetes.io/os: linux
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30

Because of the automatic Istio sidecar injection, the resulting pod looks like this; as you can see, the init container is privileged:

apiVersion: v1
kind: Pod
metadata:
annotations:
sidecar.istio.io/status: '{"version":"55c9e544b52e1d4e45d18a58d0b34ba4b72531e45fb6d1572c77191422556ffc","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
labels:
app: echo
name: echo-deployment-57d5c75958-prtsp
namespace: default
spec:
containers:
- image: XXXX/echo:2.0
imagePullPolicy: Always
name: echo-container
ports:
- containerPort: 80
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-8x5xs
readOnly: true
- args:
- proxy
- sidecar
- --configPath
- /etc/istio/proxy
- --binaryPath
- /usr/local/bin/envoy
- --serviceCluster
- echo
- --drainDuration
- 45s
- --parentShutdownDuration
- 1m0s
- --discoveryAddress
- istio-pilot.istio-system:15007
- --discoveryRefreshDelay
- 10s
- --zipkinAddress
- zipkin.telemetry:9411
- --connectTimeout
- 10s
- --statsdUdpAddress
- istio-statsd-prom-bridge.istio-system:9125
- --proxyAdminPort
- "15000"
- --controlPlaneAuthPolicy
- NONE
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: ISTIO_META_INTERCEPTION_MODE
value: REDIRECT
image: docker.io/istio/proxyv2:0.8.0
imagePullPolicy: IfNotPresent
name: istio-proxy
resources:
requests:
cpu: 100m
memory: 128Mi
securityContext:
privileged: false
readOnlyRootFilesystem: true
runAsUser: 1337
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/certs/
name: istio-certs
readOnly: true
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: simcorp-registry
initContainers:
- args:
- -p
- "15001"
- -u
- "1337"
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- 80,
- -d
- ""
image: docker.io/istio/proxy_init:0.8.0
imagePullPolicy: IfNotPresent
name: istio-init
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
nodeName: k8s-linuxpool-30369698-2
nodeSelector:
beta.kubernetes.io/os: linux
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-8x5xs
secret:
defaultMode: 420
secretName: default-token-8x5xs
- emptyDir:
medium: Memory
name: istio-envoy
- name: istio-certs
secret:
defaultMode: 420
optional: true
secretName: istio.default

When I try to exec into the pod:
Also if I try to specify the specific container:
/sig auth @kubernetes/sig-auth-bugs
@tallclair do you know if anything set up by an init container can bleed into something the main containers have access to? Things mounted into shared volumes, setcap changes on executables, etc.
I have the same issue with Kubernetes v1.11.1 when I try to increase the system vm.max_map_count value for Cassandra using an init container. I also tried setting privileged: false in the security context of the Cassandra container, but it doesn't override the value set in the init container definition.
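For reference, a minimal sketch of such an init container (names and values illustrative, not taken from the commenter's manifest) — it has to run privileged to write the node-level sysctl, which is exactly what triggers the pod-wide exec denial discussed in this issue:

```yaml
# Hypothetical init container raising vm.max_map_count for Cassandra;
# it must run privileged to set the node-level sysctl, which then
# blocks exec into the whole pod when DenyEscalatingExec is enabled.
initContainers:
- name: sysctl-init
  image: busybox
  command: ["sysctl", "-w", "vm.max_map_count=1048575"]
  securityContext:
    privileged: true
```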
It appears this is a common issue. I'm having the same issue with the securityContext from the initContainer being applied to the container itself. Here are the tests I performed:
It seems the issue isn't that the securityContext is inherited from the initContainer, but rather that the presence of the initContainer imposes some securityContext default that cannot be changed.
This is closely related to #55435. Specifically, what is happening is that the DenyEscalatingExec plugin doesn't look at the specific container being exec'd into, but instead looks at all containers in the pod (see plugin/pkg/admission/exec/admission.go, lines 132 to 151 at commit 03df9aa).
I could see an argument for getting the specific container from the ExecRequest, and only checking the privileged status of that container. OTOH, in a lot of cases we consider the pod to be the security boundary, so you may not want to allow exec'ing into an unprivileged sidecar of a privileged workload, as it could lead to a privilege escalation depending on how much they share. That said, I don't think the DenyEscalatingExec plugin is very well maintained, and there are some issues with the way its policy is expressed (or the lack thereof). I'm working on a proposal to overhaul PodSecurityPolicy, and I'll try to incorporate this use-case.
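The pod-wide behavior described above can be sketched in Go. This is a simplified stand-in: the real plugin walks corev1.Pod container specs, and the `Container` struct here is a hypothetical pared-down version for illustration.

```go
package main

import "fmt"

// Container is a hypothetical, pared-down stand-in for the Kubernetes
// container spec; the real plugin inspects v1.SecurityContext fields.
type Container struct {
	Name       string
	Privileged bool
}

// denyEscalatingExec mirrors the plugin's pod-wide check: the exec is
// denied if ANY container in the pod is privileged, regardless of which
// container the exec actually targets.
func denyEscalatingExec(containers []Container) bool {
	for _, c := range containers {
		if c.Privileged {
			return true // deny
		}
	}
	return false // allow
}

// perContainerCheck is the alternative discussed here: only the exec
// target's own privileged status is consulted.
func perContainerCheck(containers []Container, target string) bool {
	for _, c := range containers {
		if c.Name == target {
			return c.Privileged
		}
	}
	return false
}

func main() {
	pod := []Container{
		{Name: "istio-init", Privileged: true}, // privileged init container
		{Name: "echo-container", Privileged: false},
	}
	fmt.Println(denyEscalatingExec(pod))                  // true: exec into any container is denied
	fmt.Println(perContainerCheck(pod, "echo-container")) // false: the target itself is unprivileged
}
```

This makes the trade-off concrete: the pod-wide check treats the pod as the security boundary, while the per-container check would unblock the reporters here at the cost of allowing exec next to a privileged neighbor.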
Related? ... #65408
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Since we do not know what the initContainer did while privileged, we cannot safely assume that nothing done with privileged abilities was made available to the normal containers in the pod. Since the DenyEscalatingExec admission plugin is optional (and not on by default), I don't anticipate relaxing this restriction. If you want to apply different exec policies, a validating admission webhook can be used instead. /close
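As a rough illustration of the suggested webhook alternative, here is a minimal sketch of a handler that gates exec on the target container only. The request shape is an assumption for brevity: a real webhook decodes an admissionv1.AdmissionReview for the pods/exec subresource and would look the pod's containers up via the API rather than a hard-coded map.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// execRequest is a hypothetical, simplified request body; a real webhook
// receives an AdmissionReview wrapping a PodExecOptions object.
type execRequest struct {
	Container string `json:"container"`
}

// privilegedContainers would normally be derived from the target pod's
// spec via the Kubernetes API; hard-coded here for illustration.
var privilegedContainers = map[string]bool{"istio-init": true}

// allowExec denies exec only when the specific target container is
// privileged, unlike the pod-wide DenyEscalatingExec behavior.
func allowExec(target string) bool {
	return !privilegedContainers[target]
}

func validate(w http.ResponseWriter, r *http.Request) {
	var req execRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	fmt.Fprintf(w, `{"allowed": %t}`, allowExec(req.Container))
}

func main() {
	http.HandleFunc("/validate", validate)
	// http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil) // admission webhooks must serve TLS
}
```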
@liggitt: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Created a deployment with an initContainer which has a privileged security context, and this privileged context extended to the container of the deployment.

What you expected to happen:
That the security context defined for initContainers does not extend to containers.
How to reproduce it (as minimally and precisely as possible):
Deployment
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version): 1.10.2