Change containerSecurityContext rendering and add docs #6687
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

I am not sure what these changes introduce. `with` is truthy according to the same rules as `if`, so how would a falsy value (…) change anything? A better way would be to state it unconditionally with a sane default value.
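To illustrate the `with` semantics being discussed — a minimal, hypothetical template sketch (the value name `podSecurityContext` is illustrative, not the chart's actual key):

```yaml
# values.yaml (hypothetical)
podSecurityContext: null

# templates/deployment.yaml (sketch)
# Like `if`, `with` skips its block when the value is null or empty,
# so a null override omits the whole securityContext section:
{{- with .Values.podSecurityContext }}
      securityContext:
        {{- toYaml . | nindent 8 }}
{{- end }}
```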
/assign

I still think that the issue is inside their cluster and not our Helm chart. I'll test tomorrow on a clean v1.21 k8s cluster to confirm. This change does not actually change anything for us, as @desaintmartin said.
Signed-off-by: Jan Lauber <jan.lauber@protonmail.ch>
Hold until I test this on a clean cluster.

/hold

@floreks Thanks for your work!

OK, understood. Your comment is clear, and the solution without it may not be immediately obvious (as you actually need to ADD a line in your custom values to unset a parameter in the default values). Could you bump the patch version (according to semver) in Chart.yaml and sign your commit for the CLA?
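For reference, the requested patch bump in Chart.yaml could look something like this (fields abbreviated; 5.1.1 is the chart version visible in the rendered labels below, and the bumped number is illustrative):

```yaml
# Chart.yaml (abbreviated sketch)
apiVersion: v2
name: kubernetes-dashboard
version: 5.1.2    # patch bump from 5.1.1: backwards-compatible fix per semver
appVersion: 2.4.0
```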
Signed-off-by: Jan Lauber <jan.lauber@protonmail.ch>
@desaintmartin

One last thing: could you edit the PR name?

Something like that?

Yes, thanks!

probably

I let @floreks remove the lock he put, if he agrees. :)
Environment
Installation
Default Values

Scenario 1 (default values)

Result: Deployment spec contains the below security context spec:

securityContext:
  seccompProfile:
    type: RuntimeDefault
containers:
  securityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsGroup: 2001
    runAsUser: 1001

Pod spec contains the below security context spec:

securityContext:
  seccompProfile:
    type: RuntimeDefault
containers:
  securityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsGroup: 2001
    runAsUser: 1001

Scenario 2 (security context null)

In this scenario, we'll set `securityContext: null`.

Result: Deployment spec contains the below security context spec:

containers:
  securityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsGroup: 2001
    runAsUser: 1001

Pod spec contains the below security context spec:

containers:
  securityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsGroup: 2001
    runAsUser: 1001

Scenario 3 (security context null + container security context null)

In this scenario, we'll also set `containerSecurityContext: null`.

Result: Both deployment spec and pod spec do not contain any security context.
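The custom values files driving these scenarios can be sketched as follows (using the chart's top-level keys):

```yaml
# Scenario 1: empty custom values file — chart defaults apply

# Scenario 2: drop the pod-level security context only
securityContext: null

# Scenario 3: additionally drop the container-level security context
# securityContext: null
# containerSecurityContext: null
```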
Unfortunately, it appears that security context configuration, in general, is scattered around the values file:

- General context: dashboard/aio/deploy/helm-chart/kubernetes-dashboard/values.yaml, lines 73 to 76 (at 1148f7b)
- Metrics scraper context: dashboard/aio/deploy/helm-chart/kubernetes-dashboard/values.yaml, lines 239 to 244 (at 1148f7b)
- Dashboard context: dashboard/aio/deploy/helm-chart/kubernetes-dashboard/values.yaml, lines 312 to 317 (at 1148f7b)
@janlauber @desaintmartin it is actually a mistake on our side, as it is indeed misleading how to control this configuration. It should be refactored and simplified.
Indeed, at least what we can do is put the …

Regarding …
Couldn't we just use a single configuration for everything? We anyway want to use the same security context for both dashboard and scraper.

@floreks I think this would make sense to use a single configuration for both.

I think you can start working on that if you want 🙂

@floreks Adding a …
I am afraid it would add complexity to the chart. What could be done is something simple (but well documented in the values.yaml):

…

What do you think?
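As a sketch of what a single shared configuration could look like (the structure here is illustrative, not the final design): one pod-level and one container-level block, consumed by both the dashboard and metrics-scraper templates.

```yaml
# values.yaml (illustrative sketch of a unified configuration)
securityContext:
  seccompProfile:
    type: RuntimeDefault
containerSecurityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsUser: 1001
  runAsGroup: 2001
```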
Signed-off-by: Jan Lauber <jan.lauber@protonmail.ch>
@desaintmartin & @floreks

Tested with `helm template -f values-test.yaml --dry-run .` against the following scenarios.

1. Default values in custom values-test.yaml:

# empty

Output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
name: RELEASE-NAME-kubernetes-dashboard
annotations:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.1.1
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/component: kubernetes-dashboard
template:
metadata:
annotations:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.1.1
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: RELEASE-NAME-kubernetes-dashboard
containers:
- name: kubernetes-dashboard
image: "kubernetesui/dashboard:v2.4.0"
imagePullPolicy: IfNotPresent
args:
- --namespace=default
- --auto-generate-certificates
- --metrics-provider=none
ports:
- name: https
containerPort: 8443
protocol: TCP
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
resources:
limits:
cpu: 2
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsGroup: 2001
runAsUser: 1001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: RELEASE-NAME-kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}

2. Default values in custom values-test.yaml with metrics scraper enabled:

metricsScraper:
enabled: true

Output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
name: RELEASE-NAME-kubernetes-dashboard
annotations:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.1.1
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/component: kubernetes-dashboard
template:
metadata:
annotations:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.1.1
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: RELEASE-NAME-kubernetes-dashboard
containers:
- name: kubernetes-dashboard
image: "kubernetesui/dashboard:v2.4.0"
imagePullPolicy: IfNotPresent
args:
- --namespace=default
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
ports:
- name: https
containerPort: 8443
protocol: TCP
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
resources:
limits:
cpu: 2
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsGroup: 2001
runAsUser: 1001
- name: dashboard-metrics-scraper
image: "kubernetesui/metrics-scraper:v1.0.7"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsGroup: 2001
runAsUser: 1001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: RELEASE-NAME-kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}

3. Custom metrics scraper values in custom values-test.yaml:

metricsScraper:
enabled: true
containerSecurityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1002 # value for separation
runAsGroup: 2002 # value for separation

Output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
name: RELEASE-NAME-kubernetes-dashboard
annotations:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.1.1
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/component: kubernetes-dashboard
template:
metadata:
annotations:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.1.1
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: RELEASE-NAME-kubernetes-dashboard
containers:
- name: kubernetes-dashboard
image: "kubernetesui/dashboard:v2.4.0"
imagePullPolicy: IfNotPresent
args:
- --namespace=default
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
ports:
- name: https
containerPort: 8443
protocol: TCP
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
resources:
limits:
cpu: 2
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsGroup: 2001
runAsUser: 1001
- name: dashboard-metrics-scraper
image: "kubernetesui/metrics-scraper:v1.0.7"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsGroup: 2002
runAsUser: 1002
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: RELEASE-NAME-kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}

4. Disable globally securityContext and containerSecurityContext in custom values-test.yaml:

securityContext: null
containerSecurityContext: null

Output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
name: RELEASE-NAME-kubernetes-dashboard
annotations:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.1.1
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/component: kubernetes-dashboard
template:
metadata:
annotations:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.1.1
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
spec:
serviceAccountName: RELEASE-NAME-kubernetes-dashboard
containers:
- name: kubernetes-dashboard
image: "kubernetesui/dashboard:v2.4.0"
imagePullPolicy: IfNotPresent
args:
- --namespace=default
- --auto-generate-certificates
- --metrics-provider=none
ports:
- name: https
containerPort: 8443
protocol: TCP
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
resources:
limits:
cpu: 2
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: RELEASE-NAME-kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}

5. Disable globally securityContext and containerSecurityContext but set metrics scraper containerSecurityContext in custom values-test.yaml:

securityContext: null
containerSecurityContext: null
metricsScraper:
enabled: true
containerSecurityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1002 # value for separation
runAsGroup: 2002 # value for separation

Output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
name: RELEASE-NAME-kubernetes-dashboard
annotations:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.1.1
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/component: kubernetes-dashboard
template:
metadata:
annotations:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.1.1
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
spec:
serviceAccountName: RELEASE-NAME-kubernetes-dashboard
containers:
- name: kubernetes-dashboard
image: "kubernetesui/dashboard:v2.4.0"
imagePullPolicy: IfNotPresent
args:
- --namespace=default
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
ports:
- name: https
containerPort: 8443
protocol: TCP
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
resources:
limits:
cpu: 2
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
- name: dashboard-metrics-scraper
image: "kubernetesui/metrics-scraper:v1.0.7"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsGroup: 2002
runAsUser: 1002
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: RELEASE-NAME-kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}

I hope this implementation is ok for you guys :)
If there are no more concerns from @desaintmartin, then this LGTM and we can merge.

/lgtm

[APPROVALNOTIFIER] This PR is APPROVED.

This pull-request has been approved by: desaintmartin, janlauber. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
@desaintmartin & @floreks Greez
Facing the discussion of: …

This PR adds the ability to set

securityContext: null

in your custom values.yaml, in case you want to deploy the dashboard Deployment without a securityContext in your cluster. When this line is not added to your custom values.yaml file, the chart renders the securityContext values as defined in the default values.yaml file.
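A minimal sketch of template logic compatible with this behavior (illustrative; the chart's actual template may differ):

```yaml
# templates/deployment.yaml (sketch)
# `if` treats null as false, so `securityContext: null` in a custom
# values file overrides the default map and omits the block entirely.
{{- if .Values.securityContext }}
      securityContext:
        {{- toYaml .Values.securityContext | nindent 8 }}
{{- end }}
```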
Signed-off-by: Jan Lauber jan.lauber@protonmail.ch