
Change containerSecurityContext rendering and add docs #6687

Merged
merged 3 commits into from
Jan 14, 2022
Conversation

janlauber
Contributor

Facing the discussion of:

It adds the ability to set `securityContext: null` in your custom values.yaml, for the case where you need to deploy the dashboard Deployment without a securityContext in your cluster.
When this line is not added to your custom values.yaml, the securityContext is rendered from the defaults in the chart's values.yaml.
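For example, a minimal custom values.yaml that unsets the chart default could look like this:

```yaml
# Custom values.yaml: a null value overrides (unsets) the
# securityContext defined in the chart's default values.yaml.
securityContext: null
```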

Signed-off-by: Jan Lauber jan.lauber@protonmail.ch

@k8s-ci-robot
Contributor

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Jan 12, 2022
@desaintmartin
Member

I am not sure what these changes introduce.

`with` is truthy according to the same rules as `if`. So why would a falsy value (null in your case) be interpreted any better by `if` than by `with`?

A better way would be to render it unconditionally with a sane default value.
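To illustrate the point (a sketch, not the chart's actual template): `with` and `if` short-circuit identically on null or empty values, so both variants below render nothing when securityContext is null:

```yaml
{{- if .Values.securityContext }}
securityContext:
  {{- toYaml .Values.securityContext | nindent 8 }}
{{- end }}

{{- with .Values.securityContext }}
securityContext:
  {{- toYaml . | nindent 8 }}
{{- end }}
```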

@desaintmartin
Member

/assign

@floreks
Member

floreks commented Jan 12, 2022

I still think that the issue is inside their cluster and not our helm chart. I'll test tomorrow on a clean v1.21 k8s cluster to confirm. This change does not actually change anything for us as @desaintmartin said.

Signed-off-by: Jan Lauber <jan.lauber@protonmail.ch>
@floreks
Member

floreks commented Jan 12, 2022

Hold until I test this on a clean cluster.

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 12, 2022
@janlauber
Contributor Author

janlauber commented Jan 12, 2022

@floreks
So I've tested it again with `securityContext: null` locally, and it works.
As @desaintmartin already said, swapping `if` for `with` wouldn't change anything.
So I reverted my last commit and instead added a single line of docs (a comment) to the default values.yaml, so anybody who stumbles over this in the future can find it.
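The added comment could look roughly like this (illustrative wording, not a verbatim quote of the commit):

```yaml
# To completely disable the securityContext, set it to null
# in your custom values.yaml: `securityContext: null`
securityContext:
  seccompProfile:
    type: RuntimeDefault
```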

Thanks for your work!
greez

@desaintmartin
Member

desaintmartin commented Jan 12, 2022

OK, understood. Your comment is clear, and without it the solution would not be immediately obvious (as you actually need to ADD a line to your custom values in order to unset a parameter from the default values).

Could you bump the patch version (according to semver) in Chart.yaml and sign your commit for the CLA?

Signed-off-by: Jan Lauber <jan.lauber@protonmail.ch>
@janlauber
Contributor Author

janlauber commented Jan 12, 2022

@desaintmartin
I bumped the version to 5.1.1, and the CLA check should pass now (I signed the CLA via HelloSign just now).
Thanks and greez

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Jan 12, 2022
@desaintmartin
Member

One last thing: could you edit the PR name?

@janlauber janlauber changed the title Change securityContext default values rendering Add securityContext value documentation Jan 12, 2022
@janlauber
Contributor Author

Something like that?

@janlauber janlauber changed the title Add securityContext value documentation Add docs to disable securityContext Jan 12, 2022
@desaintmartin
Member

Yes, thanks!
/lgtm

@k8s-ci-robot k8s-ci-robot added lgtm "Looks good to me", indicates that a PR is ready to be merged. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Jan 12, 2022
@janlauber
Contributor Author

The do-not-merge/hold label probably needs to be removed before this can merge.

@desaintmartin
Member

I'll let @floreks remove the hold he put in place, if he agrees. :)

@floreks
Member

floreks commented Jan 13, 2022

Environment

Kubernetes version: v1.21.1
Cluster deployment: Kind
Helm version: v3.7.0

Installation

helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -f values.yaml

Default Values

values.yaml

Scenario 1 (default values)

Result

Deployment spec contains the below security context

spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsGroup: 2001
        runAsUser: 1001

Pod spec contains the below security context

spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsGroup: 2001
        runAsUser: 1001

Scenario 2 (security context null)

In this scenario, we'll set the securityContext to null in values.yaml file (line 74).

securityContext: null

Result

Deployment spec contains the below security context

spec:
  containers:
    - securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsGroup: 2001
        runAsUser: 1001

Pod spec contains the below security context

spec:
  containers:
    - securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsGroup: 2001
        runAsUser: 1001

Scenario 3 (security context null + container security context null)

In this scenario, we'll also set the containerSecurityContext to null in values.yaml file (line 313).

Result

Both deployment spec and pod spec do not contain any security context.

@floreks
Member

floreks commented Jan 13, 2022

Unfortunately, it appears that security context configuration, in general, is scattered around the values.yaml file and it is not obvious how to control it properly. We should have one simple way of controlling security context configuration across all our deployments.

General context added on the spec level.

# SecurityContext to be added to kubernetes dashboard pods
securityContext:
  seccompProfile:
    type: RuntimeDefault

Metrics scraper context added on the spec.containers level.

## SecurityContext for the kubernetes dashboard metrics scraper container
containerSecurityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsUser: 1001
  runAsGroup: 2001

Dashboard context added on the spec.containers level.

## SecurityContext for the kubernetes dashboard container
containerSecurityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsUser: 1001
  runAsGroup: 2001

@janlauber @desaintmartin it is actually a mistake on our side as it is indeed misleading how to control this configuration. It should be refactored and simplified.

@desaintmartin
Member

Indeed, at least what we can do is put the securityContext and containerSecurityContext next to each other!

Regarding metricsScraper, I do not think it would be wise to put a parameter tied to this container outside of this top-level value.

@floreks
Member

floreks commented Jan 13, 2022

Couldn't we just use a single configuration for everything? We anyway want to use the same security context for both dashboard and scraper.

@janlauber
Contributor Author

@floreks I think it would make sense to use a single configuration for both.
I can start implementing this in this PR, if that's OK?
greez

@floreks
Member

floreks commented Jan 13, 2022

I think you can start working on that if you want 🙂

@janlauber
Contributor Author

janlauber commented Jan 13, 2022

@floreks
I would like to try something like the kube-prometheus-stack helm chart does:
https://github.com/prometheus-community/helm-charts/blob/c607b596cd6bc49a02b8e142108a91d1089fb781/charts/kube-prometheus-stack/values.yaml#L101-L105

That is, adding a global section to set some values globally.
Would this also make sense to you?
Or should I just define securityContext and containerSecurityContext once each as top-level values (indent 0)?
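Such a global section might look like this (hypothetical value names, mirroring the kube-prometheus-stack layout; not a final chart API):

```yaml
global:
  # Applied at the pod level for all workloads in the chart
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  # Applied at the container level for all workloads in the chart
  containerSecurityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsUser: 1001
    runAsGroup: 2001
```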

@desaintmartin
Member

I am afraid it would add complexity to the chart. What could be done is something simple (but well documented in the values.yaml):

  • put securityContext and containerSecurityContext next to each other
  • if metricsScraper.containerSecurityContext is not set, take the value from containerSecurityContext, and make metricsScraper.containerSecurityContext null by default (commented out?)

What do you think?
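The proposed fallback could be sketched in the scraper's template like this (using sprig's `default`; the value names follow the chart, but the snippet itself is illustrative):

```yaml
{{- with .Values.metricsScraper.containerSecurityContext | default .Values.containerSecurityContext }}
securityContext:
  {{- toYaml . | nindent 10 }}
{{- end }}
```

Here `a | default b` yields `b` whenever `a` is null or empty, so a commented-out metricsScraper.containerSecurityContext falls through to the shared containerSecurityContext value.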

Signed-off-by: Jan Lauber <jan.lauber@protonmail.ch>
@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed lgtm "Looks good to me", indicates that a PR is ready to be merged. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Jan 13, 2022
@janlauber
Contributor Author

@desaintmartin & @floreks
I've changed the rendering so that containerSecurityContext is a single value next to securityContext.
Test cases:
Helm template test command, run from dashboard/aio/deploy/helm-chart/kubernetes-dashboard:

helm template -f values-test.yaml --dry-run .

1. default values in custom values-test.yaml:

# empty

Output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-kubernetes-dashboard
  annotations:
  labels:
    app.kubernetes.io/name: kubernetes-dashboard
    helm.sh/chart: kubernetes-dashboard-5.1.1
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "2.4.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: kubernetes-dashboard
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:

      app.kubernetes.io/name: kubernetes-dashboard
      app.kubernetes.io/instance: RELEASE-NAME
      app.kubernetes.io/component: kubernetes-dashboard
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/name: kubernetes-dashboard
        helm.sh/chart: kubernetes-dashboard-5.1.1
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/version: "2.4.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: kubernetes-dashboard
    spec:
      securityContext:

        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: RELEASE-NAME-kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: "kubernetesui/dashboard:v2.4.0"
        imagePullPolicy: IfNotPresent
        args:
          - --namespace=default
          - --auto-generate-certificates
          - --metrics-provider=none
        ports:
        - name: https
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
        resources:

          limits:
            cpu: 2
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:

          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsGroup: 2001
          runAsUser: 1001
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: RELEASE-NAME-kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}

2. default values in custom values-test.yaml with metrics scraper enabled:

metricsScraper:
  enabled: true

Output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-kubernetes-dashboard
  annotations:
  labels:
    app.kubernetes.io/name: kubernetes-dashboard
    helm.sh/chart: kubernetes-dashboard-5.1.1
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "2.4.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: kubernetes-dashboard
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:

      app.kubernetes.io/name: kubernetes-dashboard
      app.kubernetes.io/instance: RELEASE-NAME
      app.kubernetes.io/component: kubernetes-dashboard
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/name: kubernetes-dashboard
        helm.sh/chart: kubernetes-dashboard-5.1.1
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/version: "2.4.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: kubernetes-dashboard
    spec:
      securityContext:

        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: RELEASE-NAME-kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: "kubernetesui/dashboard:v2.4.0"
        imagePullPolicy: IfNotPresent
        args:
          - --namespace=default
          - --auto-generate-certificates
          - --sidecar-host=http://127.0.0.1:8000
        ports:
        - name: https
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
        resources:

          limits:
            cpu: 2
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:

          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsGroup: 2001
          runAsUser: 1001
      - name: dashboard-metrics-scraper
        image: "kubernetesui/metrics-scraper:v1.0.7"
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 8000
            protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 30
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        securityContext:

          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsGroup: 2001
          runAsUser: 1001
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: RELEASE-NAME-kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}

3. custom metrics scraper values in custom values-test.yaml:

metricsScraper:
  enabled: true
  containerSecurityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsUser: 1002 # value for separation
    runAsGroup: 2002 # value for separation

Output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-kubernetes-dashboard
  annotations:
  labels:
    app.kubernetes.io/name: kubernetes-dashboard
    helm.sh/chart: kubernetes-dashboard-5.1.1
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "2.4.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: kubernetes-dashboard
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:

      app.kubernetes.io/name: kubernetes-dashboard
      app.kubernetes.io/instance: RELEASE-NAME
      app.kubernetes.io/component: kubernetes-dashboard
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/name: kubernetes-dashboard
        helm.sh/chart: kubernetes-dashboard-5.1.1
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/version: "2.4.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: kubernetes-dashboard
    spec:
      securityContext:

        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: RELEASE-NAME-kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: "kubernetesui/dashboard:v2.4.0"
        imagePullPolicy: IfNotPresent
        args:
          - --namespace=default
          - --auto-generate-certificates
          - --sidecar-host=http://127.0.0.1:8000
        ports:
        - name: https
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
        resources:

          limits:
            cpu: 2
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:

          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsGroup: 2001
          runAsUser: 1001
      - name: dashboard-metrics-scraper
        image: "kubernetesui/metrics-scraper:v1.0.7"
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 8000
            protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 30
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        securityContext:

          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsGroup: 2002
          runAsUser: 1002
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: RELEASE-NAME-kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}

4. globally disable securityContext and containerSecurityContext in custom values-test.yaml:

securityContext: null
containerSecurityContext: null

Output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-kubernetes-dashboard
  annotations:
  labels:
    app.kubernetes.io/name: kubernetes-dashboard
    helm.sh/chart: kubernetes-dashboard-5.1.1
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "2.4.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: kubernetes-dashboard
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:

      app.kubernetes.io/name: kubernetes-dashboard
      app.kubernetes.io/instance: RELEASE-NAME
      app.kubernetes.io/component: kubernetes-dashboard
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/name: kubernetes-dashboard
        helm.sh/chart: kubernetes-dashboard-5.1.1
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/version: "2.4.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: kubernetes-dashboard
    spec:
      serviceAccountName: RELEASE-NAME-kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: "kubernetesui/dashboard:v2.4.0"
        imagePullPolicy: IfNotPresent
        args:
          - --namespace=default
          - --auto-generate-certificates
          - --metrics-provider=none
        ports:
        - name: https
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
        resources:

          limits:
            cpu: 2
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: RELEASE-NAME-kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}

5. globally disable securityContext and containerSecurityContext, but set the metrics scraper's containerSecurityContext in custom values-test.yaml:

securityContext: null
containerSecurityContext: null
metricsScraper:
  enabled: true
  containerSecurityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsUser: 1002 # value for separation
    runAsGroup: 2002 # value for separation

Output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-kubernetes-dashboard
  annotations:
  labels:
    app.kubernetes.io/name: kubernetes-dashboard
    helm.sh/chart: kubernetes-dashboard-5.1.1
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "2.4.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: kubernetes-dashboard
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:

      app.kubernetes.io/name: kubernetes-dashboard
      app.kubernetes.io/instance: RELEASE-NAME
      app.kubernetes.io/component: kubernetes-dashboard
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/name: kubernetes-dashboard
        helm.sh/chart: kubernetes-dashboard-5.1.1
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/version: "2.4.0"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: kubernetes-dashboard
    spec:
      serviceAccountName: RELEASE-NAME-kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: "kubernetesui/dashboard:v2.4.0"
        imagePullPolicy: IfNotPresent
        args:
          - --namespace=default
          - --auto-generate-certificates
          - --sidecar-host=http://127.0.0.1:8000
        ports:
        - name: https
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
        resources:

          limits:
            cpu: 2
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
      - name: dashboard-metrics-scraper
        image: "kubernetesui/metrics-scraper:v1.0.7"
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 8000
            protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 30
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        securityContext:

          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsGroup: 2002
          runAsUser: 1002
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: RELEASE-NAME-kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}

I hope this implementation is ok for you guys :)
greez

@janlauber janlauber changed the title Add docs to disable securityContext Change containerSecurityContext rendering and add docs Jan 13, 2022
@floreks
Member

floreks commented Jan 13, 2022

If there are no more concerns from @desaintmartin then this LGTM and we can merge.

@desaintmartin
Member

/lgtm
/hold cancel
let's accept this minor typo. ;)

@k8s-ci-robot k8s-ci-robot added lgtm "Looks good to me", indicates that a PR is ready to be merged. and removed do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. labels Jan 14, 2022
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: desaintmartin, janlauber

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit fab317c into kubernetes:master Jan 14, 2022
@janlauber
Contributor Author

@desaintmartin & @floreks
Thank you very much for merging and your work!

Greez
