
calico-kube-controllers: Run as non-root by default for the s390x image #7955

Merged
Merged 1 commit into projectcalico:master on Aug 29, 2023

Conversation

liudalibj
Contributor

@liudalibj liudalibj commented Aug 24, 2023

Description

Run as non-root by default for the s390x image

  • Create the status and profiles folders
  • Create the related files and chown them to UID 999
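
The bullet points above correspond to Dockerfile changes along these lines (a sketch only: the UID 999 convention comes from the existing amd64 image, and the exact directory and file names here are illustrative):

```Dockerfile
# Create the directories the controller writes to at runtime and hand
# them to the non-root user (UID 999) so the container can run unprivileged.
RUN mkdir /status /profiles && \
    touch /status/status.json && \
    chown -R 999 /status /profiles

# Run as the non-root user by default
USER 999
```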

Related issues/PRs

fixes: #7957

Todos

  • Tests
  • Documentation
  • Release note

Release Note

kube controllers run as a non-root user in s390x builds by default

Reminder for the reviewer

Make sure that this PR has the correct labels and milestone set.

Every PR needs one docs-* label.

  • docs-pr-required: This change requires a change to the documentation that has not been completed yet.
  • docs-completed: This change has all necessary documentation completed.
  • docs-not-required: This change has no user-facing impact and requires no docs.

Every PR needs one release-note-* label.

  • release-note-required: This PR has user-facing changes. Most PRs should have this label.
  • release-note-not-required: This PR has no user-facing changes.

Other optional labels:

  • cherry-pick-candidate: This PR should be cherry-picked to an earlier release. For bug fixes only.
  • needs-operator-pr: This PR is related to install and requires a corresponding change to the operator.

@liudalibj liudalibj requested a review from a team as a code owner August 24, 2023 05:42
@marvin-tigera marvin-tigera added this to the Calico v3.27.0 milestone Aug 24, 2023
@marvin-tigera marvin-tigera added the release-note-required (Change has user-facing impact, no matter how small) and docs-pr-required (Change is not yet documented) labels Aug 24, 2023
@CLAassistant

CLAassistant commented Aug 24, 2023

CLA assistant check
All committers have signed the CLA.

@liudalibj
Contributor Author

Tested on an s390x cluster as follows:

  • build the s390x docker image with this change
cd /root/calico/kube-controllers
ARCH=s390x make build
ARCH=s390x make image
docker tag kube-controllers:latest-s390x liudali/kube-controllers:latest-s390x
docker push liudali/kube-controllers:latest-s390x
  • test the image on a cluster with an s390x worker node
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: calico-kube-controllers-verify
  name: calico-kube-controllers-verify
  namespace: kube-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers-verify
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: calico-kube-controllers-verify
      name: calico-kube-controllers-verify
      namespace: kube-system
    spec:
      containers:
      - env:
        - name: ENABLED_CONTROLLERS
          value: node
        - name: DATASTORE_TYPE
          value: kubernetes
        image: liudali/kube-controllers:latest-s390x
        imagePullPolicy: Always
        livenessProbe:
          exec:
            command:
            - /usr/bin/check-status
            - -l
          failureThreshold: 6
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: calico-kube-controllers-verify
        readinessProbe:
          exec:
            command:
            - /usr/bin/check-status
            - -r
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 3Gi
          requests:
            cpu: 10m
            memory: 25Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          runAsGroup: 0
          runAsNonRoot: true
          runAsUser: 999
          seccompProfile:
            type: RuntimeDefault
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      priorityClassName: system-cluster-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: calico-kube-controllers
      serviceAccountName: calico-kube-controllers
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
kubectl create -f calico-kube-controllers-verify.yaml
deployment.apps/calico-kube-controllers-verify created

kubectl describe po -n kube-system calico-kube-controllers-verify
Name:                 calico-kube-controllers-verify-89cfd964d-zdx6x
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      calico-kube-controllers
Node:                 10.242.0.53/10.242.0.53
Start Time:           Wed, 23 Aug 2023 23:13:24 -0700
Labels:               k8s-app=calico-kube-controllers-verify
                      pod-template-hash=89cfd964d
Annotations:          cni.projectcalico.org/containerID: 9b26290825748fa6552e9a25a20df79360f7322df6f08b617a7cfc8b830c877c
                      cni.projectcalico.org/podIP: 172.17.114.215/32
                      cni.projectcalico.org/podIPs: 172.17.114.215/32
Status:               Running
IP:                   172.17.114.215
IPs:
  IP:           172.17.114.215
Controlled By:  ReplicaSet/calico-kube-controllers-verify-89cfd964d
Containers:
  calico-kube-controllers-verify:
    Container ID:   containerd://0e2d032e711e944d0a6d30549184e49d49874f7966819f4b8d2961ea500a4a7d
    Image:          liudali/kube-controllers:latest-s390x
    Image ID:       docker.io/liudali/kube-controllers@sha256:c33acc653c0a8bb96f595b3586b11ea9ac0f10e4c29b16fc87bcd71000c76d42
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 23 Aug 2023 23:13:27 -0700
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  3Gi
    Requests:
      cpu:      10m
      memory:   25Mi
    Liveness:   exec [/usr/bin/check-status -l] delay=10s timeout=10s period=10s #success=1 #failure=6
    Readiness:  exec [/usr/bin/check-status -r] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      ENABLED_CONTROLLERS:  node
      DATASTORE_TYPE:       kubernetes
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-chm9t (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-chm9t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 CriticalAddonsOnly op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 600s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 600s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  75s   default-scheduler  Successfully assigned kube-system/calico-kube-controllers-verify-89cfd964d-zdx6x to 10.242.0.53
  Normal  Pulling    73s   kubelet            Pulling image "liudali/kube-controllers:latest-s390x"
  Normal  Pulled     72s   kubelet            Successfully pulled image "liudali/kube-controllers:latest-s390x" in 680.741714ms (680.755137ms including waiting)
  Normal  Created    72s   kubelet            Created container calico-kube-controllers-verify
  Normal  Started    72s   kubelet            Started container calico-kube-controllers-verify

Contributor

@mgleung mgleung left a comment


Makes sense to me. Thanks for bringing the changes from our amd64 builds over!

@mgleung
Contributor

mgleung commented Aug 24, 2023

/sem-approve

@liudalibj
Contributor Author

liudalibj commented Aug 25, 2023

@mgleung thanks for taking a look at this PR.
I want to check the failed job, but it hangs while loading the page https://tigera.semaphoreci.com/jobs/6e2737fb-2631-47c7-999a-043e32b308a2
Could you help paste/check the detailed error?

@lwr20
Member

lwr20 commented Aug 25, 2023

Tail end of log from: https://tigera.semaphoreci.com/jobs/6e2737fb-2631-47c7-999a-043e32b308a2

 => [internal] load build definition from Dockerfile.s390x                 0.0s
 => => transferring dockerfile: 1.46kB                                     0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 2B                                            0.0s
 => [internal] load metadata for registry.access.redhat.com/ubi8/ubi-mini  1.0s
 => [ubi 1/7] FROM registry.access.redhat.com/ubi8/ubi-minimal:8.7@sha256  1.5s
 => => resolve registry.access.redhat.com/ubi8/ubi-minimal:8.7@sha256:987  0.0s
 => => sha256:987ae81ce046652ee4a2c3df54dad5e82faa1b078da 1.47kB / 1.47kB  0.0s
 => => sha256:bae464b2f4a68b06a8340b138f21ee8204b7021105ac8ac 429B / 429B  0.0s
 => => sha256:8cc55f9bbf589b8a128c1145f6c6b3e89ae111da37d 6.25kB / 6.25kB  0.0s
 => => sha256:fd555f115f1f8370328b19ded057891778434997f 37.51MB / 37.51MB  0.7s
 => => extracting sha256:fd555f115f1f8370328b19ded057891778434997fc4f6371  0.7s
 => [internal] load build context                                          0.5s
 => => transferring context: 74.60MB                                       0.5s
 => ERROR [ubi 2/7] RUN mkdir /licenses                                    0.2s
------
 > [ubi 2/7] RUN mkdir /licenses:
0.136 exec /bin/sh: no such file or directory
------
Dockerfile.s390x:20
--------------------
  18 |     
  19 |     # Add in top-level license file
  20 | >>> RUN mkdir /licenses
  21 |     COPY LICENSE /licenses
  22 |     
--------------------
ERROR: failed to solve: process "/bin/sh -c mkdir /licenses" did not complete successfully: exit code: 1
make[1]: *** [Makefile:110: image.created-s390x] Error 1
make[1]: Leaving directory '/home/semaphore/calico/kube-controllers'
make: *** [Makefile:103: sub-image-s390x] Error 2

@mgleung
Contributor

mgleung commented Aug 25, 2023

/sem-approve

- create status and profiles folder
- create related files and chown to 999

Signed-off-by: Da Li Liu <liudali@cn.ibm.com>
@liudalibj
Contributor Author

liudalibj commented Aug 28, 2023

Thanks @lwr20 for providing the detailed error logs.


This issue should be fixed by updating Dockerfile.s390x in the same way as the apiserver's Dockerfile (https://github.com/projectcalico/calico/blob/master/apiserver/docker-image/Dockerfile.s390x#L7), adding:

COPY --from=qemu /usr/bin/qemu-s390x-static /usr/bin/
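
For context, the `exec /bin/sh: no such file or directory` failure happens because the CI builder is amd64, so the s390x `/bin/sh` inside the UBI base image cannot execute without emulation; copying the static QEMU binary into the image before any `RUN` step lets binfmt_misc run the s390x binaries. A sketch of how the fix sits in Dockerfile.s390x (the `qemu` stage name and its source image are assumptions modeled on the linked apiserver Dockerfile, not copied from this PR):

```Dockerfile
# Hypothetical stage providing the static QEMU binary
FROM calico/qemu-user-static:latest AS qemu

FROM registry.access.redhat.com/ubi8/ubi-minimal:8.7 AS ubi
# Static QEMU lets RUN steps in this s390x image execute on an amd64 builder
COPY --from=qemu /usr/bin/qemu-s390x-static /usr/bin/

# Add in top-level license file
RUN mkdir /licenses
COPY LICENSE /licenses
```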

@liudalibj
Contributor Author

@mgleung @lwr20 could you help trigger the PR build again? Thanks!

@lwr20
Member

lwr20 commented Aug 29, 2023

/sem-approve

@liudalibj
Contributor Author

@lwr20 thanks for triggering the PR build again. The build passed; could you help merge it?

@lwr20 lwr20 merged commit b494be8 into projectcalico:master Aug 29, 2023
2 checks passed
Labels
docs-not-required (Docs not required for this change), release-note-required (Change has user-facing impact, no matter how small)
Development

Successfully merging this pull request may close these issues.

The strange calico-kube-controller v3.26.1 behaviour in a cluster with s390x worker nodes.
5 participants