[Fixup #342] log-size-global-max Should Be log-global-size-max #344

Merged
merged 1 commit into containers:main on Jun 16, 2022

Conversation

@hswong3i (Contributor) commented Jun 16, 2022

#342 added support for `log-global-size-max` for cri-o/cri-o@a4080bb, but incorrectly named the CLI option `log-size-global-max`.

Sample error message with the Cilium postStart and preStop hooks:

[root@cp-nightsky ~]# kubectl -n kube-system describe pod cilium-rhvxd 
Name:                 cilium-rhvxd
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 cp-nightsky/147.8.202.52
Start Time:           Thu, 16 Jun 2022 01:26:46 +0000
Labels:               controller-revision-hash=b7889fb75
                      k8s-app=cilium
                      pod-template-generation=6
Annotations:          scheduler.alpha.kubernetes.io/critical-pod: 
Status:               Running
IP:                   147.8.202.52
IPs:
  IP:           147.8.202.52
Controlled By:  DaemonSet/cilium
Init Containers:
  mount-cgroup:
    Container ID:  cri-o://d10a20eb758aeb74178bf5b48203976e3c85b14e46e820924ff083dcc638c27e
    Image:         quay.io/cilium/cilium:v1.11.6
    Image ID:      quay.io/cilium/cilium@sha256:c52dbdf7a07cc7ad5e73eecf1ac79e5191d6ec57350c10ef536267c620f9c7f6
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -ec
      cp /usr/bin/cilium-mount /hostbin/cilium-mount;
      nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
      rm /hostbin/cilium-mount
      
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 16 Jun 2022 01:26:47 +0000
      Finished:     Thu, 16 Jun 2022 01:26:47 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      CGROUP_ROOT:  /run/cilium/cgroupv2
      BIN_PATH:     /usr/libexec/cni
    Mounts:
      /hostbin from cni-path (rw)
      /hostproc from hostproc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-842gj (ro)
  wait-for-node-init:
    Container ID:  cri-o://cda257c09ae543b0b2dbc863fd73453c9c1ac17dbbcf51ab3b6f965087a3bb0e
    Image:         quay.io/cilium/cilium:v1.11.6
    Image ID:      quay.io/cilium/cilium@sha256:c52dbdf7a07cc7ad5e73eecf1ac79e5191d6ec57350c10ef536267c620f9c7f6
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until test -s "/tmp/cilium-bootstrap.d/cilium-bootstrap-time"; do
        echo "Waiting on node-init to run...";
        sleep 1;
      done
      
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 16 Jun 2022 01:26:48 +0000
      Finished:     Thu, 16 Jun 2022 01:26:48 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/cilium-bootstrap.d from cilium-bootstrap-file-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-842gj (ro)
  clean-cilium-state:
    Container ID:  cri-o://e0a6b1a59fc9a64ca3774bb197c2d1da7ddfc63bbed238e96e055b4e78a15bab
    Image:         quay.io/cilium/cilium:v1.11.6
    Image ID:      quay.io/cilium/cilium@sha256:c52dbdf7a07cc7ad5e73eecf1ac79e5191d6ec57350c10ef536267c620f9c7f6
    Port:          <none>
    Host Port:     <none>
    Command:
      /init-container.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 16 Jun 2022 01:26:49 +0000
      Finished:     Thu, 16 Jun 2022 01:26:49 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:
      CILIUM_ALL_STATE:  <set to the key 'clean-cilium-state' of config map 'cilium-config'>      Optional: true
      CILIUM_BPF_STATE:  <set to the key 'clean-cilium-bpf-state' of config map 'cilium-config'>  Optional: true
    Mounts:
      /run/cilium/cgroupv2 from cilium-cgroup (rw)
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/cilium from cilium-run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-842gj (ro)
Containers:
  cilium-agent:
    Container ID:  cri-o://8585d97605e51d28f84ad84399cb49f6fc372d2591a2992483c2666963fc86e4
    Image:         quay.io/cilium/cilium:v1.11.6
    Image ID:      quay.io/cilium/cilium@sha256:c52dbdf7a07cc7ad5e73eecf1ac79e5191d6ec57350c10ef536267c620f9c7f6
    Port:          <none>
    Host Port:     <none>
    Command:
      cilium-agent
    Args:
      --config-dir=/tmp/cilium/config-map
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 16 Jun 2022 01:53:07 +0000
      Finished:     Thu, 16 Jun 2022 01:53:09 +0000
    Ready:          False
    Restart Count:  10
    Liveness:       http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=10
    Readiness:      http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=3
    Startup:        http-get http://127.0.0.1:9879/healthz delay=0s timeout=1s period=2s #success=1 #failure=105
    Environment:
      K8S_NODE_NAME:               (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:       kube-system (v1:metadata.namespace)
      CILIUM_CLUSTERMESH_CONFIG:  /var/lib/cilium/clustermesh/
      CILIUM_CNI_CHAINING_MODE:   <set to the key 'cni-chaining-mode' of config map 'cilium-config'>  Optional: true
      CILIUM_CUSTOM_CNI_CONF:     <set to the key 'custom-cni-conf' of config map 'cilium-config'>    Optional: true
    Mounts:
      /host/etc/cni/net.d from etc-cni-netd (rw)
      /host/opt/cni/bin from cni-path (rw)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/bpf from bpf-maps (rw)
      /tmp/cilium/config-map from cilium-config-path (ro)
      /var/lib/cilium/clustermesh from clustermesh-secrets (ro)
      /var/run/cilium from cilium-run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-842gj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  cilium-run:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/cilium
    HostPathType:  DirectoryOrCreate
  bpf-maps:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  DirectoryOrCreate
  hostproc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  Directory
  cilium-cgroup:
    Type:          HostPath (bare host directory volume)
    Path:          /run/cilium/cgroupv2
    HostPathType:  DirectoryOrCreate
  cni-path:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/cni
    HostPathType:  DirectoryOrCreate
  etc-cni-netd:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  DirectoryOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  cilium-bootstrap-file-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/cilium-bootstrap.d
    HostPathType:  DirectoryOrCreate
  clustermesh-secrets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cilium-clustermesh
    Optional:    true
  cilium-config-path:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cilium-config
    Optional:  false
  kube-api-access-842gj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason               Age                From               Message
  ----     ------               ----               ----               -------
  Normal   Pulled               31m                kubelet            Container image "quay.io/cilium/cilium:v1.11.6" already present on machine
  Normal   Scheduled            31m                default-scheduler  Successfully assigned kube-system/cilium-rhvxd to cp-nightsky
  Normal   Started              31m                kubelet            Started container mount-cgroup
  Normal   Created              31m                kubelet            Created container mount-cgroup
  Normal   Started              31m                kubelet            Started container wait-for-node-init
  Normal   Pulled               31m                kubelet            Container image "quay.io/cilium/cilium:v1.11.6" already present on machine
  Normal   Created              31m                kubelet            Created container wait-for-node-init
  Normal   Started              31m                kubelet            Started container clean-cilium-state
  Normal   Pulled               31m                kubelet            Container image "quay.io/cilium/cilium:v1.11.6" already present on machine
  Normal   Created              31m                kubelet            Created container clean-cilium-state
  Normal   Created              31m (x2 over 31m)  kubelet            Created container cilium-agent
  Normal   Started              31m (x2 over 31m)  kubelet            Started container cilium-agent
  Warning  FailedPostStartHook  31m (x2 over 31m)  kubelet            Exec lifecycle hook ([/cni-install.sh --enable-debug=false --cni-exclusive=true]) for Container "cilium-agent" in Pod "cilium-rhvxd_kube-system(9bb5a031-8999-4560-8621-6add93a24a07)" failed - error: rpc error: code = Unknown desc = command error: EOF, stdout: conmon: option parsing failed: Unknown option --log-global-size-max
, stderr: , exit code -1, message: ""
  Normal   Killing            31m (x2 over 31m)  kubelet  FailedPostStartHook
  Warning  FailedPreStopHook  31m (x2 over 31m)  kubelet  Exec lifecycle hook ([/cni-uninstall.sh]) for Container "cilium-agent" in Pod "cilium-rhvxd_kube-system(9bb5a031-8999-4560-8621-6add93a24a07)" failed - error: rpc error: code = Unknown desc = command error: EOF, stdout: conmon: option parsing failed: Unknown option --log-global-size-max
, stderr: , exit code -1, message: ""
  Normal   Pulled   31m (x3 over 31m)    kubelet  Container image "quay.io/cilium/cilium:v1.11.6" already present on machine
  Warning  BackOff  91s (x174 over 31m)  kubelet  Back-off restarting failed container

[root@cp-nightsky ~]# conmon --help | grep log
  --log-level                  Print debug logs based on log level
  -l, --log-path               Log file path
  --log-size-max               Maximum size of log file
  --log-size-global-max        Maximum size of all log files
  --log-tag                    Additional tag to use for logging
  --no-sync-log                Do not manually call sync on logs after container shutdown
  --syslog                     Log to syslog (use with cgroupfs cgroup manager)
  
[root@cp-nightsky ~]# conmon --version
conmon version 2.1.2
commit: 99eac3e82289c18465adeab5c522469ad14e5725
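Given the `--help` output above, a quick way to tell which spelling a conmon build advertises is to match against its help text. The `detect_spelling` helper below is hypothetical (not part of conmon); against a real binary you would call it as `detect_spelling "$(conmon --help 2>&1)"`:

```shell
# Hypothetical helper: report which spelling of the global log-size
# option a conmon build advertises, given its --help output as $1.
detect_spelling() {
  case "$1" in
    *--log-global-size-max*) echo "fixed: --log-global-size-max" ;;
    *--log-size-global-max*) echo "broken: --log-size-global-max" ;;
    *)                       echo "option not present" ;;
  esac
}

# Against the conmon 2.1.2 help line shown above:
detect_spelling '--log-size-global-max        Maximum size of all log files'
# prints: broken: --log-size-global-max
```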

A dirty workaround applied to my conmon packaging on OBS resolves the issue above:
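Such a packaging workaround presumably amounts to renaming the mis-spelled option string in the source tree before building. A minimal self-contained sketch follows; the file name `cli.c` and its contents are illustrative assumptions, not the actual conmon tree:

```shell
# Sketch of the packaging workaround: rename the mis-spelled option string
# in a source file before building. File name and contents are illustrative.
tmp=$(mktemp -d)
printf '"log-size-global-max",\n' > "$tmp/cli.c"
sed -i 's/log-size-global-max/log-global-size-max/g' "$tmp/cli.c"
grep -c 'log-global-size-max' "$tmp/cli.c"   # prints: 1
rm -rf "$tmp"
```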

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>

[Fixup containers#342] `log-size-global-max` Should Be `log-global-size-max`

containers#342 added support for `log-global-size-max` for cri-o/cri-o@a4080bb, but incorrectly named the CLI option as `log-size-global-max`.

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
@hswong3i changed the title from "[Fixup #342] log-size-global-max Should Belog-global-size-max" to "[Fixup #342] log-size-global-max Should Be log-global-size-max" on Jun 16, 2022
@haircommander (Collaborator)

that is embarrassing... thank you @hswong3i

@haircommander haircommander merged commit 2bc95ee into containers:main Jun 16, 2022
9 of 13 checks passed
@haircommander (Collaborator)

FYI I've retagged 2.1.2 so you can remove the hotfix

@hswong3i (Contributor, Author)

@haircommander thank you very much; my conmon packaging for v2.1.2 has also been rebased accordingly:
