
kube-proxy pod fails to run due to unmounted volumes #4864

Closed
xhejtman opened this issue Oct 9, 2023 · 50 comments

@xhejtman

xhejtman commented Oct 9, 2023

Environmental Info:
RKE2 Version: 1.26.9+rke2r1

Node(s) CPU architecture, OS, and Version:
Linux kub-b9.priv.cerit-sc.cz 6.2.0-33-generic #33~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Sep 7 10:33:52 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration:
5 servers, 30 agents

Describe the bug:
Sometimes kube-proxy is unable to start. I can see that the sandbox appears to be present:

root@kub-b9:~# CONTAINER_RUNTIME_ENDPOINT="unix:///var/run/k3s/containerd/containerd.sock" crictl pods -o yaml --label io.kubernetes.pod.name=kube-proxy-kub-b9.priv.cerit-sc.cz
items:
- annotations:
    kubernetes.io/config.hash: ce051fc91f9e463593a1d45efa60be52
    kubernetes.io/config.seen: "2023-10-03T14:17:50.457343231+02:00"
    kubernetes.io/config.source: file
  createdAt: "1696336132298880783"
  id: e505595e4af3ccb8e46c9dda0a79ce997b89c33529600708a389b7874b386ad3
  labels:
    component: kube-proxy
    io.kubernetes.pod.name: kube-proxy-kub-b9.priv.cerit-sc.cz
    io.kubernetes.pod.namespace: kube-system
    io.kubernetes.pod.uid: ce051fc91f9e463593a1d45efa60be52
    tier: control-plane
  metadata:
    attempt: 1
    name: kube-proxy-kub-b9.priv.cerit-sc.cz
    namespace: kube-system
    uid: ce051fc91f9e463593a1d45efa60be52
  runtimeHandler: ""
  state: SANDBOX_READY

root@kub-b9:~#

However, it is not actually running as it cannot be stopped:

root@kub-b9:~# CONTAINER_RUNTIME_ENDPOINT="unix:///var/run/k3s/containerd/containerd.sock" crictl stop e505595e4af3c
E1005 20:28:29.770933 1411714 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e505595e4af3c\": not found" containerID="e505595e4af3c"
FATA[0000] stopping the container "e505595e4af3c": rpc error: code = NotFound desc = an error occurred when try to find container "e505595e4af3c": not found
root@kub-b9:~#

It can be force removed:

root@kub-b9:~# CONTAINER_RUNTIME_ENDPOINT="unix:///var/run/k3s/containerd/containerd.sock" crictl rmp -f e505595e4af3c
Stopped sandbox e505595e4af3c
Removed sandbox e505595e4af3c
root@kub-b9:~#

However, it does not start again: service rke2-agent restart does not spawn the kube-proxy pod.

root@kub-b9:~# CONTAINER_RUNTIME_ENDPOINT="unix:///var/run/k3s/containerd/containerd.sock" crictl pods -o yaml --label io.kubernetes.pod.name=kube-proxy-kub-b9.priv.cerit-sc.cz
items: []

root@kub-b9:~#
root@kub-b9:~# CONTAINER_RUNTIME_ENDPOINT="unix:///var/run/k3s/containerd/containerd.sock" crictl ps -a | grep kube-proxy
root@kub-b9:~#

kubelet log is attached.

kubelet.log.gz

@brandond
Contributor

brandond commented Oct 9, 2023

However, it is not actually running as it cannot be stopped:

You're correctly using crictl pods and crictl rmp to list and delete the pod, but crictl stop takes a container ID, not a pod sandbox ID. Are you looking for crictl stopp?
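To make the distinction concrete, a small sketch (the helper name is mine, the endpoint path is the one used elsewhere in this thread; `stopp`/`rmp` are the standard pod-level crictl verbs, `stop`/`rm` the container-level ones):

```shell
# Pod sandboxes are managed with stopp/rmp; individual containers with stop/rm.
export CONTAINER_RUNTIME_ENDPOINT="unix:///var/run/k3s/containerd/containerd.sock"

stop_and_remove_sandbox() {
  # Illustrative helper, not an official script: takes a pod sandbox ID
  # (as printed by `crictl pods`), not a container ID.
  sandbox_id="$1"
  crictl stopp "$sandbox_id"   # stop the pod sandbox
  crictl rmp "$sandbox_id"     # remove the pod sandbox
}
```

Passing a sandbox ID to `crictl stop` produces exactly the "not found" error shown above, because it looks the ID up in the container table.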

The kubelet log indicates that it's unable to complete a synchronization of pods, either from the apiserver or from the static pod manifests. Note that the seenSources map is empty:

I1005 20:37:54.212933 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:54.312994 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:54.412643 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:54.512747 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:54.613093 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:54.712862 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:54.813317 1424921 generic.go:222] "GenericPLEG: Relisting"
I1005 20:37:54.813374 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:54.913133 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:55.012751 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:55.112466 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:55.213111 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:55.312780 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:55.413237 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
E1005 20:37:55.468783 1424921 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
I1005 20:37:55.516255 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:55.612449 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:55.712897 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:55.813073 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:55.817338 1424921 generic.go:222] "GenericPLEG: Relisting"
I1005 20:37:55.912480 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:56.013898 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:56.112502 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:56.212781 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:56.313332 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:56.412827 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:56.512568 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:56.613159 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:56.712676 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:56.813121 1424921 config.go:105] "Looking for sources, have seen" sources=[api file] seenSources=map[]
I1005 20:37:56.824273 1424921 generic.go:222] "GenericPLEG: Relisting"

It never seems to complete synchronizing the pod sources, so it doesn't know what to create or delete.
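The gate those log lines reflect can be modeled roughly as follows (a simplified illustration, not actual kubelet code): pod sync only proceeds once every configured source, `api` and `file` here, has reported at least once.

```shell
# Simplified model of the kubelet's source gate: while seenSources=map[],
# ready_to_sync fails and the kubelet keeps logging "Looking for sources".
EXPECTED_SOURCES="api file"

ready_to_sync() {
  seen=" $1 "   # space-separated list of sources seen so far
  for src in $EXPECTED_SOURCES; do
    case "$seen" in
      *" $src "*) ;;   # this source has reported in
      *) return 1 ;;   # still waiting on at least one source
    esac
  done
  return 0
}
```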

Are you able to reproduce this if you don't change the cpu manager policy, but just run with the default? I see it changing several times in the logs.

Can you provide an updated copy of your config.yaml, as well as your containerd.config.toml if you're customizing the containerd config?

@xhejtman
Author

kubelet.log.gz
config.yaml.gz
config.toml.gz

Uploaded all of them, plus a fresh kubelet.log.

The node kub-b9 is still broken, so I can test anything needed.

@jhoelzel

I had something like this happen this morning with a 12-node cluster. It turns out I had set the event-config according to the CIS guide, and my misconfigured Longhorn deployment was spamming the scheduler, so it never got around to restarting it (at least that's my guess).

After I removed the Longhorn deployment and deleted the proxies, I was able to get everything up and running by restarting the rke2 service through the node shell.

Can you relate to that? You would see something spamming the apiserver logs on the master nodes.

@xhejtman
Author

I do not have Longhorn installed.

@brandond
Contributor

I have been doing some tracing of this issue, and it appears that there is a race condition in the kubelet startup, where it will fail to start a static pod if the static pod's mirror pod is discovered from the apiserver at the same time volumes are being mounted. I don't have a good fix for it at the moment, other than removing the kube-proxy static pod from disk prior to starting rke2 - this seems to delay the static pod creation long enough to avoid triggering the race condition. Unfortunately I haven't been able to reproduce the issue myself, so I am relying on some teammates to provide logs from an environment where it does seem to reoccur intermittently.

@xhejtman
Author

@brandond as one of our nodes is still in this state - kube-proxy does not start - I can try/trace anything needed here (I could even provide you remote SSH access).

@brandond
Contributor

Can you try deleting /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml, and then restarting the service?

@xhejtman
Author

Yes, this helps on a different node, but not on kub-b9. I noticed that if I remove the manifest, kube-proxy starts fine on other nodes; I thought it was because a new UID is generated.

By 'remove from disk', did you mean removing content from containerd, or the above-mentioned manifest? I could try the former, but if it fixes the node, we lose the test case.

@brandond
Contributor

Just do exactly as I said and delete the file. If that does not resolve it, you may also try running rke2-killall.sh after deleting the file, although note that this will also terminate any other pods on this node.

@xhejtman
Author

root@kub-b9:~# rm /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml
root@kub-b9:~# service rke2-agent restart
root@kub-b9:~# ps fax | grep kube-proxy
2483805 pts/0    S+     0:00          \_ grep --color=auto kube-proxy
   8871 ?        Ssl    1:16  \_ pushprox-client --fqdn=10.16.62.44 --proxy-url=http://pushprox-kube-proxy-proxy.cattle-monitoring-system.svc:8080 --allow-port=10249 --use-localhost
2255256 ?        Ssl    2:08  \_ pushprox-client --fqdn=10.16.62.44 --proxy-url=http://pushprox-kube-proxy-proxy.cattle-monitoring-system.svc:8080 --allow-port=10249 --use-localhost
root@kub-b9:~#

Even after a few minutes, kube-proxy did not start.

After rke2-killall.sh and service rke2-agent start, kube-proxy finally started.
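For future readers, the working sequence above can be collected into one hedged sketch (the helper name is mine; destructive: rke2-killall.sh terminates every pod on the node, so drain workloads first if you can):

```shell
recover_kube_proxy() {
  # Recap of the steps that worked in this thread; paths are RKE2 defaults.
  rm -f /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml
  rke2-killall.sh              # stops all pods and rke2 processes on this node
  service rke2-agent start     # use rke2-server instead on server nodes
}
```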

@jhoelzel

I have been doing some tracing of this issue, and it appears that there is a race condition in the kubelet startup, where it will fail to start a static pod if the static pod's mirror pod is discovered from the apiserver at the same time volumes are being mounted. I don't have a good fix for it at the moment, other than removing the kube-proxy static pod from disk prior to starting rke2 - this seems to delay the static pod creation long enough to avoid triggering the race condition. Unfortunately I haven't been able to reproduce the issue myself, so I am relying on some teammates to provide logs from an environment where it does seem to reoccur intermittently.

I'm pretty sure that if your event-config is set up to be very low and you have a daemonset failing rapidly, this happens by itself after 24h. There was nothing else "wrong" with my cluster.

As for the solution for me:

  • Delete the pod with kubectl from the node
  • Restart the rke2 service through the CLI

@brandond
Contributor

brandond commented Oct 17, 2023

@jhoelzel
Im pretty sure that if your event-config is setup to be super low and you have a daemonset failing rapidly this happens by itself after 24h.

I'm sorry, you're saying that for some reason the kube-proxy static pod wasn't running because of an issue with an unrelated daemonset? Or are you talking about one of your daemonset pods not running?

@jhoelzel

jhoelzel commented Oct 17, 2023

I'm sorry if I did not make myself clear. What happened for me was that the apiserver was spammed and full of logs from that daemonset (Longhorn), and I had the exact same problem (kube-proxy failed, not coming back up) with the same Kubernetes version as the OP.

My current guess is that it has something to do with the admission config, more specifically the event limit that comes with CIS conformity.

In short: the apiserver gets spammed, kube-proxy goes poof and does not come back.

Further info:

  • Rebooting the node did nothing
  • Just deleting the kube-proxy did nothing
  • Only the combination of deleting the pod through kubectl and then restarting the service worked
  • But only after I removed my problem daemonset from the apiserver

After that, everything just worked without issue, whereas before I was trying to restart the server (including reboots) and nothing was running.

@jonasgroenke

Hey,
this also happens on a fresh install, at least for me.
Deleting the file did not work.
Let me know if any log files will help you.

I'm new to Rancher, and it was not a really smooth experience to have such issues.
Is there a stable version where this problem does not exist?

@xhejtman
Author

Hey, this also happens on a fresh install, at least for me. Deleting the file did not work. Let me know if any log files will help you.

I'm new to Rancher, and it was not a really smooth experience to have such issues. Is there a stable version where this problem does not exist?

what rke2 version do you use?

@jonasgroenke

I'm using v1.26.9+rke2r1 on an Ubuntu 22.04 cloud image.
Basically I'm just following the quick start guide and end up without a kube-proxy, and with the install-cni pod failing because it cannot connect to the apiserver.

@xhejtman
Author

I'm using v1.26.9+rke2r1 on an Ubuntu 22.04 cloud image. Basically I'm just following the quick start guide and end up without a kube-proxy, and with the install-cni pod failing because it cannot connect to the apiserver.

Version 1.26.7+rke2r1 should work fine. But remove everything from /var/lib/rancher/rke2/agent/containerd/ if this is a fresh install. install-cni cannot connect because kube-proxy is not running.

And perhaps you can remove /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml, run rke2-killall.sh, and restart rke2; this could fix the startup as well.

@brandond
Contributor

brandond commented Oct 20, 2023

@TravDev01 I have never seen this issue on a fresh install. I have seen what LOOKS like a similar issue on nodes that do not have sufficient CPU or memory resources available - the control-plane fails to start completely due to lack of resources, which blocks kube-proxy from running. Can you provide the rke2-server log from journald, as well as information on the core count and memory size on your node?

@jonasgroenke

@TravDev01 I have never seen this issue on a fresh install. I have seen what LOOKS like a similar issue on nodes that do not have sufficient CPU or memory resources available - the control-plane fails to start completely due to lack of resources, which blocks kube-proxy from running. Can you provide the rke2-server log from journald, as well as information on the core count and memory size on your node?

Oh wow.
You are absolutely right.

I double-checked my resource configuration and noticed that Proxmox was resetting it when I restored my snapshot between my various tests.
With the correct configuration everything is working, also with the latest stable version.

Might it be beneficial to have a warning message in the log during the first bootstrap when a user tries to create a cluster with fewer than the recommended resources?
For me, it's interesting behavior that the pod is not even deployed. Normally, when you have a resource problem, you see the pod in Kubernetes, but it stays in a Pending state because there is no node capable of running it.

Thanks a lot!

@brandond
Contributor

I don't know if there is an issue tracking this upstream, but the kubelet doesn't give any feedback if static pods fail to schedule. You have to turn the log level up to v=4 or higher, if I remember correctly, and then there is a message complaining about scheduling and resources.
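If anyone wants to try this on RKE2, kubelet flags can be passed through the node config file; a minimal sketch, assuming the v=4 value mentioned above (verify against your RKE2 version's docs):

```yaml
# /etc/rancher/rke2/config.yaml
kubelet-arg:
  - "v=4"   # raise kubelet verbosity so static pod admission failures are logged
```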

@jhoelzel

jhoelzel commented Oct 23, 2023

@TravDev01 I have never seen this issue on a fresh install. I have seen what LOOKS like a similar issue on nodes that do not have sufficient CPU or memory resources available - the control-plane fails to start completely due to lack of resources, which blocks kube-proxy from running. Can you provide the rke2-server log from journald, as well as information on the core count and memory size on your node?

Interesting! So this should solve it from the config?

kube-proxy-arg:
  cpu-request: "100m"
  cpu-limit: "200m"
  memory-request: "64Mi"
  memory-limit: "128Mi"
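
One caveat with the snippet above: kube-proxy-arg values are passed as command-line flags to the kube-proxy binary itself, which to my knowledge has no cpu-request/cpu-limit flags, so these would likely be rejected. For setting resources on RKE2's static pods there is a dedicated config option; a hedged sketch (the values are illustrative, and the option name should be verified against your RKE2 version's docs):

```yaml
# /etc/rancher/rke2/config.yaml
control-plane-resource-requests:
  - kube-proxy-cpu=250m
  - kube-proxy-memory=128M
```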

I just had it happen again today on 4 nodes out of 12. What strikes me as odd is that the first of the two times I can delete the kube-proxy from the node without issue, but when I restart rke2, the proxy still remains in Pending.
But when I try to delete it again, it's stuck in Terminating.

So I killed it with:

sudo /var/lib/rancher/rke2/bin/crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock pods -o yaml --label component=kube-proxy | grep '\bid\b' | awk -F': ' '{print $2}' | xargs -I {} sudo /var/lib/rancher/rke2/bin/crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock rmp -f {}

But I still have no change; it's still terminating.
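As an aside, the grep '\bid\b' stage of that one-liner can match more than intended (e.g. nested fields); a hedged alternative for the ID-extraction step, keyed to the two-space-indented id: field in `crictl pods -o yaml` output as shown earlier in this thread (the helper name is mine):

```shell
# Extract pod sandbox IDs from `crictl pods -o yaml` output on stdin.
# Matches only the top-level `id:` field of each item, not nested keys
# such as `uid:` under metadata.
pod_sandbox_ids() {
  awk -F': ' '$1 == "  id" {print $2}'
}
```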

Further investigation: the kube-proxy logs show nothing unusual:

2023-10-16T09:01:58.986408318Z stderr F I1016 09:01:58.985714       1 server.go:224] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP"
2023-10-16T09:01:59.005822985Z stderr F I1016 09:01:59.005553       1 node.go:163] Successfully retrieved node IP: 10.10.10.7
2023-10-16T09:01:59.005877518Z stderr F I1016 09:01:59.005636       1 server_others.go:109] "Detected node IP" address="10.10.10.7"
2023-10-16T09:01:59.086419262Z stderr F I1016 09:01:59.086183       1 server_others.go:178] "Using iptables Proxier"
2023-10-16T09:01:59.086459521Z stderr F I1016 09:01:59.086245       1 server_others.go:185] "kube-proxy running in dual-stack mode" ipFamily=IPv4
2023-10-16T09:01:59.086467123Z stderr F I1016 09:01:59.086265       1 server_others.go:186] "Creating dualStackProxier for iptables"
2023-10-16T09:01:59.086473244Z stderr F I1016 09:01:59.086315       1 server_others.go:468] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
2023-10-16T09:01:59.087623335Z stderr F I1016 09:01:59.086380       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
2023-10-16T09:01:59.087864299Z stderr F I1016 09:01:59.087750       1 server.go:656] "Version info" version="v1.26.9+rke2r1"
2023-10-16T09:01:59.087881853Z stderr F I1016 09:01:59.087783       1 server.go:658] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2023-10-16T09:01:59.108680777Z stderr F I1016 09:01:59.092717       1 config.go:317] "Starting service config controller"
2023-10-16T09:01:59.108731108Z stderr F I1016 09:01:59.092757       1 shared_informer.go:270] Waiting for caches to sync for service config
2023-10-16T09:01:59.108870026Z stderr F I1016 09:01:59.092703       1 config.go:444] "Starting node config controller"
2023-10-16T09:01:59.108951078Z stderr F I1016 09:01:59.108643       1 shared_informer.go:270] Waiting for caches to sync for node config
2023-10-16T09:01:59.1089826Z stderr F I1016 09:01:59.108867       1 config.go:226] "Starting endpoint slice config controller"
2023-10-16T09:01:59.108993238Z stderr F I1016 09:01:59.108907       1 shared_informer.go:270] Waiting for caches to sync for endpoint slice config
2023-10-16T09:01:59.193907937Z stderr F I1016 09:01:59.193666       1 shared_informer.go:277] Caches are synced for service config
2023-10-16T09:01:59.210241661Z stderr F I1016 09:01:59.209971       1 shared_informer.go:277] Caches are synced for endpoint slice config
2023-10-16T09:01:59.220358871Z stderr F I1016 09:01:59.220140       1 shared_informer.go:277] Caches are synced for node config

On the same node, in the middle of the night, I interestingly have a restart:

Oct 22 23:28:26 system-b4aw0e systemd[1]: Stopping Rancher Kubernetes Engine v2 (agent)...
Oct 22 23:28:26 system-b4aw0e rke2[5896]: time="2023-10-22T23:28:26Z" level=info msg="Proxy done" err="context canceled" url="wss://10.10.10.5:9345/v1-rke2/connect"
Oct 22 23:28:26 system-b4aw0e rke2[5896]: time="2023-10-22T23:28:26Z" level=info msg="Proxy done" err="context canceled" url="wss://10.10.10.6:9345/v1-rke2/connect"
Oct 22 23:28:26 system-b4aw0e rke2[5896]: time="2023-10-22T23:28:26Z" level=info msg="Proxy done" err="context canceled" url="wss://10.10.10.12:9345/v1-rke2/connect"
Oct 22 23:28:26 system-b4aw0e rke2[5896]: time="2023-10-22T23:28:26Z" level=fatal msg="context canceled"
Oct 22 23:28:27 system-b4aw0e systemd[1]: rke2-agent.service: Main process exited, code=exited, status=1/FAILURE
Oct 22 23:28:27 system-b4aw0e systemd[1]: rke2-agent.service: Failed with result 'exit-code'.
Oct 22 23:28:27 system-b4aw0e systemd[1]: rke2-agent.service: Unit process 17277 (containerd-shim) remains running after unit stopped.
Oct 22 23:28:27 system-b4aw0e systemd[1]: Stopped Rancher Kubernetes Engine v2 (agent).
Oct 22 23:28:27 system-b4aw0e systemd[1]: rke2-agent.service: Consumed 13h 49min 13.749s CPU time.
-- Boot a2cd8601bc224340a84bc11f781558f6 --

@brandond
Contributor

in the middle of the night I interestingly have a restart

The service was explicitly stopped; can you tell why that happened?

@jhoelzel

jhoelzel commented Oct 23, 2023

Well, nothing really; I had everything up and running that night. That day I upgraded the masters for more CPU and RAM, so that wasn't it.

My current estimate is etcd memory or networking pressure. Even now I have only been able to recover one node. I have two with the proxy in Pending and two in Terminating.

The masters are explicitly tainted CriticalAddonsOnly=true:NoExecute, though, so the only thing actually running on them is rke2-server.

I have about 110 pods on those 12 servers and plenty of RAM and CPU to go around. My current guess is that I have bad neighbours on my droplets, but having three of them is like winning the lottery...

On the etcd pods I find these entries:

{"level":"info","ts":"2023-10-23T15:44:16.797596Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3433480833,"revision":9874039,"compact-revision":9871647}
{"level":"info","ts":"2023-10-23T15:45:41.853504Z","caller":"traceutil/trace.go:171","msg":"trace[1788653492] transaction","detail":"{read_only:false; response_revision:9877335; number_of_response:1; }","duration":"115.733661ms","start":"2023-10-23T15:45:41.737754Z","end":"2023-10-23T15:45:41.853488Z","steps":["trace[1788653492] 'process raft request'  (duration: 115.601948ms)"],"step_count":1}
{"level":"warn","ts":"2023-10-23T15:46:18.847013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.427468ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3263292622745542147 username:\"etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/configmaps/kube-system/rke2\" mod_revision:9877616 > success:<request_put:<key:\"/registry/configmaps/kube-system/rke2\" value_size:472 >> failure:<request_range:<key:\"/registry/configmaps/kube-system/rke2\" > >>","response":"size:20"}
{"level":"info","ts":"2023-10-23T15:46:18.847194Z","caller":"traceutil/trace.go:171","msg":"trace[377274894] transaction","detail":"{read_only:false; response_revision:9877634; number_of_response:1; }","duration":"203.585119ms","start":"2023-10-23T15:46:18.643581Z","end":"2023-10-23T15:46:18.847167Z","steps":["trace[377274894] 'process raft request'  (duration: 97.902647ms)","trace[377274894] 'compare'  (duration: 105.160537ms)"],"step_count":2}
{"level":"info","ts":"2023-10-23T15:46:18.847276Z","caller":"traceutil/trace.go:171","msg":"trace[1155188009] linearizableReadLoop","detail":"{readStateIndex:10548822; appliedIndex:10548821; }","duration":"189.498296ms","start":"2023-10-23T15:46:18.65776Z","end":"2023-10-23T15:46:18.847258Z","steps":["trace[1155188009] 'read index received'  (duration: 84.719185ms)","trace[1155188009] 'applied index is now lower than readState.Index'  (duration: 104.776863ms)"],"step_count":2}
{"level":"warn","ts":"2023-10-23T15:46:18.84744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.696492ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/teleport/teleport-kube-agent-1-state\" ","response":"range_response_count:1 size:29711"}
{"level":"info","ts":"2023-10-23T15:46:18.84748Z","caller":"traceutil/trace.go:171","msg":"trace[25927053] range","detail":"{range_begin:/registry/secrets/teleport/teleport-kube-agent-1-state; range_end:; response_count:1; response_revision:9877635; }","duration":"189.737267ms","start":"2023-10-23T15:46:18.657727Z","end":"2023-10-23T15:46:18.847465Z","steps":["trace[25927053] 'agreement among raft nodes before linearized reading'  (duration: 189.651099ms)"],"step_count":1}
{"level":"warn","ts":"2023-10-23T15:46:18.848344Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.71485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:9"}
{"level":"info","ts":"2023-10-23T15:46:18.848397Z","caller":"traceutil/trace.go:171","msg":"trace[810898257] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:9877635; }","duration":"111.77626ms","start":"2023-10-23T15:46:18.736607Z","end":"2023-10-23T15:46:18.848384Z","steps":["trace[810898257] 'agreement among raft nodes before linearized reading'  (duration: 111.668402ms)"],"step_count":1}
{"level":"warn","ts":"2023-10-23T15:46:19.11453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.49656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/rke2-etcd\" ","response":"range_response_count:1 size:647"}
{"level":"info","ts":"2023-10-23T15:46:19.114619Z","caller":"traceutil/trace.go:171","msg":"trace[1923572501] range","detail":"{range_begin:/registry/configmaps/kube-system/rke2-etcd; range_end:; response_count:1; response_revision:9877641; }","duration":"155.598813ms","start":"2023-10-23T15:46:18.959003Z","end":"2023-10-23T15:46:19.114602Z","steps":["trace[1923572501] 'range keys from in-memory index tree'  (duration: 154.220114ms)"],"step_count":1}
{"level":"info","ts":"2023-10-23T15:48:10.575293Z","caller":"traceutil/trace.go:171","msg":"trace[1712810765] transaction","detail":"{read_only:false; response_revision:9878539; number_of_response:1; }","duration":"134.374164ms","start":"2023-10-23T15:48:10.440901Z","end":"2023-10-23T15:48:10.575276Z","steps":["trace[1712810765] 'process raft request'  (duration: 134.254141ms)"],"step_count":1}
{"level":"warn","ts":"2023-10-23T15:48:10.994234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.398182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/teleport/teleport-kube-agent-1-state\" ","response":"range_response_count:1 size:29711"}
{"level":"warn","ts":"2023-10-23T15:48:10.994256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.426554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:9"}
{"level":"info","ts":"2023-10-23T15:48:10.994297Z","caller":"traceutil/trace.go:171","msg":"trace[2047221455] range","detail":"{range_begin:/registry/secrets/teleport/teleport-kube-agent-1-state; range_end:; response_count:1; response_revision:9878541; }","duration":"204.483749ms","start":"2023-10-23T15:48:10.7898Z","end":"2023-10-23T15:48:10.994284Z","steps":["trace[2047221455] 'range keys from in-memory index tree'  (duration: 203.019854ms)"],"step_count":1}
{"level":"info","ts":"2023-10-23T15:48:10.994314Z","caller":"traceutil/trace.go:171","msg":"trace[355881262] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:9878541; }","duration":"177.499514ms","start":"2023-10-23T15:48:10.816803Z","end":"2023-10-23T15:48:10.994303Z","steps":["trace[355881262] 'count revisions from in-memory index tree'  (duration: 176.053705ms)"],"step_count":1}
{"level":"warn","ts":"2023-10-23T15:48:44.711187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.871497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/calico-cni-plugin\" ","response":"range_response_count:1 size:791"}
{"level":"info","ts":"2023-10-23T15:48:44.711293Z","caller":"traceutil/trace.go:171","msg":"trace[1765193848] range","detail":"{range_begin:/registry/clusterroles/calico-cni-plugin; range_end:; response_count:1; response_revision:9878835; }","duration":"122.994847ms","start":"2023-10-23T15:48:44.588281Z","end":"2023-10-23T15:48:44.711276Z","steps":["trace[1765193848] 'range keys from in-memory index tree'  (duration: 121.514557ms)"],"step_count":1}
{"level":"info","ts":"2023-10-23T15:48:44.937441Z","caller":"traceutil/trace.go:171","msg":"trace[278923898] transaction","detail":"{read_only:false; response_revision:9878837; number_of_response:1; }","duration":"130.867588ms","start":"2023-10-23T15:48:44.80655Z","end":"2023-10-23T15:48:44.937418Z","steps":["trace[278923898] 'process raft request'  (duration: 121.494896ms)"],"step_count":1}
{"level":"info","ts":"2023-10-23T15:48:45.640516Z","caller":"wal/wal.go:785","msg":"created a new WAL segment","path":"/var/lib/rancher/rke2/server/db/etcd/member/wal/000000000000016c-0000000000a0fb6d.wal"}
{"level":"info","ts":"2023-10-23T15:48:52.095184Z","caller":"fileutil/purge.go:85","msg":"purged","path":"/var/lib/rancher/rke2/server/db/etcd/member/wal/0000000000000167-00000000009dccd9.wal"}

So my requests also seem to take too long, even though they are running on 8 GB / 4 CPU droplets.

@jhoelzel

Another interesting tidbit: I just found some pods that reported Ready, but their containers are in fact not running (screenshot attached).

@jhoelzel

jhoelzel commented Oct 30, 2023

I still have not found a way around this, even though I upgraded to masters with more CPUs, with dedicated vCPUs instead of shared cores.

Again, this happened to two out of 12 nodes.

New Notes:

  • It happens on Sunday nights, so I'm fairly sure it has something to do with the scheduled kured reboot on Sundays.
  • Again, out of 12 nodes I'm stuck with two. These should not have been updated significantly, because they were freshly updated during install on Friday.
  • The actual load on the cluster is minimal; it's basically Prometheus monitoring a whole lot of nothing. Even the applications had been deactivated.

Logs:

deploy@nyc3-system-1-dwv5a:~> cat /var/lib/rancher/rke2/agent/logs/kubelet.log | grep "kube-proxy"
I1027 21:05:47.915167    1688 topology_manager.go:210] "Topology Admit Handler" podUID=59284711bbd42ebb6958ed16187634c5 podNamespace="kube-system" podName="kube-proxy-nyc3-system-1-dwv5a"
I1027 21:05:47.935688    1688 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file0\" (UniqueName: \"kubernetes.io/host-path/59284711bbd42ebb6958ed16187634c5-file0\") pod \"kube-proxy-nyc3-system-1-dwv5a\" (UID: \"59284711bbd42ebb6958ed16187634c5\") " pod="kube-system/kube-proxy-nyc3-system-1-dwv5a"
I1027 21:05:47.935772    1688 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file1\" (UniqueName: \"kubernetes.io/host-path/59284711bbd42ebb6958ed16187634c5-file1\") pod \"kube-proxy-nyc3-system-1-dwv5a\" (UID: \"59284711bbd42ebb6958ed16187634c5\") " pod="kube-system/kube-proxy-nyc3-system-1-dwv5a"
I1027 21:05:48.036956    1688 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file2\" (UniqueName: \"kubernetes.io/host-path/59284711bbd42ebb6958ed16187634c5-file2\") pod \"kube-proxy-nyc3-system-1-dwv5a\" (UID: \"59284711bbd42ebb6958ed16187634c5\") " pod="kube-system/kube-proxy-nyc3-system-1-dwv5a"
I1027 21:05:48.037030    1688 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file3\" (UniqueName: \"kubernetes.io/host-path/59284711bbd42ebb6958ed16187634c5-file3\") pod \"kube-proxy-nyc3-system-1-dwv5a\" (UID: \"59284711bbd42ebb6958ed16187634c5\") " pod="kube-system/kube-proxy-nyc3-system-1-dwv5a"
I1027 21:05:57.863316    1688 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nyc3-system-1-dwv5a" podStartSLOduration=9.863242855 pod.CreationTimestamp="2023-10-27 21:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-27 21:05:57.863243762 +0000 UTC m=+10.497042935" watchObservedRunningTime="2023-10-27 21:05:57.863242855 +0000 UTC m=+10.497042033"
E1030 00:34:59.056212    1688 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59284711bbd42ebb6958ed16187634c5" containerName="kube-proxy"
I1030 00:34:59.056278    1688 memory_manager.go:346] "RemoveStaleState removing state" podUID="59284711bbd42ebb6958ed16187634c5" containerName="kube-proxy"
I1030 00:35:52.611605    1616 topology_manager.go:210] "Topology Admit Handler" podUID=59284711bbd42ebb6958ed16187634c5 podNamespace="kube-system" podName="kube-proxy-nyc3-system-1-dwv5a"
I1030 00:35:52.611756    1616 topology_manager.go:210] "Topology Admit Handler" podUID=f270cfde5639c570db4547b5d715659a podNamespace="kube-system" podName="kube-proxy-nyc3-system-1-dwv5a"
E1030 00:35:52.660227    1616 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-proxy-nyc3-system-1-dwv5a\" already exists" pod="kube-system/kube-proxy-nyc3-system-1-dwv5a"
I1030 00:35:52.695134    1616 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file0\" (UniqueName: \"kubernetes.io/host-path/f270cfde5639c570db4547b5d715659a-file0\") pod \"kube-proxy-nyc3-system-1-dwv5a\" (UID: \"f270cfde5639c570db4547b5d715659a\") " pod="kube-system/kube-proxy-nyc3-system-1-dwv5a"
I1030 00:35:52.795514    1616 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file3\" (UniqueName: \"kubernetes.io/host-path/f270cfde5639c570db4547b5d715659a-file3\") pod \"kube-proxy-nyc3-system-1-dwv5a\" (UID: \"f270cfde5639c570db4547b5d715659a\") " pod="kube-system/kube-proxy-nyc3-system-1-dwv5a"
I1030 00:35:52.795728    1616 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file1\" (UniqueName: \"kubernetes.io/host-path/f270cfde5639c570db4547b5d715659a-file1\") pod \"kube-proxy-nyc3-system-1-dwv5a\" (UID: \"f270cfde5639c570db4547b5d715659a\") " pod="kube-system/kube-proxy-nyc3-system-1-dwv5a"
I1030 00:35:52.795774    1616 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file2\" (UniqueName: \"kubernetes.io/host-path/f270cfde5639c570db4547b5d715659a-file2\") pod \"kube-proxy-nyc3-system-1-dwv5a\" (UID: \"f270cfde5639c570db4547b5d715659a\") " pod="kube-system/kube-proxy-nyc3-system-1-dwv5a"
E1030 00:37:55.661050    1616 kubelet.go:1821] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[file3 file0 file1 file2], unattached volumes=[file3 file0 file1 file2]: timed out waiting for the condition" pod="kube-system/kube-proxy-nyc3-system-1-dwv5a"
E1030 00:37:55.661115    1616 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[file3 file0 file1 file2], unattached volumes=[file3 file0 file1 file2]: timed out waiting for the condition" pod="kube-system/kube-proxy-nyc3-system-1-dwv5a" podUID=59284711bbd42ebb6958ed16187634c5
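The kubelet excerpt above shows the underlying conflict: two different pod UIDs (59284711… and f270cfde…) were both admitted for the same static pod name after the reboot, and the sync for the stale UID then times out on volumes that belong to the new one. A hypothetical sketch for scanning a kubelet log for such duplicate admissions, parsing the `Topology Admit Handler` lines in the format shown above (regex and function name are mine, not part of any kubelet tooling):

```python
import re
from collections import defaultdict

# Matches: "Topology Admit Handler" podUID=<uid> ... podName="<name>"
ADMIT_RE = re.compile(r'"Topology Admit Handler" podUID=(\S+) .*podName="([^"]+)"')

def duplicate_admissions(log_lines):
    """Return {pod name: set of UIDs} for names admitted under more than one UID."""
    uids = defaultdict(set)
    for line in log_lines:
        m = ADMIT_RE.search(line)
        if m:
            uid, name = m.group(1), m.group(2)
            uids[name].add(uid)
    return {name: s for name, s in uids.items() if len(s) > 1}
```

Running this over `/var/lib/rancher/rke2/agent/logs/kubelet.log` on an affected node should surface exactly the kube-proxy name with both UIDs, which matches the "Failed creating a mirror pod ... already exists" error.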

containerd.log around the same time:

time="2023-10-30T00:34:57.788594992Z" level=info msg="StopContainer for \"7e8cea24724232dce1b5018df912c277c6442e7149a1391e8de26792347a0dde\" with timeout 20 (s)"
time="2023-10-30T00:34:57.788753734Z" level=info msg="StopContainer for \"46afa0022d0b9bf0af4a7c38266bbb7fee9b7a25ce593c6ac39696ab00c1f38b\" with timeout 20 (s)"
time="2023-10-30T00:34:57.788850722Z" level=info msg="StopContainer for \"19177a7c0ba079856d0d1f5160e128490a59f3b3744a1aefdd240af043509c06\" with timeout 20 (s)"
time="2023-10-30T00:34:57.788953425Z" level=info msg="StopContainer for \"bbb8616af6a5e104b8f58d65496befc0d2e3fdcf375d22a32b27bcbde84cefaf\" with timeout 20 (s)"
time="2023-10-30T00:34:57.789104570Z" level=info msg="StopContainer for \"1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd\" with timeout 20 (s)"
time="2023-10-30T00:34:57.789112902Z" level=info msg="Stop container \"7e8cea24724232dce1b5018df912c277c6442e7149a1391e8de26792347a0dde\" with signal terminated"
time="2023-10-30T00:34:57.789531390Z" level=info msg="Stop container \"46afa0022d0b9bf0af4a7c38266bbb7fee9b7a25ce593c6ac39696ab00c1f38b\" with signal terminated"
time="2023-10-30T00:34:57.789680656Z" level=info msg="Stop container \"19177a7c0ba079856d0d1f5160e128490a59f3b3744a1aefdd240af043509c06\" with signal terminated"
time="2023-10-30T00:34:57.789723806Z" level=info msg="Stop container \"1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd\" with signal terminated"
time="2023-10-30T00:34:57.789762581Z" level=info msg="Stop container \"bbb8616af6a5e104b8f58d65496befc0d2e3fdcf375d22a32b27bcbde84cefaf\" with signal terminated"
time="2023-10-30T00:34:57.926611483Z" level=info msg="shim disconnected" id=19177a7c0ba079856d0d1f5160e128490a59f3b3744a1aefdd240af043509c06 namespace=k8s.io
time="2023-10-30T00:34:57.926866576Z" level=warning msg="cleaning up after shim disconnected" id=19177a7c0ba079856d0d1f5160e128490a59f3b3744a1aefdd240af043509c06 namespace=k8s.io
time="2023-10-30T00:34:57.926895182Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:34:57.959013355Z" level=info msg="shim disconnected" id=1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd namespace=k8s.io
time="2023-10-30T00:34:57.959092485Z" level=warning msg="cleaning up after shim disconnected" id=1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd namespace=k8s.io
time="2023-10-30T00:34:57.959112143Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:34:57.966042247Z" level=info msg="shim disconnected" id=bbb8616af6a5e104b8f58d65496befc0d2e3fdcf375d22a32b27bcbde84cefaf namespace=k8s.io
time="2023-10-30T00:34:57.966145878Z" level=warning msg="cleaning up after shim disconnected" id=bbb8616af6a5e104b8f58d65496befc0d2e3fdcf375d22a32b27bcbde84cefaf namespace=k8s.io
time="2023-10-30T00:34:57.966157447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:34:58.005794502Z" level=info msg="StopContainer for \"19177a7c0ba079856d0d1f5160e128490a59f3b3744a1aefdd240af043509c06\" returns successfully"
time="2023-10-30T00:34:58.006515995Z" level=info msg="StopPodSandbox for \"41b1bee0504caef7fdbc4c1f853cd89edf0091c06c4931ede46e08a09a19d828\""
time="2023-10-30T00:34:58.006617930Z" level=info msg="Container to stop \"19177a7c0ba079856d0d1f5160e128490a59f3b3744a1aefdd240af043509c06\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:34:58.022986435Z" level=info msg="StopContainer for \"1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd\" returns successfully"
time="2023-10-30T00:34:58.023623062Z" level=info msg="StopPodSandbox for \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\""
time="2023-10-30T00:34:58.023716333Z" level=info msg="Container to stop \"1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:34:58.041875368Z" level=info msg="StopContainer for \"bbb8616af6a5e104b8f58d65496befc0d2e3fdcf375d22a32b27bcbde84cefaf\" returns successfully"
time="2023-10-30T00:34:58.042556193Z" level=info msg="StopPodSandbox for \"231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e\""
time="2023-10-30T00:34:58.042656055Z" level=info msg="Container to stop \"bbb8616af6a5e104b8f58d65496befc0d2e3fdcf375d22a32b27bcbde84cefaf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:34:58.110580912Z" level=info msg="shim disconnected" id=41b1bee0504caef7fdbc4c1f853cd89edf0091c06c4931ede46e08a09a19d828 namespace=k8s.io
time="2023-10-30T00:34:58.110680974Z" level=warning msg="cleaning up after shim disconnected" id=41b1bee0504caef7fdbc4c1f853cd89edf0091c06c4931ede46e08a09a19d828 namespace=k8s.io
time="2023-10-30T00:34:58.110705532Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:34:58.171939332Z" level=info msg="shim disconnected" id=25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3 namespace=k8s.io
time="2023-10-30T00:34:58.172021218Z" level=warning msg="cleaning up after shim disconnected" id=25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3 namespace=k8s.io
time="2023-10-30T00:34:58.172033611Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:34:58.185723054Z" level=info msg="TearDown network for sandbox \"41b1bee0504caef7fdbc4c1f853cd89edf0091c06c4931ede46e08a09a19d828\" successfully"
time="2023-10-30T00:34:58.185781354Z" level=info msg="StopPodSandbox for \"41b1bee0504caef7fdbc4c1f853cd89edf0091c06c4931ede46e08a09a19d828\" returns successfully"
time="2023-10-30T00:34:58.238639280Z" level=info msg="TearDown network for sandbox \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\" successfully"
time="2023-10-30T00:34:58.238695448Z" level=info msg="StopPodSandbox for \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\" returns successfully"
time="2023-10-30T00:34:58.243144456Z" level=info msg="shim disconnected" id=231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e namespace=k8s.io
time="2023-10-30T00:34:58.243245111Z" level=warning msg="cleaning up after shim disconnected" id=231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e namespace=k8s.io
time="2023-10-30T00:34:58.243256987Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:34:58.264224318Z" level=info msg="shim disconnected" id=7e8cea24724232dce1b5018df912c277c6442e7149a1391e8de26792347a0dde namespace=k8s.io
time="2023-10-30T00:34:58.264334560Z" level=warning msg="cleaning up after shim disconnected" id=7e8cea24724232dce1b5018df912c277c6442e7149a1391e8de26792347a0dde namespace=k8s.io
time="2023-10-30T00:34:58.264349116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:34:58.330923393Z" level=info msg="StopContainer for \"7e8cea24724232dce1b5018df912c277c6442e7149a1391e8de26792347a0dde\" returns successfully"
time="2023-10-30T00:34:58.331515334Z" level=info msg="StopPodSandbox for \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\""
time="2023-10-30T00:34:58.331591191Z" level=info msg="Container to stop \"7e8cea24724232dce1b5018df912c277c6442e7149a1391e8de26792347a0dde\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:34:58.443643636Z" level=info msg="shim disconnected" id=4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320 namespace=k8s.io
time="2023-10-30T00:34:58.443726609Z" level=warning msg="cleaning up after shim disconnected" id=4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320 namespace=k8s.io
time="2023-10-30T00:34:58.443746937Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:34:58.524078027Z" level=info msg="TearDown network for sandbox \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\" successfully"
time="2023-10-30T00:34:58.524148924Z" level=info msg="StopPodSandbox for \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\" returns successfully"
2023-10-30 00:34:58.384 [INFO][7526] k8s.go 576: Cleaning up netns ContainerID="231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e"
2023-10-30 00:34:58.385 [INFO][7526] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e" iface="eth0" netns="/var/run/netns/cni-bbf1944e-5307-d677-8a7b-cc12083be4b8"
2023-10-30 00:34:58.385 [INFO][7526] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e" iface="eth0" netns="/var/run/netns/cni-bbf1944e-5307-d677-8a7b-cc12083be4b8"
2023-10-30 00:34:58.443 [INFO][7526] dataplane_linux.go 569: Deleted device in netns. ContainerID="231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e" after=58.263075ms iface="eth0" netns="/var/run/netns/cni-bbf1944e-5307-d677-8a7b-cc12083be4b8"
2023-10-30 00:34:58.444 [INFO][7526] k8s.go 583: Releasing IP address(es) ContainerID="231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e"
2023-10-30 00:34:58.444 [INFO][7526] utils.go 195: Calico CNI releasing IP address ContainerID="231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e"
2023-10-30 00:34:58.497 [INFO][7559] ipam_plugin.go 415: Releasing address using handleID ContainerID="231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e" HandleID="k8s-pod-network.231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e" Workload="nyc3--system--1--dwv5a-k8s-kured--d8rh6-eth0"
2023-10-30 00:34:58.498 [INFO][7559] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
2023-10-30 00:34:58.498 [INFO][7559] ipam_plugin.go 371: Acquired host-wide IPAM lock.
2023-10-30 00:34:58.555 [INFO][7559] ipam_plugin.go 434: Released address using handleID ContainerID="231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e" HandleID="k8s-pod-network.231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e" Workload="nyc3--system--1--dwv5a-k8s-kured--d8rh6-eth0"
2023-10-30 00:34:58.556 [INFO][7559] ipam_plugin.go 443: Releasing address using workloadID ContainerID="231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e" HandleID="k8s-pod-network.231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e" Workload="nyc3--system--1--dwv5a-k8s-kured--d8rh6-eth0"
2023-10-30 00:34:58.558 [INFO][7559] ipam_plugin.go 377: Released host-wide IPAM lock.
2023-10-30 00:34:58.560 [INFO][7526] k8s.go 589: Teardown processing complete. ContainerID="231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e"
time="2023-10-30T00:34:58.563611455Z" level=info msg="TearDown network for sandbox \"231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e\" successfully"
time="2023-10-30T00:34:58.563664846Z" level=info msg="StopPodSandbox for \"231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e\" returns successfully"
time="2023-10-30T00:34:59.818757694Z" level=info msg="StopContainer for \"1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd\" with timeout 1 (s)"
time="2023-10-30T00:34:59.818842379Z" level=info msg="Container to stop \"1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:34:59.818944171Z" level=info msg="StopContainer for \"1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd\" returns successfully"
time="2023-10-30T00:34:59.819398852Z" level=info msg="StopPodSandbox for \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\""
time="2023-10-30T00:34:59.819481325Z" level=info msg="Container to stop \"1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:34:59.819688643Z" level=info msg="TearDown network for sandbox \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\" successfully"
time="2023-10-30T00:34:59.819719303Z" level=info msg="StopPodSandbox for \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\" returns successfully"
time="2023-10-30T00:35:00.948461344Z" level=info msg="shim disconnected" id=46afa0022d0b9bf0af4a7c38266bbb7fee9b7a25ce593c6ac39696ab00c1f38b namespace=k8s.io
time="2023-10-30T00:35:00.948536898Z" level=warning msg="cleaning up after shim disconnected" id=46afa0022d0b9bf0af4a7c38266bbb7fee9b7a25ce593c6ac39696ab00c1f38b namespace=k8s.io
time="2023-10-30T00:35:00.948559147Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:35:01.017000154Z" level=info msg="StopContainer for \"46afa0022d0b9bf0af4a7c38266bbb7fee9b7a25ce593c6ac39696ab00c1f38b\" returns successfully"
time="2023-10-30T00:35:01.017808688Z" level=info msg="StopPodSandbox for \"9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211\""
time="2023-10-30T00:35:01.017907668Z" level=info msg="Container to stop \"46afa0022d0b9bf0af4a7c38266bbb7fee9b7a25ce593c6ac39696ab00c1f38b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:35:01.223090822Z" level=info msg="shim disconnected" id=9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211 namespace=k8s.io
time="2023-10-30T00:35:01.223182368Z" level=warning msg="cleaning up after shim disconnected" id=9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211 namespace=k8s.io
time="2023-10-30T00:35:01.223197683Z" level=info msg="cleaning up dead shim" namespace=k8s.io
2023-10-30 00:35:01.433 [INFO][7665] k8s.go 576: Cleaning up netns ContainerID="9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211"
2023-10-30 00:35:01.434 [INFO][7665] dataplane_linux.go 524: Deleting workload's device in netns. ContainerID="9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211" iface="eth0" netns="/var/run/netns/cni-b3fcb097-e37a-f19f-34f5-96a449ac88f6"
2023-10-30 00:35:01.434 [INFO][7665] dataplane_linux.go 535: Entered netns, deleting veth. ContainerID="9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211" iface="eth0" netns="/var/run/netns/cni-b3fcb097-e37a-f19f-34f5-96a449ac88f6"
2023-10-30 00:35:01.512 [INFO][7665] dataplane_linux.go 569: Deleted device in netns. ContainerID="9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211" after=77.647916ms iface="eth0" netns="/var/run/netns/cni-b3fcb097-e37a-f19f-34f5-96a449ac88f6"
2023-10-30 00:35:01.512 [INFO][7665] k8s.go 583: Releasing IP address(es) ContainerID="9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211"
2023-10-30 00:35:01.512 [INFO][7665] utils.go 195: Calico CNI releasing IP address ContainerID="9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211"
2023-10-30 00:35:01.562 [INFO][7676] ipam_plugin.go 415: Releasing address using handleID ContainerID="9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211" HandleID="k8s-pod-network.9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211" Workload="nyc3--system--1--dwv5a-k8s-teleport--kube--agent--0-eth0"
2023-10-30 00:35:01.562 [INFO][7676] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
2023-10-30 00:35:01.562 [INFO][7676] ipam_plugin.go 371: Acquired host-wide IPAM lock.
2023-10-30 00:35:01.649 [INFO][7676] ipam_plugin.go 434: Released address using handleID ContainerID="9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211" HandleID="k8s-pod-network.9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211" Workload="nyc3--system--1--dwv5a-k8s-teleport--kube--agent--0-eth0"
2023-10-30 00:35:01.649 [INFO][7676] ipam_plugin.go 443: Releasing address using workloadID ContainerID="9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211" HandleID="k8s-pod-network.9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211" Workload="nyc3--system--1--dwv5a-k8s-teleport--kube--agent--0-eth0"
2023-10-30 00:35:01.660 [INFO][7676] ipam_plugin.go 377: Released host-wide IPAM lock.
2023-10-30 00:35:01.662 [INFO][7665] k8s.go 589: Teardown processing complete. ContainerID="9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211"
time="2023-10-30T00:35:01.665972128Z" level=info msg="TearDown network for sandbox \"9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211\" successfully"
time="2023-10-30T00:35:01.666018294Z" level=info msg="StopPodSandbox for \"9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211\" returns successfully"
time="2023-10-30T00:35:17.788861923Z" level=info msg="StopContainer for \"9e3002a3f57a410241bf6990cb776914c5f3e61a843275911cfae6963c3d5788\" with timeout 30 (s)"
time="2023-10-30T00:35:17.788887506Z" level=info msg="StopContainer for \"d8943f5461a4bd0f754b857b930348f2f03445c8adfe394fd786f2efa2cac342\" with timeout 30 (s)"
time="2023-10-30T00:35:17.789602259Z" level=info msg="Stop container \"9e3002a3f57a410241bf6990cb776914c5f3e61a843275911cfae6963c3d5788\" with signal terminated"
time="2023-10-30T00:35:17.789677022Z" level=info msg="Stop container \"d8943f5461a4bd0f754b857b930348f2f03445c8adfe394fd786f2efa2cac342\" with signal terminated"
time="2023-10-30T00:35:17.938475840Z" level=info msg="shim disconnected" id=9e3002a3f57a410241bf6990cb776914c5f3e61a843275911cfae6963c3d5788 namespace=k8s.io
time="2023-10-30T00:35:17.938563735Z" level=warning msg="cleaning up after shim disconnected" id=9e3002a3f57a410241bf6990cb776914c5f3e61a843275911cfae6963c3d5788 namespace=k8s.io
time="2023-10-30T00:35:17.938575946Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:35:17.954207688Z" level=info msg="shim disconnected" id=d8943f5461a4bd0f754b857b930348f2f03445c8adfe394fd786f2efa2cac342 namespace=k8s.io
time="2023-10-30T00:35:17.954288800Z" level=warning msg="cleaning up after shim disconnected" id=d8943f5461a4bd0f754b857b930348f2f03445c8adfe394fd786f2efa2cac342 namespace=k8s.io
time="2023-10-30T00:35:17.954301182Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:35:17.967432339Z" level=info msg="StopContainer for \"9ae6a9821b8d150c9d3ddcf2fb356d4b993bb163fd1975acbadd869e5383b267\" with timeout 5 (s)"
time="2023-10-30T00:35:17.967827709Z" level=info msg="Stop container \"9ae6a9821b8d150c9d3ddcf2fb356d4b993bb163fd1975acbadd869e5383b267\" with signal terminated"
time="2023-10-30T00:35:18.021732665Z" level=info msg="StopContainer for \"9e3002a3f57a410241bf6990cb776914c5f3e61a843275911cfae6963c3d5788\" returns successfully"
time="2023-10-30T00:35:18.033573016Z" level=info msg="StopContainer for \"d8943f5461a4bd0f754b857b930348f2f03445c8adfe394fd786f2efa2cac342\" returns successfully"
time="2023-10-30T00:35:18.035420749Z" level=info msg="StopPodSandbox for \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\""
time="2023-10-30T00:35:18.035525332Z" level=info msg="Container to stop \"cdc14627439549cbb3b374b12c9a35a701b98d9315c12473781c928012e930a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:35:18.035561019Z" level=info msg="Container to stop \"d8943f5461a4bd0f754b857b930348f2f03445c8adfe394fd786f2efa2cac342\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:35:18.035581891Z" level=info msg="Container to stop \"9e3002a3f57a410241bf6990cb776914c5f3e61a843275911cfae6963c3d5788\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:35:18.108626070Z" level=info msg="shim disconnected" id=9ae6a9821b8d150c9d3ddcf2fb356d4b993bb163fd1975acbadd869e5383b267 namespace=k8s.io
time="2023-10-30T00:35:18.108699411Z" level=warning msg="cleaning up after shim disconnected" id=9ae6a9821b8d150c9d3ddcf2fb356d4b993bb163fd1975acbadd869e5383b267 namespace=k8s.io
time="2023-10-30T00:35:18.108712818Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:35:18.181672662Z" level=info msg="shim disconnected" id=4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89 namespace=k8s.io
time="2023-10-30T00:35:18.181749651Z" level=warning msg="cleaning up after shim disconnected" id=4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89 namespace=k8s.io
time="2023-10-30T00:35:18.181761049Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:35:18.188117596Z" level=info msg="StopContainer for \"9ae6a9821b8d150c9d3ddcf2fb356d4b993bb163fd1975acbadd869e5383b267\" returns successfully"
time="2023-10-30T00:35:18.188795699Z" level=info msg="StopPodSandbox for \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\""
time="2023-10-30T00:35:18.188889190Z" level=info msg="Container to stop \"1da33f152efab4ed94a27de1b211e784002ba565946cd1fd5b2a2393e4b69bff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:35:18.188932278Z" level=info msg="Container to stop \"9ae6a9821b8d150c9d3ddcf2fb356d4b993bb163fd1975acbadd869e5383b267\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:35:18.188950449Z" level=info msg="Container to stop \"ffc8f525831a84153a7be1072b1fb17c25323d7a678de9f610291df725f5e65c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:35:18.244483891Z" level=info msg="TearDown network for sandbox \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\" successfully"
time="2023-10-30T00:35:18.244554790Z" level=info msg="StopPodSandbox for \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\" returns successfully"
time="2023-10-30T00:35:18.307271603Z" level=info msg="shim disconnected" id=92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde namespace=k8s.io
time="2023-10-30T00:35:18.307453642Z" level=warning msg="cleaning up after shim disconnected" id=92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde namespace=k8s.io
time="2023-10-30T00:35:18.307468938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:35:18.362749901Z" level=info msg="TearDown network for sandbox \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\" successfully"
time="2023-10-30T00:35:18.362822591Z" level=info msg="StopPodSandbox for \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\" returns successfully"
time="2023-10-30T00:35:18.419842198Z" level=info msg="StopContainer for \"b3853ee6bb4406e90bce38914f4c5366274643a6df8ac41bde7e4702b8d50d46\" with timeout 20 (s)"
time="2023-10-30T00:35:18.424145658Z" level=info msg="Stop container \"b3853ee6bb4406e90bce38914f4c5366274643a6df8ac41bde7e4702b8d50d46\" with signal terminated"
time="2023-10-30T00:35:51.057856525Z" level=info msg="starting containerd" revision=383ce4e834e4d2ae5e1869475379e70618bdcc33 version=v1.7.3-k3s1
time="2023-10-30T00:35:51.093474228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T00:35:51.097988825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /usr/lib/modules/6.5.8-1-default\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2023-10-30T00:35:51.098061698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T00:35:51.098473806Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2023-10-30T00:35:51.098556759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T00:35:51.098599108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T00:35:51.098929792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.fuse-overlayfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T00:35:51.099003203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.stargz\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T00:35:51.100241855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T00:35:51.100284445Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2023-10-30T00:35:51.100303252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T00:35:51.100519068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2023-10-30T00:35:51.100551464Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2023-10-30T00:35:51.100580166Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2023-10-30T00:35:51.100595544Z" level=info msg="metadata content store policy set" policy=shared
time="2023-10-30T00:35:51.221546404Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2023-10-30T00:35:51.221623208Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
time="2023-10-30T00:35:51.221648870Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2023-10-30T00:35:51.221724581Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
time="2023-10-30T00:35:51.221762014Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
time="2023-10-30T00:35:51.221789863Z" level=info msg="NRI interface is disabled by configuration."
time="2023-10-30T00:35:51.221829349Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2023-10-30T00:35:51.222095026Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
time="2023-10-30T00:35:51.222126131Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
time="2023-10-30T00:35:51.222149451Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
time="2023-10-30T00:35:51.222169891Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
time="2023-10-30T00:35:51.222202134Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2023-10-30T00:35:51.222228427Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2023-10-30T00:35:51.222375586Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2023-10-30T00:35:51.222399381Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2023-10-30T00:35:51.222426056Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2023-10-30T00:35:51.222451877Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2023-10-30T00:35:51.222471730Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2023-10-30T00:35:51.222489750Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2023-10-30T00:35:51.222611428Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2023-10-30T00:35:51.223115981Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2023-10-30T00:35:51.223189658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223217364Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
time="2023-10-30T00:35:51.223272151Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2023-10-30T00:35:51.223402908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223440277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223476249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223501622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223522316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223541302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223559538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223579288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223603494Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2023-10-30T00:35:51.223685189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223710583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223727910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223744864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223762154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223781730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223798843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.223816377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
time="2023-10-30T00:35:51.224191776Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:index.docker.io/rancher/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:true EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:1m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/rancher/rke2/agent/containerd ContainerdEndpoint:/run/k3s/containerd/containerd.sock RootDir:/var/lib/rancher/rke2/agent/containerd/io.containerd.grpc.v1.cri StateDir:/run/k3s/containerd/io.containerd.grpc.v1.cri}"
time="2023-10-30T00:35:51.224276944Z" level=info msg="Connect containerd service"
time="2023-10-30T00:35:51.224322241Z" level=info msg="using legacy CRI server"
time="2023-10-30T00:35:51.224331770Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
time="2023-10-30T00:35:51.224400356Z" level=info msg="Get image filesystem path \"/var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.overlayfs\""
time="2023-10-30T00:35:51.226975727Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
time="2023-10-30T00:35:51.227033253Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
time="2023-10-30T00:35:51.227055077Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
time="2023-10-30T00:35:51.227072612Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
time="2023-10-30T00:35:51.227140230Z" level=info msg="Start subscribing containerd event"
time="2023-10-30T00:35:51.227742341Z" level=info msg="Start recovering state"
time="2023-10-30T00:35:51.228765853Z" level=info msg=serving... address=/run/k3s/containerd/containerd.sock.ttrpc
time="2023-10-30T00:35:51.228848345Z" level=info msg=serving... address=/run/k3s/containerd/containerd.sock
time="2023-10-30T00:35:51.384819979Z" level=info msg="Start event monitor"
time="2023-10-30T00:35:51.384881772Z" level=info msg="Start snapshots syncer"
time="2023-10-30T00:35:51.384898231Z" level=info msg="Start cni network conf syncer for default"
time="2023-10-30T00:35:51.384908716Z" level=info msg="Start streaming server"
time="2023-10-30T00:35:51.384983283Z" level=info msg="containerd successfully booted in 0.328258s"
time="2023-10-30T00:35:52.405246764Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
time="2023-10-30T00:35:53.616267848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-prometheus-stack-prometheus-node-exporter-tcsb2,Uid:a81aa0ac-86c7-4fbf-9bdf-cfb86567d39e,Namespace:kube-prometheus-exporter,Attempt:0,}"
time="2023-10-30T00:35:53.716855604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-10-30T00:35:53.717146364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-10-30T00:35:53.717168780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-10-30T00:35:53.717183682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-10-30T00:35:53.917686827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-prometheus-stack-prometheus-node-exporter-tcsb2,Uid:a81aa0ac-86c7-4fbf-9bdf-cfb86567d39e,Namespace:kube-prometheus-exporter,Attempt:0,} returns sandbox id \"875e93f7b26fecc8e606ca3135828690388420accfae60de5889bcba6264c6bf\""
time="2023-10-30T00:35:53.919786346Z" level=info msg="PullImage \"quay.io/prometheus/node-exporter:v1.6.1\""
time="2023-10-30T00:35:54.118613576Z" level=info msg="ImageUpdate event name:\"quay.io/prometheus/node-exporter:v1.6.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:54.127755572Z" level=info msg="stop pulling image quay.io/prometheus/node-exporter:v1.6.1: active requests=0, bytes read=0"
time="2023-10-30T00:35:54.136187172Z" level=info msg="ImageUpdate event name:\"sha256:458e026e6aa62a8da4522cb09766da69d7365ebeb456d5a43a214fc6bd232a3c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:54.147152692Z" level=info msg="ImageUpdate event name:\"quay.io/prometheus/node-exporter:v1.6.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:54.157608901Z" level=info msg="ImageUpdate event name:\"quay.io/prometheus/node-exporter@sha256:81f94e50ea37a88dfee849d0f4acad25b96b397061f59e5095905f6bc5829637\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:54.159884716Z" level=info msg="Pulled image \"quay.io/prometheus/node-exporter:v1.6.1\" with image id \"sha256:458e026e6aa62a8da4522cb09766da69d7365ebeb456d5a43a214fc6bd232a3c\", repo tag \"quay.io/prometheus/node-exporter:v1.6.1\", repo digest \"quay.io/prometheus/node-exporter@sha256:81f94e50ea37a88dfee849d0f4acad25b96b397061f59e5095905f6bc5829637\", size \"11725464\" in 240.047401ms"
time="2023-10-30T00:35:54.159967617Z" level=info msg="PullImage \"quay.io/prometheus/node-exporter:v1.6.1\" returns image reference \"sha256:458e026e6aa62a8da4522cb09766da69d7365ebeb456d5a43a214fc6bd232a3c\""
time="2023-10-30T00:35:54.163513347Z" level=info msg="CreateContainer within sandbox \"875e93f7b26fecc8e606ca3135828690388420accfae60de5889bcba6264c6bf\" for container &ContainerMetadata{Name:node-exporter,Attempt:0,}"
time="2023-10-30T00:35:54.232088037Z" level=info msg="CreateContainer within sandbox \"875e93f7b26fecc8e606ca3135828690388420accfae60de5889bcba6264c6bf\" for &ContainerMetadata{Name:node-exporter,Attempt:0,} returns container id \"eba8e768119cce13111d6cd0d99877e518c140912be6075ef81bc5748df49bfd\""
time="2023-10-30T00:35:54.233216992Z" level=info msg="StartContainer for \"eba8e768119cce13111d6cd0d99877e518c140912be6075ef81bc5748df49bfd\""
time="2023-10-30T00:35:54.446379551Z" level=info msg="StartContainer for \"eba8e768119cce13111d6cd0d99877e518c140912be6075ef81bc5748df49bfd\" returns successfully"
time="2023-10-30T00:35:56.148532531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-do-node-sdg7v,Uid:0a08201c-8a7e-4581-87b1-c3b8b9e940ca,Namespace:digitalocean,Attempt:0,}"
time="2023-10-30T00:35:56.259524124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-10-30T00:35:56.259749014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-10-30T00:35:56.259777810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-10-30T00:35:56.259792525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-10-30T00:35:56.440078480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:k8s-staticroute-operator-njl95,Uid:8d0e2210-bab8-4314-82ab-bf898d68dac6,Namespace:digitalocean,Attempt:0,}"
time="2023-10-30T00:35:56.450864095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-do-node-sdg7v,Uid:0a08201c-8a7e-4581-87b1-c3b8b9e940ca,Namespace:digitalocean,Attempt:0,} returns sandbox id \"d3c9102d9b7c2d4537482eb12b968038fd20159322cab9ab51c4435c916865c9\""
time="2023-10-30T00:35:56.453259890Z" level=info msg="PullImage \"alpine:3\""
time="2023-10-30T00:35:56.557079933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-10-30T00:35:56.557208394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-10-30T00:35:56.557234981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-10-30T00:35:56.557251833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-10-30T00:35:56.630716978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kb4ln,Uid:01b48840-ec37-4e5b-a778-f62bf1d2a273,Namespace:calico-system,Attempt:0,}"
time="2023-10-30T00:35:56.770080571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:k8s-staticroute-operator-njl95,Uid:8d0e2210-bab8-4314-82ab-bf898d68dac6,Namespace:digitalocean,Attempt:0,} returns sandbox id \"c967b23de55ed1e7750c26c9338a12381fd20616645ef3283be57ebd4dd41e03\""
time="2023-10-30T00:35:56.772382379Z" level=info msg="PullImage \"digitalocean/k8s-staticroute-operator:v1.0.0\""
time="2023-10-30T00:35:56.790463962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-10-30T00:35:56.790592320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-10-30T00:35:56.790647911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-10-30T00:35:56.790671729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-10-30T00:35:56.893870538Z" level=info msg="ImageUpdate event name:\"docker.io/library/alpine:3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:56.900998194Z" level=info msg="stop pulling image docker.io/library/alpine:3: active requests=0, bytes read=4395"
time="2023-10-30T00:35:56.906698562Z" level=info msg="ImageUpdate event name:\"sha256:8ca4688f4f356596b5ae539337c9941abc78eda10021d35cbc52659c74d9b443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:56.912646565Z" level=info msg="ImageUpdate event name:\"docker.io/library/alpine:3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:56.920906391Z" level=info msg="ImageUpdate event name:\"docker.io/library/alpine@sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:56.923044247Z" level=info msg="Pulled image \"alpine:3\" with image id \"sha256:8ca4688f4f356596b5ae539337c9941abc78eda10021d35cbc52659c74d9b443\", repo tag \"docker.io/library/alpine:3\", repo digest \"docker.io/library/alpine@sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978\", size \"3405605\" in 469.727136ms"
time="2023-10-30T00:35:56.923215981Z" level=info msg="PullImage \"alpine:3\" returns image reference \"sha256:8ca4688f4f356596b5ae539337c9941abc78eda10021d35cbc52659c74d9b443\""
time="2023-10-30T00:35:56.926693840Z" level=info msg="CreateContainer within sandbox \"d3c9102d9b7c2d4537482eb12b968038fd20159322cab9ab51c4435c916865c9\" for container &ContainerMetadata{Name:automount-udev-deleter,Attempt:0,}"
time="2023-10-30T00:35:57.004779172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kb4ln,Uid:01b48840-ec37-4e5b-a778-f62bf1d2a273,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\""
time="2023-10-30T00:35:57.007415611Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1\""
time="2023-10-30T00:35:57.013570780Z" level=info msg="CreateContainer within sandbox \"d3c9102d9b7c2d4537482eb12b968038fd20159322cab9ab51c4435c916865c9\" for &ContainerMetadata{Name:automount-udev-deleter,Attempt:0,} returns container id \"4905c18e88992ac1d4cf57148d62686cce74e84fd62aa9206838120cb9afd122\""
time="2023-10-30T00:35:57.014132959Z" level=info msg="StartContainer for \"4905c18e88992ac1d4cf57148d62686cce74e84fd62aa9206838120cb9afd122\""
time="2023-10-30T00:35:57.045166508Z" level=info msg="ImageUpdate event name:\"docker.io/digitalocean/k8s-staticroute-operator:v1.0.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.049299556Z" level=info msg="stop pulling image docker.io/digitalocean/k8s-staticroute-operator:v1.0.0: active requests=0, bytes read=4457"
time="2023-10-30T00:35:57.054371666Z" level=info msg="ImageUpdate event name:\"sha256:6b1004bb4727cdd6668b4fc9adcd1f629bdeabc804f68ec37c6ed32cc9afb601\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.063134857Z" level=info msg="ImageUpdate event name:\"docker.io/digitalocean/k8s-staticroute-operator:v1.0.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.068849014Z" level=info msg="ImageUpdate event name:\"docker.io/digitalocean/k8s-staticroute-operator@sha256:406f5e647245232ab6608a952bb0789c2ea22dc998aa15cb109bf7cbdb8809a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.070429889Z" level=info msg="Pulled image \"digitalocean/k8s-staticroute-operator:v1.0.0\" with image id \"sha256:6b1004bb4727cdd6668b4fc9adcd1f629bdeabc804f68ec37c6ed32cc9afb601\", repo tag \"docker.io/digitalocean/k8s-staticroute-operator:v1.0.0\", repo digest \"docker.io/digitalocean/k8s-staticroute-operator@sha256:406f5e647245232ab6608a952bb0789c2ea22dc998aa15cb109bf7cbdb8809a6\", size \"49762760\" in 297.992322ms"
time="2023-10-30T00:35:57.070553876Z" level=info msg="PullImage \"digitalocean/k8s-staticroute-operator:v1.0.0\" returns image reference \"sha256:6b1004bb4727cdd6668b4fc9adcd1f629bdeabc804f68ec37c6ed32cc9afb601\""
time="2023-10-30T00:35:57.073259750Z" level=info msg="CreateContainer within sandbox \"c967b23de55ed1e7750c26c9338a12381fd20616645ef3283be57ebd4dd41e03\" for container &ContainerMetadata{Name:k8s-staticroute-operator,Attempt:0,}"
time="2023-10-30T00:35:57.118643584Z" level=info msg="CreateContainer within sandbox \"c967b23de55ed1e7750c26c9338a12381fd20616645ef3283be57ebd4dd41e03\" for &ContainerMetadata{Name:k8s-staticroute-operator,Attempt:0,} returns container id \"b7f9596eab52133dbaf27e658226f891dc303bb9236c7d123755945fdb6c69c2\""
time="2023-10-30T00:35:57.119551786Z" level=info msg="StartContainer for \"b7f9596eab52133dbaf27e658226f891dc303bb9236c7d123755945fdb6c69c2\""
time="2023-10-30T00:35:57.320052305Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.323039994Z" level=info msg="StartContainer for \"4905c18e88992ac1d4cf57148d62686cce74e84fd62aa9206838120cb9afd122\" returns successfully"
time="2023-10-30T00:35:57.347853269Z" level=info msg="stop pulling image docker.io/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1: active requests=0, bytes read=4471"
time="2023-10-30T00:35:57.356065678Z" level=info msg="ImageUpdate event name:\"sha256:092a973bb20eec1356007c2841ca77949f731e21ca2f8abee16d4e52bbffe55d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.371204100Z" level=info msg="StartContainer for \"b7f9596eab52133dbaf27e658226f891dc303bb9236c7d123755945fdb6c69c2\" returns successfully"
time="2023-10-30T00:35:57.373119957Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.391103636Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-pod2daemon-flexvol@sha256:f490933d59c85bfb33530b762aa8040d9810e2da1c2fb3e039118bfaed2de14c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.393753123Z" level=info msg="Pulled image \"docker.io/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1\" with image id \"sha256:092a973bb20eec1356007c2841ca77949f731e21ca2f8abee16d4e52bbffe55d\", repo tag \"docker.io/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1\", repo digest \"docker.io/rancher/mirrored-calico-pod2daemon-flexvol@sha256:f490933d59c85bfb33530b762aa8040d9810e2da1c2fb3e039118bfaed2de14c\", size \"7290863\" in 386.253071ms"
time="2023-10-30T00:35:57.393841092Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1\" returns image reference \"sha256:092a973bb20eec1356007c2841ca77949f731e21ca2f8abee16d4e52bbffe55d\""
time="2023-10-30T00:35:57.398338796Z" level=info msg="CreateContainer within sandbox \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
time="2023-10-30T00:35:57.448165697Z" level=info msg="shim disconnected" id=4905c18e88992ac1d4cf57148d62686cce74e84fd62aa9206838120cb9afd122 namespace=k8s.io
time="2023-10-30T00:35:57.448285380Z" level=warning msg="cleaning up after shim disconnected" id=4905c18e88992ac1d4cf57148d62686cce74e84fd62aa9206838120cb9afd122 namespace=k8s.io
time="2023-10-30T00:35:57.448300059Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:35:57.498128000Z" level=info msg="CreateContainer within sandbox \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f027cf16d6e07e9952ffb3705de26ad314c3113871095bc5238398f7f5ab89b7\""
time="2023-10-30T00:35:57.509884357Z" level=info msg="StartContainer for \"f027cf16d6e07e9952ffb3705de26ad314c3113871095bc5238398f7f5ab89b7\""
time="2023-10-30T00:35:57.552000408Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0\""
time="2023-10-30T00:35:57.722247724Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.735707212Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0: active requests=0, bytes read=0"
time="2023-10-30T00:35:57.742985862Z" level=info msg="ImageUpdate event name:\"sha256:f84707403d4277debb0f5ba6fe157a8f474b5c5d2abda76a3f26b4f8ca794ee0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.750967903Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.756119827Z" level=info msg="StartContainer for \"f027cf16d6e07e9952ffb3705de26ad314c3113871095bc5238398f7f5ab89b7\" returns successfully"
time="2023-10-30T00:35:57.770741586Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f6717ce72a2615c7fbc746b4068f788e78579c54c43b8716e5ce650d97af2df1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:57.773621233Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0\" with image id \"sha256:f84707403d4277debb0f5ba6fe157a8f474b5c5d2abda76a3f26b4f8ca794ee0\", repo tag \"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0\", repo digest \"registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f6717ce72a2615c7fbc746b4068f788e78579c54c43b8716e5ce650d97af2df1\", size \"10536524\" in 221.559056ms"
time="2023-10-30T00:35:57.773713524Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0\" returns image reference \"sha256:f84707403d4277debb0f5ba6fe157a8f474b5c5d2abda76a3f26b4f8ca794ee0\""
time="2023-10-30T00:35:57.777186463Z" level=info msg="CreateContainer within sandbox \"d3c9102d9b7c2d4537482eb12b968038fd20159322cab9ab51c4435c916865c9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
time="2023-10-30T00:35:57.838749642Z" level=info msg="CreateContainer within sandbox \"d3c9102d9b7c2d4537482eb12b968038fd20159322cab9ab51c4435c916865c9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2d037464af7e6ef63816b46f8c05a0b5343c264c60e48cf8a1df6d7d7e0c5ffb\""
time="2023-10-30T00:35:57.839672830Z" level=info msg="StartContainer for \"2d037464af7e6ef63816b46f8c05a0b5343c264c60e48cf8a1df6d7d7e0c5ffb\""
time="2023-10-30T00:35:57.941199510Z" level=info msg="shim disconnected" id=f027cf16d6e07e9952ffb3705de26ad314c3113871095bc5238398f7f5ab89b7 namespace=k8s.io
time="2023-10-30T00:35:57.941287049Z" level=warning msg="cleaning up after shim disconnected" id=f027cf16d6e07e9952ffb3705de26ad314c3113871095bc5238398f7f5ab89b7 namespace=k8s.io
time="2023-10-30T00:35:57.941302630Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:35:58.054344477Z" level=info msg="StartContainer for \"2d037464af7e6ef63816b46f8c05a0b5343c264c60e48cf8a1df6d7d7e0c5ffb\" returns successfully"
time="2023-10-30T00:35:58.070010563Z" level=info msg="PullImage \"digitalocean/do-csi-plugin:v4.7.0\""
time="2023-10-30T00:35:58.268793058Z" level=info msg="ImageUpdate event name:\"docker.io/digitalocean/do-csi-plugin:v4.7.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:58.275006252Z" level=info msg="stop pulling image docker.io/digitalocean/do-csi-plugin:v4.7.0: active requests=0, bytes read=4427"
time="2023-10-30T00:35:58.299075582Z" level=info msg="ImageUpdate event name:\"sha256:6dbc47b45cd4092c0690c0796bafa6b5440cdc3b2c3493b484212668d0558059\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:58.317209926Z" level=info msg="ImageUpdate event name:\"docker.io/digitalocean/do-csi-plugin:v4.7.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:58.333784998Z" level=info msg="ImageUpdate event name:\"docker.io/digitalocean/do-csi-plugin@sha256:2477277574f3de324e1cc2c5cc19c566d807377b552b6b51c18b4078a60fa185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:58.333985514Z" level=info msg="Pulled image \"digitalocean/do-csi-plugin:v4.7.0\" with image id \"sha256:6dbc47b45cd4092c0690c0796bafa6b5440cdc3b2c3493b484212668d0558059\", repo tag \"docker.io/digitalocean/do-csi-plugin:v4.7.0\", repo digest \"docker.io/digitalocean/do-csi-plugin@sha256:2477277574f3de324e1cc2c5cc19c566d807377b552b6b51c18b4078a60fa185\", size \"32403640\" in 263.915367ms"
time="2023-10-30T00:35:58.334042209Z" level=info msg="PullImage \"digitalocean/do-csi-plugin:v4.7.0\" returns image reference \"sha256:6dbc47b45cd4092c0690c0796bafa6b5440cdc3b2c3493b484212668d0558059\""
time="2023-10-30T00:35:58.336967400Z" level=info msg="CreateContainer within sandbox \"d3c9102d9b7c2d4537482eb12b968038fd20159322cab9ab51c4435c916865c9\" for container &ContainerMetadata{Name:csi-do-plugin,Attempt:0,}"
time="2023-10-30T00:35:58.421219713Z" level=info msg="CreateContainer within sandbox \"d3c9102d9b7c2d4537482eb12b968038fd20159322cab9ab51c4435c916865c9\" for &ContainerMetadata{Name:csi-do-plugin,Attempt:0,} returns container id \"7711f9c4db45ebc5498e8de9b817bbc5c1f06bc68ed530905e628a88283fde18\""
time="2023-10-30T00:35:58.422469398Z" level=info msg="StartContainer for \"7711f9c4db45ebc5498e8de9b817bbc5c1f06bc68ed530905e628a88283fde18\""
time="2023-10-30T00:35:58.599796235Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\""
time="2023-10-30T00:35:58.725356225Z" level=info msg="StartContainer for \"7711f9c4db45ebc5498e8de9b817bbc5c1f06bc68ed530905e628a88283fde18\" returns successfully"
time="2023-10-30T00:35:58.875791943Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:58.883215160Z" level=info msg="stop pulling image docker.io/rancher/mirrored-calico-cni:v3.26.1: active requests=0, bytes read=4431"
time="2023-10-30T00:35:58.897960720Z" level=info msg="ImageUpdate event name:\"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:58.920296684Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:58.933341284Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:35:58.936738647Z" level=info msg="Pulled image \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" with image id \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\", repo tag \"docker.io/rancher/mirrored-calico-cni:v3.26.1\", repo digest \"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\", size \"93375345\" in 336.883267ms"
time="2023-10-30T00:35:58.936824903Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" returns image reference \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\""
time="2023-10-30T00:35:58.943469813Z" level=info msg="CreateContainer within sandbox \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
time="2023-10-30T00:35:58.994492036Z" level=info msg="CreateContainer within sandbox \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3380a7ea795f29e1bde26ddc2cfedacf8ea04dc3eef95981a12d5464dd58a31f\""
time="2023-10-30T00:35:58.995517141Z" level=info msg="StartContainer for \"3380a7ea795f29e1bde26ddc2cfedacf8ea04dc3eef95981a12d5464dd58a31f\""
time="2023-10-30T00:35:59.446995387Z" level=info msg="StartContainer for \"3380a7ea795f29e1bde26ddc2cfedacf8ea04dc3eef95981a12d5464dd58a31f\" returns successfully"
time="2023-10-30T00:36:30.820589291Z" level=info msg="shim disconnected" id=3380a7ea795f29e1bde26ddc2cfedacf8ea04dc3eef95981a12d5464dd58a31f namespace=k8s.io
time="2023-10-30T00:36:30.820675258Z" level=warning msg="cleaning up after shim disconnected" id=3380a7ea795f29e1bde26ddc2cfedacf8ea04dc3eef95981a12d5464dd58a31f namespace=k8s.io
time="2023-10-30T00:36:30.820689745Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:36:31.756456659Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\""
time="2023-10-30T00:36:31.959622964Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:36:31.962437280Z" level=info msg="stop pulling image docker.io/rancher/mirrored-calico-cni:v3.26.1: active requests=0, bytes read=4430"
time="2023-10-30T00:36:31.965733774Z" level=info msg="ImageUpdate event name:\"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:36:31.970843236Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:36:31.975748905Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:36:31.978098731Z" level=info msg="Pulled image \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" with image id \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\", repo tag \"docker.io/rancher/mirrored-calico-cni:v3.26.1\", repo digest \"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\", size \"93375345\" in 221.578527ms"
time="2023-10-30T00:36:31.978163217Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" returns image reference \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\""
time="2023-10-30T00:36:31.981011014Z" level=info msg="CreateContainer within sandbox \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\" for container &ContainerMetadata{Name:install-cni,Attempt:1,}"
time="2023-10-30T00:36:32.009688552Z" level=info msg="CreateContainer within sandbox \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\" for &ContainerMetadata{Name:install-cni,Attempt:1,} returns container id \"34469eb0bdccf0cbb04b91b13aa1ce3e538b5a6b11859ac4530c0b04eaee07f2\""
time="2023-10-30T00:36:32.010371157Z" level=info msg="StartContainer for \"34469eb0bdccf0cbb04b91b13aa1ce3e538b5a6b11859ac4530c0b04eaee07f2\""
time="2023-10-30T00:36:32.234404051Z" level=info msg="StartContainer for \"34469eb0bdccf0cbb04b91b13aa1ce3e538b5a6b11859ac4530c0b04eaee07f2\" returns successfully"
time="2023-10-30T00:36:52.296907781Z" level=info msg="RemoveContainer for \"bbb8616af6a5e104b8f58d65496befc0d2e3fdcf375d22a32b27bcbde84cefaf\""
time="2023-10-30T00:36:52.318909051Z" level=info msg="RemoveContainer for \"bbb8616af6a5e104b8f58d65496befc0d2e3fdcf375d22a32b27bcbde84cefaf\" returns successfully"
time="2023-10-30T00:36:52.321561503Z" level=info msg="RemoveContainer for \"1da33f152efab4ed94a27de1b211e784002ba565946cd1fd5b2a2393e4b69bff\""
time="2023-10-30T00:36:52.334715220Z" level=info msg="RemoveContainer for \"1da33f152efab4ed94a27de1b211e784002ba565946cd1fd5b2a2393e4b69bff\" returns successfully"
time="2023-10-30T00:36:52.354722212Z" level=info msg="RemoveContainer for \"46afa0022d0b9bf0af4a7c38266bbb7fee9b7a25ce593c6ac39696ab00c1f38b\""
time="2023-10-30T00:36:52.374327832Z" level=info msg="RemoveContainer for \"46afa0022d0b9bf0af4a7c38266bbb7fee9b7a25ce593c6ac39696ab00c1f38b\" returns successfully"
time="2023-10-30T00:36:52.376660817Z" level=info msg="RemoveContainer for \"7e8cea24724232dce1b5018df912c277c6442e7149a1391e8de26792347a0dde\""
time="2023-10-30T00:36:52.388589119Z" level=info msg="RemoveContainer for \"7e8cea24724232dce1b5018df912c277c6442e7149a1391e8de26792347a0dde\" returns successfully"
time="2023-10-30T00:36:52.391834719Z" level=info msg="RemoveContainer for \"ffc8f525831a84153a7be1072b1fb17c25323d7a678de9f610291df725f5e65c\""
time="2023-10-30T00:36:52.403921840Z" level=info msg="RemoveContainer for \"ffc8f525831a84153a7be1072b1fb17c25323d7a678de9f610291df725f5e65c\" returns successfully"
time="2023-10-30T00:36:52.414915816Z" level=info msg="RemoveContainer for \"1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd\""
time="2023-10-30T00:36:52.453782423Z" level=info msg="RemoveContainer for \"1440212adb78a75f50856a4e7dd5c8e5925b4ca0cbde3c93560ea6a8e9c770bd\" returns successfully"
time="2023-10-30T00:36:52.467600139Z" level=info msg="RemoveContainer for \"9e3002a3f57a410241bf6990cb776914c5f3e61a843275911cfae6963c3d5788\""
time="2023-10-30T00:36:52.478689574Z" level=info msg="RemoveContainer for \"9e3002a3f57a410241bf6990cb776914c5f3e61a843275911cfae6963c3d5788\" returns successfully"
time="2023-10-30T00:36:52.484451272Z" level=info msg="RemoveContainer for \"9ae6a9821b8d150c9d3ddcf2fb356d4b993bb163fd1975acbadd869e5383b267\""
time="2023-10-30T00:36:52.494543422Z" level=info msg="RemoveContainer for \"9ae6a9821b8d150c9d3ddcf2fb356d4b993bb163fd1975acbadd869e5383b267\" returns successfully"
time="2023-10-30T00:36:52.497162223Z" level=info msg="RemoveContainer for \"d8943f5461a4bd0f754b857b930348f2f03445c8adfe394fd786f2efa2cac342\""
time="2023-10-30T00:36:52.505356338Z" level=info msg="RemoveContainer for \"d8943f5461a4bd0f754b857b930348f2f03445c8adfe394fd786f2efa2cac342\" returns successfully"
time="2023-10-30T00:36:52.507729946Z" level=info msg="RemoveContainer for \"cdc14627439549cbb3b374b12c9a35a701b98d9315c12473781c928012e930a0\""
time="2023-10-30T00:36:52.532368831Z" level=info msg="RemoveContainer for \"cdc14627439549cbb3b374b12c9a35a701b98d9315c12473781c928012e930a0\" returns successfully"
time="2023-10-30T00:36:52.534276319Z" level=info msg="StopPodSandbox for \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\""
time="2023-10-30T00:36:52.534424555Z" level=info msg="TearDown network for sandbox \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\" successfully"
time="2023-10-30T00:36:52.534485585Z" level=info msg="StopPodSandbox for \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\" returns successfully"
time="2023-10-30T00:36:52.535056683Z" level=info msg="RemovePodSandbox for \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\""
time="2023-10-30T00:36:52.535145977Z" level=info msg="Forcibly stopping sandbox \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\""
time="2023-10-30T00:36:52.535339971Z" level=info msg="TearDown network for sandbox \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\" successfully"
time="2023-10-30T00:36:52.545921499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
time="2023-10-30T00:36:52.546050467Z" level=info msg="RemovePodSandbox \"92c9f15e30cc6c366477f7d21d03e70a91d3a477c677d6a75262e4a79a348bde\" returns successfully"
time="2023-10-30T00:36:52.546692003Z" level=info msg="StopPodSandbox for \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\""
time="2023-10-30T00:36:52.546855950Z" level=info msg="TearDown network for sandbox \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\" successfully"
time="2023-10-30T00:36:52.546969622Z" level=info msg="StopPodSandbox for \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\" returns successfully"
time="2023-10-30T00:36:52.547467324Z" level=info msg="RemovePodSandbox for \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\""
time="2023-10-30T00:36:52.547554681Z" level=info msg="Forcibly stopping sandbox \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\""
time="2023-10-30T00:36:52.547806977Z" level=info msg="TearDown network for sandbox \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\" successfully"
time="2023-10-30T00:36:52.559425542Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
time="2023-10-30T00:36:52.559560046Z" level=info msg="RemovePodSandbox \"4a26177801cda4b171909e330d80178a74b8894af13e25b2687de3cc5d839320\" returns successfully"
time="2023-10-30T00:36:52.560376287Z" level=info msg="StopPodSandbox for \"231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e\""
time="2023-10-30T00:36:54.330077981Z" level=info msg="StopPodSandbox for \"10479ccdafada596340d7a3ff31a79c5187768a37ffd93bd112e16cbe2c4fe28\""
time="2023-10-30T00:36:54.330164441Z" level=info msg="Container to stop \"b3853ee6bb4406e90bce38914f4c5366274643a6df8ac41bde7e4702b8d50d46\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:36:54.330182891Z" level=info msg="Container to stop \"36fc8c10eacb66c052ac723fc044ccee909bda6560a5e616ad0c59769adfa952\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:36:54.330207234Z" level=info msg="Container to stop \"db04acc4e1e42a3ffbd363b202620f956a91e39b7f3d8bd3e4da34a6aa5a0d73\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:37:02.888805868Z" level=info msg="shim disconnected" id=34469eb0bdccf0cbb04b91b13aa1ce3e538b5a6b11859ac4530c0b04eaee07f2 namespace=k8s.io
time="2023-10-30T00:37:02.888880123Z" level=warning msg="cleaning up after shim disconnected" id=34469eb0bdccf0cbb04b91b13aa1ce3e538b5a6b11859ac4530c0b04eaee07f2 namespace=k8s.io
time="2023-10-30T00:37:02.888902142Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:37:03.859599364Z" level=info msg="RemoveContainer for \"3380a7ea795f29e1bde26ddc2cfedacf8ea04dc3eef95981a12d5464dd58a31f\""
time="2023-10-30T00:37:03.860332234Z" level=info msg="RemoveContainer for \"3380a7ea795f29e1bde26ddc2cfedacf8ea04dc3eef95981a12d5464dd58a31f\""
time="2023-10-30T00:37:03.860524927Z" level=error msg="RemoveContainer for \"3380a7ea795f29e1bde26ddc2cfedacf8ea04dc3eef95981a12d5464dd58a31f\" failed" error="failed to set removing state for container \"3380a7ea795f29e1bde26ddc2cfedacf8ea04dc3eef95981a12d5464dd58a31f\": container is already in removing state"
time="2023-10-30T00:37:03.879127085Z" level=info msg="RemoveContainer for \"3380a7ea795f29e1bde26ddc2cfedacf8ea04dc3eef95981a12d5464dd58a31f\" returns successfully"
time="2023-10-30T00:37:17.513890932Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\""
time="2023-10-30T00:37:17.718034872Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:37:17.721794646Z" level=info msg="stop pulling image docker.io/rancher/mirrored-calico-cni:v3.26.1: active requests=0, bytes read=4431"
time="2023-10-30T00:37:17.726484121Z" level=info msg="ImageUpdate event name:\"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:37:17.732532708Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:37:17.738692144Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:37:17.740582226Z" level=info msg="Pulled image \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" with image id \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\", repo tag \"docker.io/rancher/mirrored-calico-cni:v3.26.1\", repo digest \"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\", size \"93375345\" in 226.604887ms"
time="2023-10-30T00:37:17.740737013Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" returns image reference \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\""
time="2023-10-30T00:37:17.744084956Z" level=info msg="CreateContainer within sandbox \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\" for container &ContainerMetadata{Name:install-cni,Attempt:2,}"
time="2023-10-30T00:37:17.786679271Z" level=info msg="CreateContainer within sandbox \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\" for &ContainerMetadata{Name:install-cni,Attempt:2,} returns container id \"d45fccee1c9ecd130dea9988da3136539660b5f21a89321d8e2856534596ed67\""
time="2023-10-30T00:37:17.789882234Z" level=info msg="StartContainer for \"d45fccee1c9ecd130dea9988da3136539660b5f21a89321d8e2856534596ed67\""
time="2023-10-30T00:37:18.002233839Z" level=info msg="StartContainer for \"d45fccee1c9ecd130dea9988da3136539660b5f21a89321d8e2856534596ed67\" returns successfully"
time="2023-10-30T00:37:22.641040879Z" level=error msg="StopPodSandbox for \"231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e\" failed" error="failed to destroy network for sandbox \"231a1638104fe7d7837b0d8cc49a0479becc687f1c81f97307a2008536dc0c2e\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T00:37:22.641921420Z" level=info msg="StopPodSandbox for \"9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211\""
time="2023-10-30T00:37:24.399915691Z" level=error msg="StopPodSandbox for \"10479ccdafada596340d7a3ff31a79c5187768a37ffd93bd112e16cbe2c4fe28\" failed" error="failed to destroy network for sandbox \"10479ccdafada596340d7a3ff31a79c5187768a37ffd93bd112e16cbe2c4fe28\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T00:37:24.923372937Z" level=info msg="StopPodSandbox for \"10479ccdafada596340d7a3ff31a79c5187768a37ffd93bd112e16cbe2c4fe28\""
time="2023-10-30T00:37:24.923528804Z" level=info msg="Container to stop \"db04acc4e1e42a3ffbd363b202620f956a91e39b7f3d8bd3e4da34a6aa5a0d73\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:37:24.923561367Z" level=info msg="Container to stop \"b3853ee6bb4406e90bce38914f4c5366274643a6df8ac41bde7e4702b8d50d46\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:37:24.923579071Z" level=info msg="Container to stop \"36fc8c10eacb66c052ac723fc044ccee909bda6560a5e616ad0c59769adfa952\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:37:48.579518334Z" level=info msg="shim disconnected" id=d45fccee1c9ecd130dea9988da3136539660b5f21a89321d8e2856534596ed67 namespace=k8s.io
time="2023-10-30T00:37:48.579601227Z" level=warning msg="cleaning up after shim disconnected" id=d45fccee1c9ecd130dea9988da3136539660b5f21a89321d8e2856534596ed67 namespace=k8s.io
time="2023-10-30T00:37:48.579617061Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:37:49.038215308Z" level=info msg="RemoveContainer for \"34469eb0bdccf0cbb04b91b13aa1ce3e538b5a6b11859ac4530c0b04eaee07f2\""
time="2023-10-30T00:37:49.039812521Z" level=info msg="RemoveContainer for \"34469eb0bdccf0cbb04b91b13aa1ce3e538b5a6b11859ac4530c0b04eaee07f2\""
time="2023-10-30T00:37:49.039989731Z" level=error msg="RemoveContainer for \"34469eb0bdccf0cbb04b91b13aa1ce3e538b5a6b11859ac4530c0b04eaee07f2\" failed" error="failed to set removing state for container \"34469eb0bdccf0cbb04b91b13aa1ce3e538b5a6b11859ac4530c0b04eaee07f2\": container is already in removing state"
time="2023-10-30T00:37:49.061879807Z" level=info msg="RemoveContainer for \"34469eb0bdccf0cbb04b91b13aa1ce3e538b5a6b11859ac4530c0b04eaee07f2\" returns successfully"
time="2023-10-30T00:37:52.709231105Z" level=error msg="StopPodSandbox for \"9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211\" failed" error="failed to destroy network for sandbox \"9f07d78e251909bded0e30d3988836526d13850872634aacfb9bae1472d27211\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T00:37:52.710719410Z" level=info msg="StopPodSandbox for \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\""
time="2023-10-30T00:37:52.713316275Z" level=info msg="TearDown network for sandbox \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\" successfully"
time="2023-10-30T00:37:52.713658128Z" level=info msg="StopPodSandbox for \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\" returns successfully"
time="2023-10-30T00:37:52.715434993Z" level=info msg="RemovePodSandbox for \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\""
time="2023-10-30T00:37:52.715499536Z" level=info msg="Forcibly stopping sandbox \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\""
time="2023-10-30T00:37:52.715640746Z" level=info msg="TearDown network for sandbox \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\" successfully"
time="2023-10-30T00:37:52.741232921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
time="2023-10-30T00:37:52.741368363Z" level=info msg="RemovePodSandbox \"25209f19403a9d353fcfcb2e489193a204daaef6efea2c90f3295e62780f33c3\" returns successfully"
time="2023-10-30T00:37:52.742124665Z" level=info msg="StopPodSandbox for \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\""
time="2023-10-30T00:37:52.742280343Z" level=info msg="TearDown network for sandbox \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\" successfully"
time="2023-10-30T00:37:52.742342042Z" level=info msg="StopPodSandbox for \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\" returns successfully"
time="2023-10-30T00:37:52.742746406Z" level=info msg="RemovePodSandbox for \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\""
time="2023-10-30T00:37:52.742785807Z" level=info msg="Forcibly stopping sandbox \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\""
time="2023-10-30T00:37:52.742905031Z" level=info msg="TearDown network for sandbox \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\" successfully"
time="2023-10-30T00:37:52.754602606Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
time="2023-10-30T00:37:52.762884799Z" level=info msg="RemovePodSandbox \"4633cbf36b3536c47d74043b1b5dc1302b02bac29ec04ef0614de8cc1c475b89\" returns successfully"
time="2023-10-30T00:37:55.009989803Z" level=error msg="StopPodSandbox for \"10479ccdafada596340d7a3ff31a79c5187768a37ffd93bd112e16cbe2c4fe28\" failed" error="failed to destroy network for sandbox \"10479ccdafada596340d7a3ff31a79c5187768a37ffd93bd112e16cbe2c4fe28\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T00:37:55.047748711Z" level=info msg="StopPodSandbox for \"10479ccdafada596340d7a3ff31a79c5187768a37ffd93bd112e16cbe2c4fe28\""
time="2023-10-30T00:37:55.047856521Z" level=info msg="Container to stop \"db04acc4e1e42a3ffbd363b202620f956a91e39b7f3d8bd3e4da34a6aa5a0d73\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:37:55.047877012Z" level=info msg="Container to stop \"36fc8c10eacb66c052ac723fc044ccee909bda6560a5e616ad0c59769adfa952\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:37:55.047921391Z" level=info msg="Container to stop \"b3853ee6bb4406e90bce38914f4c5366274643a6df8ac41bde7e4702b8d50d46\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:38:14.513106233Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\""
time="2023-10-30T00:38:14.747244381Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:38:14.770548636Z" level=info msg="stop pulling image docker.io/rancher/mirrored-calico-cni:v3.26.1: active requests=0, bytes read=4431"
time="2023-10-30T00:38:14.775604767Z" level=info msg="ImageUpdate event name:\"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:38:14.794246560Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:38:14.800638658Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T00:38:14.804572395Z" level=info msg="Pulled image \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" with image id \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\", repo tag \"docker.io/rancher/mirrored-calico-cni:v3.26.1\", repo digest \"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\", size \"93375345\" in 291.407212ms"
time="2023-10-30T00:38:14.804643960Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" returns image reference \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\""
time="2023-10-30T00:38:14.814173021Z" level=info msg="CreateContainer within sandbox \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\" for container &ContainerMetadata{Name:install-cni,Attempt:3,}"
time="2023-10-30T00:38:14.874366245Z" level=info msg="CreateContainer within sandbox \"0d9a78d0778c0a32239b3e339c9c220283a51306698003e01b9577b6d6f15378\" for &ContainerMetadata{Name:install-cni,Attempt:3,} returns container id \"58798b64090523d040c0628c2bc4eef8862fb5ad9bebc06ebf3357ebdfd664e5\""
time="2023-10-30T00:38:14.887296585Z" level=info msg="StartContainer for \"58798b64090523d040c0628c2bc4eef8862fb5ad9bebc06ebf3357ebdfd664e5\""
time="2023-10-30T00:38:15.290602520Z" level=info msg="StartContainer for \"58798b64090523d040c0628c2bc4eef8862fb5ad9bebc06ebf3357ebdfd664e5\" returns successfully"
time="2023-10-30T00:38:25.110628785Z" level=error msg="StopPodSandbox for \"10479ccdafada596340d7a3ff31a79c5187768a37ffd93bd112e16cbe2c4fe28\" failed" error="failed to destroy network for sandbox \"10479ccdafada596340d7a3ff31a79c5187768a37ffd93bd112e16cbe2c4fe28\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T00:38:25.157225009Z" level=info msg="StopPodSandbox for \"10479ccdafada596340d7a3ff31a79c5187768a37ffd93bd112e16cbe2c4fe28\""
time="2023-10-30T00:38:25.157325791Z" level=info msg="Container to stop \"db04acc4e1e42a3ffbd363b202620f956a91e39b7f3d8bd3e4da34a6aa5a0d73\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:38:25.157345323Z" level=info msg="Container to stop \"b3853ee6bb4406e90bce38914f4c5366274643a6df8ac41bde7e4702b8d50d46\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:38:25.157362483Z" level=info msg="Container to stop \"36fc8c10eacb66c052ac723fc044ccee909bda6560a5e616ad0c59769adfa952\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T00:38:46.197578219Z" level=info msg="shim disconnected" id=58798b64090523d040c0628c2bc4eef8862fb5ad9bebc06ebf3357ebdfd664e5 namespace=k8s.io
time="2023-10-30T00:38:46.197668027Z" level=warning msg="cleaning up after shim disconnected" id=58798b64090523d040c0628c2bc4eef8862fb5ad9bebc06ebf3357ebdfd664e5 namespace=k8s.io
time="2023-10-30T00:38:46.197682752Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T00:38:47.223382214Z" level=info msg="RemoveContainer for \"d45fccee1c9ecd130dea9988da3136539660b5f21a89321d8e2856534596ed67\""
time="2023-10-30T00:38:47.224344066Z" level=info msg="RemoveContainer for \"d45fccee1c9ecd130dea9988da3136539660b5f21a89321d8e2856534596ed67\""
time="2023-10-30T00:38:47.225235610Z" level=error msg="RemoveContainer for \"d45fccee1c9ecd130dea9988da3136539660b5f21a89321d8e2856534596ed67\" failed" error="failed to set removing state for container \"d45fccee1c9ecd130dea9988da3136539660b5f21a89321d8e2856534596ed67\": container is already in removing state"

After that, Calico just never comes back.

I don't actually know what is to blame. These are specific node groups with specific load types, so running out of memory or CPU should be out of the question. There is a scheduled kured reboot, but it's configured for Sunday, not Monday night.

I'm currently investigating whether using Calico with BGP will let me deactivate kube-proxy altogether, but I tried that during the initial setup and could not get it working.

Other noteworthy things:

  • deleting the kube-proxy pod using Lens and restarting the rke2-agent only worked on one of the two nodes

containerd logs for the node where it did not work:
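For what it's worth, here is a rough sketch of the kube-proxy-free direction I mean, assuming the Tigera operator manages Calico (the Calico eBPF dataplane replaces kube-proxy; the API server host/port values below are placeholders for this cluster):

```yaml
# RKE2 config.yaml: stop deploying kube-proxy
disable-kube-proxy: true
---
# Tell Calico how to reach the API server without kube-proxy's ClusterIP
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "<api-server-ip>"   # placeholder
  KUBERNETES_SERVICE_PORT: "6443"
---
# Switch the operator's Installation to the eBPF dataplane
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    linuxDataplane: BPF
```

This is untested in my setup; I'm mainly noting it so others can tell me whether this is the supported path.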
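The manual steps I took via Lens can be approximated from the CLI roughly like this (a sketch; `<node-name>` is a placeholder and the socket path is the RKE2 default):

```shell
# containerd socket used by RKE2
export CRI_ENDPOINT=unix:///var/run/k3s/containerd/containerd.sock

# Delete the mirror pod so the kubelet recreates the static kube-proxy pod
kubectl -n kube-system delete pod kube-proxy-<node-name> --force --grace-period=0

# If the sandbox is stuck, stop and remove it directly through the CRI
POD_ID=$(crictl --runtime-endpoint "$CRI_ENDPOINT" pods \
  --label io.kubernetes.pod.name=kube-proxy-<node-name> -q)
crictl --runtime-endpoint "$CRI_ENDPOINT" stopp "$POD_ID"
crictl --runtime-endpoint "$CRI_ENDPOINT" rmp -f "$POD_ID"

# Restart the agent so the kubelet re-syncs the static pod manifests
systemctl restart rke2-agent
```

On one of the two nodes this sequence brought kube-proxy back; on the other it did not, which is the node whose logs follow.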

time="2023-10-30T08:17:57.993301634Z" level=info msg="RemoveContainer for \"53aeb58c82fa9e3f0aecfd21e196bdbbc1592ca14373788adb7d19f629e3c7e8\" returns successfully"
time="2023-10-30T08:18:00.961181791Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:18:00.985615277Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:18:00.985709959Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:18:00.985730597Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:18:00.985746227Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:18:31.028470687Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:18:31.051122089Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:18:31.051211142Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:18:31.051248420Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:18:31.051260520Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:18:51.021512504Z" level=info msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\""
time="2023-10-30T08:19:01.097631162Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:19:01.114427012Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:19:01.114512514Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:19:01.114536061Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:19:01.114551839Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:19:21.090827740Z" level=error msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\" failed" error="failed to destroy network for sandbox \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:19:21.091685210Z" level=info msg="StopPodSandbox for \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\""
time="2023-10-30T08:19:31.154506181Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:19:31.177979780Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:19:31.178079242Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:19:31.178105958Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:19:31.178130157Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:19:51.159210365Z" level=error msg="StopPodSandbox for \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\" failed" error="failed to destroy network for sandbox \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:20:01.218955529Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:20:01.248137247Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:20:01.248243003Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:20:01.248271934Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:20:01.248288657Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:20:31.313773905Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:20:32.315094578Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:20:32.315195079Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:20:32.315214450Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:20:32.315230773Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:20:51.167218422Z" level=info msg="StopPodSandbox for \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\""
time="2023-10-30T08:21:02.357083206Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:21:02.382256913Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:21:02.382346955Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:21:02.382366052Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:21:02.382382610Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:21:21.211736713Z" level=error msg="StopPodSandbox for \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\" failed" error="failed to destroy network for sandbox \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:21:21.212592090Z" level=info msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\""
time="2023-10-30T08:21:32.437720112Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:21:32.447904243Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:21:32.448013263Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:21:32.448037467Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:21:32.448085413Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:21:51.283368298Z" level=error msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\" failed" error="failed to destroy network for sandbox \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:22:02.486485938Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:22:02.515776092Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:22:02.515879211Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:22:02.515908510Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:22:02.515939205Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:22:32.561052722Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:22:32.585646231Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:22:32.585722208Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:22:32.585739111Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:22:32.585751289Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:22:51.286551909Z" level=info msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\""
time="2023-10-30T08:23:02.634116135Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:23:02.647920622Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:23:02.648009340Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:23:02.648030266Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:23:02.648047889Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:23:03.784252843Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\""
time="2023-10-30T08:23:03.994066794Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:23:03.998690741Z" level=info msg="stop pulling image docker.io/rancher/mirrored-calico-cni:v3.26.1: active requests=0, bytes read=4431"
time="2023-10-30T08:23:04.005091690Z" level=info msg="ImageUpdate event name:\"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:23:04.011412086Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:23:04.019364572Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:23:04.022123546Z" level=info msg="Pulled image \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" with image id \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\", repo tag \"docker.io/rancher/mirrored-calico-cni:v3.26.1\", repo digest \"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\", size \"93375345\" in 237.815717ms"
time="2023-10-30T08:23:04.022211110Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" returns image reference \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\""
time="2023-10-30T08:23:04.035094354Z" level=info msg="CreateContainer within sandbox \"6bbbb5f1cc52fcf4cfdc9a30b10841407063e2940d1f28c0267a5416c6829d71\" for container &ContainerMetadata{Name:install-cni,Attempt:89,}"
time="2023-10-30T08:23:04.107491515Z" level=info msg="CreateContainer within sandbox \"6bbbb5f1cc52fcf4cfdc9a30b10841407063e2940d1f28c0267a5416c6829d71\" for &ContainerMetadata{Name:install-cni,Attempt:89,} returns container id \"2cc67728cab446fa29ba03c177bf4dfcf42610127c38055af5a0787bebe056ae\""
time="2023-10-30T08:23:04.108243086Z" level=info msg="StartContainer for \"2cc67728cab446fa29ba03c177bf4dfcf42610127c38055af5a0787bebe056ae\""
time="2023-10-30T08:23:04.319555504Z" level=info msg="StartContainer for \"2cc67728cab446fa29ba03c177bf4dfcf42610127c38055af5a0787bebe056ae\" returns successfully"
time="2023-10-30T08:23:21.330053569Z" level=error msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\" failed" error="failed to destroy network for sandbox \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:23:21.330748283Z" level=info msg="StopPodSandbox for \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\""
time="2023-10-30T08:23:32.689931685Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:23:32.712699544Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:23:32.712856400Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:23:32.712884956Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:23:32.712901505Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:23:34.865080486Z" level=info msg="shim disconnected" id=2cc67728cab446fa29ba03c177bf4dfcf42610127c38055af5a0787bebe056ae namespace=k8s.io
time="2023-10-30T08:23:34.865140497Z" level=warning msg="cleaning up after shim disconnected" id=2cc67728cab446fa29ba03c177bf4dfcf42610127c38055af5a0787bebe056ae namespace=k8s.io
time="2023-10-30T08:23:34.865152120Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T08:23:35.722456320Z" level=info msg="RemoveContainer for \"da3157a710ca1e903cc43cc9c86a04846921616632e1dd719f863581acbb9e56\""
time="2023-10-30T08:23:35.722628765Z" level=info msg="RemoveContainer for \"da3157a710ca1e903cc43cc9c86a04846921616632e1dd719f863581acbb9e56\""
time="2023-10-30T08:23:35.722746215Z" level=error msg="RemoveContainer for \"da3157a710ca1e903cc43cc9c86a04846921616632e1dd719f863581acbb9e56\" failed" error="failed to set removing state for container \"da3157a710ca1e903cc43cc9c86a04846921616632e1dd719f863581acbb9e56\": container is already in removing state"
time="2023-10-30T08:23:35.739470978Z" level=info msg="RemoveContainer for \"da3157a710ca1e903cc43cc9c86a04846921616632e1dd719f863581acbb9e56\" returns successfully"
time="2023-10-30T08:23:51.385625531Z" level=error msg="StopPodSandbox for \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\" failed" error="failed to destroy network for sandbox \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:24:02.759804448Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:24:02.782012985Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:24:02.782113446Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:24:02.782136274Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:24:02.782153189Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:24:32.818251527Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:24:32.847908709Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:24:32.847975954Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:24:32.847988168Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:24:32.848000874Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:24:51.389308836Z" level=info msg="StopPodSandbox for \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\""
time="2023-10-30T08:25:02.884088201Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:25:02.914368834Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:25:02.914447961Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:25:02.914467336Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:25:02.914486393Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:25:21.429082156Z" level=error msg="StopPodSandbox for \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\" failed" error="failed to destroy network for sandbox \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:25:21.429907266Z" level=info msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\""
time="2023-10-30T08:25:32.961412541Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:25:32.988862886Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:25:32.988946036Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:25:32.988972065Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:25:32.988992676Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:25:51.474781986Z" level=error msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\" failed" error="failed to destroy network for sandbox \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:26:03.030110662Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:26:03.056926442Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:26:03.057027933Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:26:03.057053266Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:26:03.057069254Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:26:18.558946676Z" level=info msg="starting containerd" revision=383ce4e834e4d2ae5e1869475379e70618bdcc33 version=v1.7.3-k3s1
time="2023-10-30T08:26:18.584129913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T08:26:18.589913516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /usr/lib/modules/6.5.8-1-default\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2023-10-30T08:26:18.590079227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T08:26:18.590759071Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2023-10-30T08:26:18.590899284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T08:26:18.590961049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T08:26:18.591170162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.fuse-overlayfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T08:26:18.591254168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.stargz\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T08:26:18.592096655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T08:26:18.592201410Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2023-10-30T08:26:18.592240240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2023-10-30T08:26:18.592687413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2023-10-30T08:26:18.592811656Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2023-10-30T08:26:18.592903934Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2023-10-30T08:26:18.592951335Z" level=info msg="metadata content store policy set" policy=shared
time="2023-10-30T08:26:18.594719792Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2023-10-30T08:26:18.594824826Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
time="2023-10-30T08:26:18.594873396Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2023-10-30T08:26:18.594939800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
time="2023-10-30T08:26:18.594997166Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
time="2023-10-30T08:26:18.595032916Z" level=info msg="NRI interface is disabled by configuration."
time="2023-10-30T08:26:18.595070385Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2023-10-30T08:26:18.795790178Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
time="2023-10-30T08:26:18.795838973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
time="2023-10-30T08:26:18.795869651Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
time="2023-10-30T08:26:18.795891078Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
time="2023-10-30T08:26:18.795910555Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2023-10-30T08:26:18.795943802Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2023-10-30T08:26:18.796022536Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2023-10-30T08:26:18.796056130Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2023-10-30T08:26:18.796080024Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2023-10-30T08:26:18.796101546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2023-10-30T08:26:18.796123998Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2023-10-30T08:26:18.796140059Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2023-10-30T08:26:18.796224876Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2023-10-30T08:26:18.796704774Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2023-10-30T08:26:18.796805912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.796831011Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
time="2023-10-30T08:26:18.796863817Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2023-10-30T08:26:18.796952560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.796985236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797004698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797024229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797041214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797062033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797080382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797095943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797131844Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2023-10-30T08:26:18.797203040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797225529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797244900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797261098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797285005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797303506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797345961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797366709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
time="2023-10-30T08:26:18.797695637Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:index.docker.io/rancher/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:true EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:1m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/rancher/rke2/agent/containerd ContainerdEndpoint:/run/k3s/containerd/containerd.sock RootDir:/var/lib/rancher/rke2/agent/containerd/io.containerd.grpc.v1.cri StateDir:/run/k3s/containerd/io.containerd.grpc.v1.cri}"
time="2023-10-30T08:26:18.797785657Z" level=info msg="Connect containerd service"
time="2023-10-30T08:26:18.797831315Z" level=info msg="using legacy CRI server"
time="2023-10-30T08:26:18.797843821Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
time="2023-10-30T08:26:18.797895689Z" level=info msg="Get image filesystem path \"/var/lib/rancher/rke2/agent/containerd/io.containerd.snapshotter.v1.overlayfs\""
time="2023-10-30T08:26:18.798740622Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
time="2023-10-30T08:26:18.798776082Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
time="2023-10-30T08:26:18.798797145Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
time="2023-10-30T08:26:18.798817307Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
time="2023-10-30T08:26:18.798839763Z" level=info msg="Start subscribing containerd event"
time="2023-10-30T08:26:18.798917816Z" level=info msg="Start recovering state"
time="2023-10-30T08:26:18.799347839Z" level=info msg=serving... address=/run/k3s/containerd/containerd.sock.ttrpc
time="2023-10-30T08:26:18.799424447Z" level=info msg=serving... address=/run/k3s/containerd/containerd.sock
time="2023-10-30T08:26:18.915404743Z" level=info msg="Start event monitor"
time="2023-10-30T08:26:18.915450473Z" level=info msg="Start snapshots syncer"
time="2023-10-30T08:26:18.915462451Z" level=info msg="Start cni network conf syncer for default"
time="2023-10-30T08:26:18.915469703Z" level=info msg="Start streaming server"
time="2023-10-30T08:26:18.915518298Z" level=info msg="containerd successfully booted in 0.357859s"
time="2023-10-30T08:26:19.980436224Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
time="2023-10-30T08:26:25.782672636Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\""
time="2023-10-30T08:26:26.005199851Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:26:26.007611640Z" level=info msg="stop pulling image docker.io/rancher/mirrored-calico-cni:v3.26.1: active requests=0, bytes read=4431"
time="2023-10-30T08:26:26.010663183Z" level=info msg="ImageUpdate event name:\"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:26:26.015829015Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:26:26.021796276Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:26:26.023988190Z" level=info msg="Pulled image \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" with image id \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\", repo tag \"docker.io/rancher/mirrored-calico-cni:v3.26.1\", repo digest \"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\", size \"93375345\" in 241.250026ms"
time="2023-10-30T08:26:26.024071503Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" returns image reference \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\""
time="2023-10-30T08:26:26.027303813Z" level=info msg="CreateContainer within sandbox \"6bbbb5f1cc52fcf4cfdc9a30b10841407063e2940d1f28c0267a5416c6829d71\" for container &ContainerMetadata{Name:install-cni,Attempt:90,}"
time="2023-10-30T08:26:26.060473595Z" level=info msg="CreateContainer within sandbox \"6bbbb5f1cc52fcf4cfdc9a30b10841407063e2940d1f28c0267a5416c6829d71\" for &ContainerMetadata{Name:install-cni,Attempt:90,} returns container id \"f45093fd5bb94ec8288bd89b5e516b149a7026accb8128b33feb9307436de178\""
time="2023-10-30T08:26:26.061518188Z" level=info msg="StartContainer for \"f45093fd5bb94ec8288bd89b5e516b149a7026accb8128b33feb9307436de178\""
time="2023-10-30T08:26:26.247302431Z" level=info msg="StartContainer for \"f45093fd5bb94ec8288bd89b5e516b149a7026accb8128b33feb9307436de178\" returns successfully"
time="2023-10-30T08:26:56.649771890Z" level=info msg="shim disconnected" id=f45093fd5bb94ec8288bd89b5e516b149a7026accb8128b33feb9307436de178 namespace=k8s.io
time="2023-10-30T08:26:56.649946551Z" level=warning msg="cleaning up after shim disconnected" id=f45093fd5bb94ec8288bd89b5e516b149a7026accb8128b33feb9307436de178 namespace=k8s.io
time="2023-10-30T08:26:56.649979677Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T08:26:56.980672346Z" level=info msg="RemoveContainer for \"2cc67728cab446fa29ba03c177bf4dfcf42610127c38055af5a0787bebe056ae\""
time="2023-10-30T08:26:56.980682327Z" level=info msg="RemoveContainer for \"2cc67728cab446fa29ba03c177bf4dfcf42610127c38055af5a0787bebe056ae\""
time="2023-10-30T08:26:56.981037534Z" level=error msg="RemoveContainer for \"2cc67728cab446fa29ba03c177bf4dfcf42610127c38055af5a0787bebe056ae\" failed" error="failed to set removing state for container \"2cc67728cab446fa29ba03c177bf4dfcf42610127c38055af5a0787bebe056ae\": container is already in removing state"
time="2023-10-30T08:26:56.989961786Z" level=info msg="RemoveContainer for \"2cc67728cab446fa29ba03c177bf4dfcf42610127c38055af5a0787bebe056ae\" returns successfully"
time="2023-10-30T08:27:10.881413139Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\""
time="2023-10-30T08:27:11.125558124Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:27:11.128049591Z" level=info msg="stop pulling image docker.io/rancher/mirrored-calico-cni:v3.26.1: active requests=0, bytes read=4431"
time="2023-10-30T08:27:11.133461522Z" level=info msg="ImageUpdate event name:\"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:27:11.138206897Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:27:11.142855993Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:27:11.145215550Z" level=info msg="Pulled image \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" with image id \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\", repo tag \"docker.io/rancher/mirrored-calico-cni:v3.26.1\", repo digest \"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\", size \"93375345\" in 263.650834ms"
time="2023-10-30T08:27:11.145275513Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" returns image reference \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\""
time="2023-10-30T08:27:11.147648676Z" level=info msg="CreateContainer within sandbox \"6bbbb5f1cc52fcf4cfdc9a30b10841407063e2940d1f28c0267a5416c6829d71\" for container &ContainerMetadata{Name:install-cni,Attempt:91,}"
time="2023-10-30T08:27:11.180510703Z" level=info msg="CreateContainer within sandbox \"6bbbb5f1cc52fcf4cfdc9a30b10841407063e2940d1f28c0267a5416c6829d71\" for &ContainerMetadata{Name:install-cni,Attempt:91,} returns container id \"fe89c7082cbea0e01b853c49d00b14e3555d14bf9b2162973fd7e2dd17f2b934\""
time="2023-10-30T08:27:11.181286196Z" level=info msg="StartContainer for \"fe89c7082cbea0e01b853c49d00b14e3555d14bf9b2162973fd7e2dd17f2b934\""
time="2023-10-30T08:27:11.333657460Z" level=info msg="StartContainer for \"fe89c7082cbea0e01b853c49d00b14e3555d14bf9b2162973fd7e2dd17f2b934\" returns successfully"
time="2023-10-30T08:27:19.782598648Z" level=info msg="RemoveContainer for \"9fcc1de07c76330e9e15200231a92b19cda11860783e866fd8f0a999808bb729\""
time="2023-10-30T08:27:19.794766866Z" level=info msg="RemoveContainer for \"9fcc1de07c76330e9e15200231a92b19cda11860783e866fd8f0a999808bb729\" returns successfully"
time="2023-10-30T08:27:19.796414176Z" level=info msg="StopPodSandbox for \"7f82d0be189982bd02a992e1d3f5e52ea0032d064d38ab9a44760aa1ca46d9f7\""
time="2023-10-30T08:27:19.796573409Z" level=info msg="TearDown network for sandbox \"7f82d0be189982bd02a992e1d3f5e52ea0032d064d38ab9a44760aa1ca46d9f7\" successfully"
time="2023-10-30T08:27:19.796636840Z" level=info msg="StopPodSandbox for \"7f82d0be189982bd02a992e1d3f5e52ea0032d064d38ab9a44760aa1ca46d9f7\" returns successfully"
time="2023-10-30T08:27:19.797205747Z" level=info msg="RemovePodSandbox for \"7f82d0be189982bd02a992e1d3f5e52ea0032d064d38ab9a44760aa1ca46d9f7\""
time="2023-10-30T08:27:19.797247524Z" level=info msg="Forcibly stopping sandbox \"7f82d0be189982bd02a992e1d3f5e52ea0032d064d38ab9a44760aa1ca46d9f7\""
time="2023-10-30T08:27:19.797378491Z" level=info msg="TearDown network for sandbox \"7f82d0be189982bd02a992e1d3f5e52ea0032d064d38ab9a44760aa1ca46d9f7\" successfully"
time="2023-10-30T08:27:19.808245557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f82d0be189982bd02a992e1d3f5e52ea0032d064d38ab9a44760aa1ca46d9f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
time="2023-10-30T08:27:19.808353711Z" level=info msg="RemovePodSandbox \"7f82d0be189982bd02a992e1d3f5e52ea0032d064d38ab9a44760aa1ca46d9f7\" returns successfully"
time="2023-10-30T08:27:19.809121820Z" level=info msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\""
time="2023-10-30T08:27:41.788437042Z" level=info msg="shim disconnected" id=fe89c7082cbea0e01b853c49d00b14e3555d14bf9b2162973fd7e2dd17f2b934 namespace=k8s.io
time="2023-10-30T08:27:41.788515485Z" level=warning msg="cleaning up after shim disconnected" id=fe89c7082cbea0e01b853c49d00b14e3555d14bf9b2162973fd7e2dd17f2b934 namespace=k8s.io
time="2023-10-30T08:27:41.788526712Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T08:27:42.080028558Z" level=info msg="RemoveContainer for \"f45093fd5bb94ec8288bd89b5e516b149a7026accb8128b33feb9307436de178\""
time="2023-10-30T08:27:42.080797093Z" level=info msg="RemoveContainer for \"f45093fd5bb94ec8288bd89b5e516b149a7026accb8128b33feb9307436de178\""
time="2023-10-30T08:27:42.080973746Z" level=error msg="RemoveContainer for \"f45093fd5bb94ec8288bd89b5e516b149a7026accb8128b33feb9307436de178\" failed" error="failed to set removing state for container \"f45093fd5bb94ec8288bd89b5e516b149a7026accb8128b33feb9307436de178\": container is already in removing state"
time="2023-10-30T08:27:42.090518534Z" level=info msg="RemoveContainer for \"f45093fd5bb94ec8288bd89b5e516b149a7026accb8128b33feb9307436de178\" returns successfully"
time="2023-10-30T08:27:49.859924623Z" level=error msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\" failed" error="failed to destroy network for sandbox \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:27:49.860959540Z" level=info msg="StopPodSandbox for \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\""
time="2023-10-30T08:28:10.881066917Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\""
time="2023-10-30T08:28:11.103004611Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:28:11.106474890Z" level=info msg="stop pulling image docker.io/rancher/mirrored-calico-cni:v3.26.1: active requests=0, bytes read=4430"
time="2023-10-30T08:28:11.110355778Z" level=info msg="ImageUpdate event name:\"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:28:11.115302445Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:28:11.121963229Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:28:11.124287155Z" level=info msg="Pulled image \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" with image id \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\", repo tag \"docker.io/rancher/mirrored-calico-cni:v3.26.1\", repo digest \"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\", size \"93375345\" in 243.152263ms"
time="2023-10-30T08:28:11.124358216Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" returns image reference \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\""
time="2023-10-30T08:28:11.126893882Z" level=info msg="CreateContainer within sandbox \"6bbbb5f1cc52fcf4cfdc9a30b10841407063e2940d1f28c0267a5416c6829d71\" for container &ContainerMetadata{Name:install-cni,Attempt:92,}"
time="2023-10-30T08:28:11.162117654Z" level=info msg="CreateContainer within sandbox \"6bbbb5f1cc52fcf4cfdc9a30b10841407063e2940d1f28c0267a5416c6829d71\" for &ContainerMetadata{Name:install-cni,Attempt:92,} returns container id \"62b8f47940b09188731b537eca1ca1c7d9a735e4b42684bf6eca084bf0012918\""
time="2023-10-30T08:28:11.162917323Z" level=info msg="StartContainer for \"62b8f47940b09188731b537eca1ca1c7d9a735e4b42684bf6eca084bf0012918\""
time="2023-10-30T08:28:11.350049234Z" level=info msg="StartContainer for \"62b8f47940b09188731b537eca1ca1c7d9a735e4b42684bf6eca084bf0012918\" returns successfully"
time="2023-10-30T08:28:19.907443101Z" level=error msg="StopPodSandbox for \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\" failed" error="failed to destroy network for sandbox \"85099a542a73069471d2ac6430d4fd9cc27b2e9856123393f6c53c7da94b35b3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:28:24.474518207Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:28:24.474784684Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:28:24.474840988Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:28:24.474876267Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:28:41.875611779Z" level=info msg="shim disconnected" id=62b8f47940b09188731b537eca1ca1c7d9a735e4b42684bf6eca084bf0012918 namespace=k8s.io
time="2023-10-30T08:28:41.875687313Z" level=warning msg="cleaning up after shim disconnected" id=62b8f47940b09188731b537eca1ca1c7d9a735e4b42684bf6eca084bf0012918 namespace=k8s.io
time="2023-10-30T08:28:41.875698806Z" level=info msg="cleaning up dead shim" namespace=k8s.io
time="2023-10-30T08:28:42.217611589Z" level=info msg="RemoveContainer for \"fe89c7082cbea0e01b853c49d00b14e3555d14bf9b2162973fd7e2dd17f2b934\""
time="2023-10-30T08:28:42.218094026Z" level=info msg="RemoveContainer for \"fe89c7082cbea0e01b853c49d00b14e3555d14bf9b2162973fd7e2dd17f2b934\""
time="2023-10-30T08:28:42.218240747Z" level=error msg="RemoveContainer for \"fe89c7082cbea0e01b853c49d00b14e3555d14bf9b2162973fd7e2dd17f2b934\" failed" error="failed to set removing state for container \"fe89c7082cbea0e01b853c49d00b14e3555d14bf9b2162973fd7e2dd17f2b934\": container is already in removing state"
time="2023-10-30T08:28:42.229989662Z" level=info msg="RemoveContainer for \"fe89c7082cbea0e01b853c49d00b14e3555d14bf9b2162973fd7e2dd17f2b934\" returns successfully"
time="2023-10-30T08:28:54.520354616Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:28:55.248542924Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:28:55.248641451Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:28:55.248661315Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:28:55.248677076Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:29:19.911784610Z" level=info msg="StopPodSandbox for \"8f197a84f13267572f24848084d6dfebbed795904b409e29c3c16ff488d51593\""
time="2023-10-30T08:29:22.881289310Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\""
time="2023-10-30T08:29:23.051335173Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:29:23.053764121Z" level=info msg="stop pulling image docker.io/rancher/mirrored-calico-cni:v3.26.1: active requests=0, bytes read=4431"
time="2023-10-30T08:29:23.059240643Z" level=info msg="ImageUpdate event name:\"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:29:23.064170683Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni:v3.26.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:29:23.071233619Z" level=info msg="ImageUpdate event name:\"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
time="2023-10-30T08:29:23.073899404Z" level=info msg="Pulled image \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" with image id \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\", repo tag \"docker.io/rancher/mirrored-calico-cni:v3.26.1\", repo digest \"docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7\", size \"93375345\" in 192.54606ms"
time="2023-10-30T08:29:23.073968923Z" level=info msg="PullImage \"docker.io/rancher/mirrored-calico-cni:v3.26.1\" returns image reference \"sha256:9dee260ef7f5990aaf6e8f6767b767366c27a6abbf79ba8dba45ff3290bd5de0\""
time="2023-10-30T08:29:23.077203755Z" level=info msg="CreateContainer within sandbox \"6bbbb5f1cc52fcf4cfdc9a30b10841407063e2940d1f28c0267a5416c6829d71\" for container &ContainerMetadata{Name:install-cni,Attempt:93,}"
time="2023-10-30T08:29:23.111431243Z" level=info msg="CreateContainer within sandbox \"6bbbb5f1cc52fcf4cfdc9a30b10841407063e2940d1f28c0267a5416c6829d71\" for &ContainerMetadata{Name:install-cni,Attempt:93,} returns container id \"d6ce8febd2f20e562fa4f3de39cb9bce02645e52d85b8e13eefd038840830475\""
time="2023-10-30T08:29:23.112332213Z" level=info msg="StartContainer for \"d6ce8febd2f20e562fa4f3de39cb9bce02645e52d85b8e13eefd038840830475\""
time="2023-10-30T08:29:23.287154938Z" level=info msg="StartContainer for \"d6ce8febd2f20e562fa4f3de39cb9bce02645e52d85b8e13eefd038840830475\" returns successfully"
time="2023-10-30T08:29:25.290493828Z" level=error msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\" failed" error="failed to destroy network for sandbox \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.38.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.38.0.1:443: i/o timeout"
time="2023-10-30T08:29:25.320032783Z" level=info msg="StopPodSandbox for \"10ea410a874754dfc0635326052fb1857f64fc1f44373b99bbeab73e05d30ac6\""
time="2023-10-30T08:29:25.320145127Z" level=info msg="Container to stop \"3a51b223c2ad282e67e78a87cc616d961a3860308876a113cc30aa0106c49421\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:29:25.320166805Z" level=info msg="Container to stop \"aa2b1cd5f7b817f80e8b6b43a0bd2f7e96ec0ad13ce00d7bfb5c5777a46b6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
time="2023-10-30T08:29:25.320183481Z" level=info msg="Container to stop \"f0929ef526a002400ee9e59dd5fe95f7677e724842d02f8e20a2b88c61435d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""

When that happens, I get out the big hammer:
sudo /var/lib/rancher/rke2/bin/crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock pods -o yaml --label component=kube-proxy | grep '\bid\b' | awk -F': ' '{print $2}' | xargs -I {} sudo /var/lib/rancher/rke2/bin/crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock rmp -f {}

and restart the service for the kube-proxy to come back.
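The grep/awk stage of that one-liner just pulls sandbox IDs out of the `crictl pods -o yaml` output, so it can be sanity-checked in isolation. A minimal sketch, run against a fabricated YAML fragment (no cluster needed):

```shell
# Sketch: the ID-extraction stage of the one-liner above, exercised
# against a fabricated crictl YAML fragment so it runs without a cluster.
extract_ids() {
    grep '\bid\b' | awk -F': ' '{print $2}'
}

sample='items:
- id: e505595e4af3ccb8e46c9dda0a79ce997b89c33529600708a389b7874b386ad3
  state: SANDBOX_READY'

printf '%s\n' "$sample" | extract_ids
# prints only the sandbox id
```

In practice `crictl pods -q` (quiet mode) prints pod IDs directly, which avoids parsing YAML at all.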

@jhoelzel

So I have been able to narrow this down further:

  • I have weekly updates scheduled. Since I'm using MicroOS they are applied in the background and all that's left is to restart the node, for which I used Kured.
  • Kured does in fact restart the node, and the agent comes back without issues.
  • However, the kube-proxy remains in "failed" state:
sudo /var/lib/rancher/rke2/bin/kubectl --kubeconfig=/etc/rancher/rke2/rke2.yaml get pods -A -o json | jq 'select(.items != null) | .items[] | select(.metadata.name | contains("kube-proxy")) | select(.status.phase == "Failed" or .status.phase == "Error" or any(.status.containerStatuses[]; .state.terminated != null)) | {nodeName: .spec.nodeName, podName: .metadata.name, conditionMessage: (if .status.conditions? then .status.conditions[] | select(.message != null) | .message else "Not available" end)}'
{
  "nodeName": "nyc3-postgres-1-ujvng",
  "podName": "kube-proxy-nyc3-postgres-1-ujvng",
  "conditionMessage": "Pod was terminated in response to imminent node shutdown."
}
{
  "nodeName": "nyc3-system-2-vtkux",
  "podName": "kube-proxy-nyc3-system-2-vtkux",
  "conditionMessage": "Pod was terminated in response to imminent node shutdown."
}
{
  "nodeName": "nyc3-worker-0-h7alp",
  "podName": "kube-proxy-nyc3-worker-0-h7alp",
  "conditionMessage": "Pod was terminated in response to imminent node shutdown."
}

Because the kube-proxy does not come back, the rest of the system remains pending, restarting, or crashing: pods cannot reach the kubernetes service, whose IP is managed by the proxy, and henceforth nothing really works.

The uptime of the systems suggests they were properly restarted, and all my nodes are also configured for a graceful shutdown of 60 seconds.
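For reference, that graceful shutdown window maps to the kubelet's graceful node shutdown settings — the same feature that emits the "Pod was terminated in response to imminent node shutdown" condition shown above. A minimal KubeletConfiguration sketch; 60s matches the value stated here, while the split reserved for critical pods is an illustrative assumption:

```yaml
# Sketch of a KubeletConfiguration fragment. shutdownGracePeriod matches
# the 60-second window mentioned above; the 10s reserved for critical
# pods is an assumption for illustration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 60s
shutdownGracePeriodCriticalPods: 10s
```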

Solutions so far:

  • delete the pod with force
  • delete the remnant with crictl
  • restart rke2-agent service
  • be happy.

@jhoelzel

jhoelzel commented Nov 13, 2023

Sometimes this still doesn't work, and afterwards the kube-proxy remains pending:
Warning  FailedMount  87s  kubelet  Unable to attach or mount volumes: unmounted volumes=[file3 file0 file1 file2], unattached volumes=[file3 file0 file1 file2]: timed out waiting for the condition

Also deleting the pod without force does not work:

sudo /var/lib/rancher/rke2/bin/kubectl --kubeconfig=/etc/rancher/rke2/rke2.yaml delete pod -n kube-system                kube-proxy-nyc3-postgres-1-ujvng  -v 6
I1113 11:50:46.230839   25828 loader.go:373] Config loaded from file:  /etc/rancher/rke2/rke2.yaml
I1113 11:50:46.250681   25828 round_trippers.go:553] DELETE https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-proxy-nyc3-postgres-1-ujvng 200 OK in 5 milliseconds
pod "kube-proxy-nyc3-postgres-1-ujvng" deleted
I1113 11:50:46.253334   25828 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/kube-proxy-nyc3-postgres-1-ujvng 200 OK in 2 milliseconds
I1113 11:50:46.254201   25828 reflector.go:205] Reflector from k8s.io/client-go/tools/watch/informerwatcher.go:146 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I1113 11:50:46.254233   25828 reflector.go:221] Starting reflector *unstructured.Unstructured (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I1113 11:50:46.254241   25828 reflector.go:257] Listing and watching *unstructured.Unstructured from k8s.io/client-go/tools/watch/informerwatcher.go:146
I1113 11:50:46.255587   25828 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods?fieldSelector=metadata.name%3Dkube-proxy-nyc3-postgres-1-ujvng&limit=500&resourceVersion=0 200 OK in 1 milliseconds
I1113 11:50:46.257832   25828 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-proxy-nyc3-postgres-1-ujvng&resourceVersion=6418295&timeoutSeconds=338&watch=true 200 OK in 0 milliseconds
I1113 11:50:46.355104   25828 shared_informer.go:300] caches populated

That is stuck until I cancel it (and the pod is now not failed but stuck terminating).

When I do it with force, the command runs, but the pod is still not able to actually access the volume mounts on its local disk and goes back to pending.

At that point (which does not always happen) only force-deleting the pod, deleting the container with crictl, and actually restarting again seems to fix it. But there does not seem to be a reasonable way to script all of that except an extra operator :(
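For what it's worth, the three manual steps can be strung together in a script. This is a hypothetical sketch — the `fix_kube_proxy` helper and its dry-run wrapper are mine, not part of RKE2 — and DRY_RUN=1 only prints the commands so the sequence can be checked safely before running it for real:

```shell
#!/bin/sh
# Hypothetical remediation sketch for the steps described above:
# force-delete the mirror pod, remove leftover sandboxes, restart the agent.
KUBECTL="/var/lib/rancher/rke2/bin/kubectl --kubeconfig=/etc/rancher/rke2/rke2.yaml"
CRICTL="/var/lib/rancher/rke2/bin/crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock"

# run: print the command, and execute it unless DRY_RUN=1 is set.
run() {
    echo "+ $*"
    [ "${DRY_RUN:-0}" = 1 ] || eval "$*"
}

fix_kube_proxy() {
    node="$1"
    run "$KUBECTL delete pod -n kube-system kube-proxy-$node --force --grace-period=0"
    run "$CRICTL pods --label component=kube-proxy -q | xargs -r $CRICTL rmp -f"
    run "systemctl restart rke2-agent"
}

# Dry run against a placeholder node name: prints the three commands
# without executing them.
DRY_RUN=1 fix_kube_proxy nyc3-postgres-1-ujvng
```

On server nodes the last step would be `systemctl restart rke2-server` instead.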

@stephanbertl

stephanbertl commented Nov 14, 2023

We are experiencing the same issue: kube-proxy (v1.26.8-rke2r1-build20230824) is stuck, and only this fixes it (temporarily):

rm /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml
systemctl restart rke2-agent

Happens across different types of nodes, all running Ubuntu 22.04

Sometimes we see these events on the kube-proxy:

Events:
  Type     Reason          Age   From     Message
  ----     ------          ----  ----     -------
  Normal   SandboxChanged  25m   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          25m   kubelet  Container image "rancher/hardened-kubernetes:v1.25.13-rke2r1-build20230824" already present on machine
  Normal   Created         25m   kubelet  Created container kube-proxy
  Normal   Started         25m   kubelet  Started container kube-proxy
  Warning  FailedMount     22m   kubelet  Unable to attach or mount volumes: unmounted volumes=[file2 file3 file0 file1], unattached volumes=[file2 file3 file0 file1]: timed out waiting for the condition
  Normal   Killing         12m   kubelet  Stopping container kube-proxy
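For anyone hitting the same SandboxChanged/FailedMount loop, the state can also be checked from the node itself through RKE2's embedded containerd socket (the socket path is the one shown earlier in this issue; the wrapper function and its fallback are my own sketch, not something from the thread):

```shell
#!/bin/sh
# Sketch only: inspect the kube-proxy static pod via RKE2's embedded
# containerd. Degrades gracefully on machines without crictl installed.
inspect_kube_proxy() {
  if ! command -v crictl >/dev/null 2>&1; then
    echo "crictl not found on this node"
    return 0
  fi
  export CONTAINER_RUNTIME_ENDPOINT="unix:///var/run/k3s/containerd/containerd.sock"
  # Pod sandboxes; SANDBOX_READY while nothing actually runs is the symptom here
  crictl pods --namespace kube-system || true
  # All kube-proxy containers, including exited ones
  crictl ps -a --name kube-proxy || true
}

inspect_kube_proxy
```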

@brandond
Contributor

@stephanbertl can you provide information on anything that might be unique about your environment? Being able to reproduce this on demand would be very helpful in isolating the root cause.

What is going on prior to kube-proxy getting stuck? How many cpus / how much memory do these nodes have? Is this happening only on agents, or also on servers?

@brandond
Contributor

brandond commented Dec 15, 2023

I am curious to see if the fix from kubernetes/kubernetes#122211 resolves or improves this, as it related to handling mounts that are in an unknown state during startup - and that is specifically what the static pods are getting stuck on.

@brandond
Contributor

brandond commented Dec 15, 2023

@FlorianFuss
From my understanding with the resolution of #3725 the kube-proxy often gets a new UID (.metadata.uid) in the static-pod manifest when the rke2-agent (re)starts.

It should only get a new UID when the configuration is changed, RKE2 is upgraded, or an etcd restore is done (on servers running etcd). The pod manifest should NOT change following a restart of the service or reboot of the node with no other change to state.

The UID change is necessary to update the mirror pod in the apiserver - if the UID does not change, the mirror pod displayed in the apiserver will not be updated. The kubelet SHOULD NOT be trying to run the pod based on the mirror pod spec. The desired state of the pod 100% comes from the static pod manifest, and the pod resource displayed in the apiserver is a read-only "mirror" of that.

I agree with you that there appears to be a timing element to this, that's what I was referring to in #4864 (comment)

@jhoelzel

jhoelzel commented Dec 24, 2023

This now happens every Sunday for me. Removing the manifest and rebooting works most of the time.
By now I'm thinking about a service that starts before the rke2 service simply to remove the kube-proxy manifest.

Overall, though, I do believe kube-proxy should be a DaemonSet again.
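For what it's worth, the "remove the manifest before rke2 starts" idea could be expressed as a systemd drop-in rather than a separate service. This is only an untested sketch of that idea (it codifies the workaround, not a fix), using the agent manifest path mentioned elsewhere in this thread:

```
# /etc/systemd/system/rke2-agent.service.d/10-drop-kube-proxy-manifest.conf
# Untested sketch: delete the stale kube-proxy static pod manifest before each
# start so the agent writes a fresh one. The leading "-" makes a missing file
# non-fatal. Activate with: systemctl daemon-reload
[Service]
ExecStartPre=-/usr/bin/rm -f /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml
```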

Edit:
I realized that you cannot even automate this with kured, as masters and agents have different manifest paths,
so using kured like this:

          rebootCommand: "rm -f /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml && /bin/systemctl reboot"

does not even work.

Personally I blame #3752, which is simply overcautious and wrong. Either make it a DaemonSet or do proper cleanup. You are optimizing for the case where you need to restart rke2 without having run updates, and you could easily provide a flag for that. The potential service disruption is also limited to maybe 30 seconds.

@brandond
Contributor

brandond commented Dec 24, 2023

It looks like kubernetes/kubernetes#122211 won't be available until the January patch releases.

personally i blame: #3752 which is simply overcautious and wrong.

Thanks? I don't believe it's either. It should be possible to update the static pod manifest at any time. If the kubelet can be forced out of sync by writing the file at the wrong time, then there's a bug in the kubelet. We don't want random rke2 service restarts triggering static pod restarts if the config has not changed; our goal is to leave the components undisturbed to the largest extent possible.

@jhoelzel

First of all, if you feel personally attacked, that was not my intention; I very much value your work and can only aspire to contribute as much to this project as you do someday. I am only trying to voice my opinion to get around this problem, as it has literally haunted me over Christmas.

My pods are not stuck in "Terminating"; they simply fail and don't come back. They are only stuck terminating once I try to delete the failed pod through kubectl. The solution of deleting the static manifest and then restarting the service has worked for me without fail over the course of the last two days.

I believe this is two issues interacting with each other, and that it should be expected behavior that when I restart rke2, the static pods it provides restart as well. You may correct me here, but the only use case where the pods should not restart with the service is minor configuration changes, which may occur yearly? Proper pod disruption budgeting might even completely work around the possible downtime (I have not tried that with static pods).

I don't believe upstream has this problem in the same way, as the proxy is operated by a DaemonSet, so a pod with another container ID is simply spawned and that's that.

I do understand the decision-making behind why rke2 uses a static pod, and I actually do support it, because having different data directories for server and agent makes it really obvious what I'm currently working on. But then again, the proxy should simply be recycled on restart.

I also don't think the kubelet is failing, because it provisions CSI and networking daemonsets just fine. It just happens that those of course depend on kube-proxy and therefore crashloop. If we were not talking about running production clusters, I would simply switch to a kube-proxy-less Calico setup, but it's actually still best practice to use both.

Or maybe I simply don't understand why there would be "random rke2 service restarts" at all, if they do not point to a larger problem.

@jhoelzel

jhoelzel commented Dec 26, 2023

I'm now trying a new strategy using kured where I delete the static pod manifest pre-reboot; I'll let you know how it works out for me next Sunday:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kured
  namespace: argocd
  labels:
    app.kubernetes.io/managed-by: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  project: system
  source:
    repoURL: https://kubereboot.github.io/charts
    targetRevision: 5.2.0
    chart: kured
    helm:
      values: |
        maxUnavailable: 1
        configuration:
          rebootDays: [sun]
          rebootCommand: "rm -f /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml && /bin/systemctl reboot"
          pre-reboot-node-labels:
            - "kured=rebooting"
          post-reboot-node-labels:
            - "kured=done"
          #lockTtl: "20m"
          forceReboot: false
          period: "1h0m0s"
          concurrency: 1
          lockReleaseDelay: "10m"
          messageTemplateDrain: "⏳ Draining node %s"
          messageTemplateReboot: "♻️ Rebooted node %s"
          messageTemplateUncordon: "Node is back online"
          logFormat: "json"
          annotateNodes: true
        tolerations:
          - key: "workload"
            operator: "Equal"
            value: "system"
            effect: "NoSchedule"
          - key: "workload"
            operator: "Equal"
            value: "postgres"
            effect: "NoSchedule"
  destination:
    server: https://kubernetes.default.svc
    namespace: kured
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true

@Tejeev

Tejeev commented Dec 28, 2023

So we've run into this in one of my customers' environments, and my suggestion was to add a script via bootstrap that runs every few minutes (via cronjob) to check whether kube-proxy is running. If not, it just runs the config removal and agent restart:

rm /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml
systemctl restart rke2-agent

@jhoelzel I'm not sure I fully understand the problem, so please forgive me if this is a stupid suggestion, but could this work for you (or is there something obviously wrong with the idea that I should flag to my customer)?
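A minimal version of that watchdog could look like this. This is a sketch, not the script we deployed verbatim; it assumes pgrep is available and that the node runs rke2-agent (not rke2-server), and the DRY_RUN guard (defaulting to on here) is mine, added so the destructive part doesn't fire by accident:

```shell
#!/bin/sh
# Sketch of the cron watchdog described above. With DRY_RUN=1 (the default
# here) it only reports what it would do instead of touching the node.
MANIFEST="/var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml"
SERVICE="rke2-agent"

kube_proxy_running() {
  # pgrep exits 0 if a process named exactly "kube-proxy" exists
  pgrep -x kube-proxy >/dev/null 2>&1
}

apply_workaround() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would remove $MANIFEST and restart $SERVICE"
  else
    rm -f "$MANIFEST"
    systemctl restart "$SERVICE"
  fi
}

if kube_proxy_running; then
  echo "kube-proxy is running; nothing to do"
else
  apply_workaround
fi
```

Scheduled via cron it would be something like `*/5 * * * * DRY_RUN=0 /usr/local/bin/kube-proxy-watchdog.sh` (path and interval are placeholders).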

@stephanbertl

To add some information: after updating the RKE2 minor version from 1.26.8 to 1.26.11, all pods get updated except the kube-proxy pods on agent nodes (server node pods do get updated). They stay on their old version. They also show this:

 Warning  FailedMount  2m19s  kubelet  Unable to attach or mount volumes: unmounted volumes=[file0 file1 file2 file3], unattached volumes=[file0 file1 file2 file3]: timed out waiting for the condition

The only fix is to manually remove the static pod manifest. Then the new image is used.

rm /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml
systemctl restart rke2-agent

@jhoelzel

jhoelzel commented Jan 7, 2024

Im trying now a new strategy using kured where i delete the static pod pre reboot, ill let you know how it works out for me next sunday:

So that didn't work, and I was left with a couple of nodes without kube-proxy running, which in the end was much worse.
I'm thinking of scrapping the auto-update altogether for now, since the zypper packages for rke2 have been excluded from refresh by default for a couple of versions now, I guess:

To conclude: I still have no idea why this happens, but yes, removing the manifest and restarting the service fixes it. IMHO kube-proxy should be a DaemonSet and recover itself. If the master nodes need to be different, there is always the option of using two DaemonSets.

@Martin-Weiss

Martin-Weiss commented Jan 10, 2024

FYI - just ran into exactly this problem as well during the upgrade of RKE2 1.25.16 to 1.26.11.

This workaround helped for a moment:

rm /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml
systemctl restart rke2-agent

-> In my case this issue did not show up on the "rke2-server" side of things, just on the "rke2-agent" side (all agents).

Would it make sense to add this issue to the list of known issues in RKE2 1.26?

@Tejeev

Tejeev commented Jan 23, 2024

So just an update: the addition of the simple bash script that checks whether kube-proxy is running and, if not, runs the config removal and agent restart successfully worked around the problem, and I've not been made aware of any negative repercussions.

@jhoelzel

jhoelzel commented Jan 23, 2024

So just an update: the addition of the simple bash script that checks whether kube-proxy is running and, if not, runs the config removal and agent restart successfully worked around the problem, and I've not been made aware of any negative repercussions.

I think it's the best solution we have currently. My kured pre-execution script only made it worse, and I am back to manual updates for now. We will see; hopefully brandond is right and this will show up with the next patch release: kubernetes/kubernetes#122211

@dfaltum

dfaltum commented Feb 28, 2024

Hi everyone!

I can reproduce the problem with v1.25.16+rke2r1 and v1.26.13+rke2r1. Workaround: rke2-killall and service restart.

@brandond brandond added this to the v1.29.4+rke2r1 milestone Mar 22, 2024
@brandond brandond changed the title kube-proxy is unable to start in 1.26.9+rke2r1 kube-proxy pod fails to run due to unmounted volumes Mar 22, 2024
@SebPlv

SebPlv commented Apr 18, 2024

Hi,

We are also facing the problem when upgrading from v1.25.14 to v1.26.14.
But we have over 30 clusters with thousands of workloads, so doing an "rke2-killall and service restart" cannot be a solution. As we have stateful workloads, we need to drain each node before restarting the service.
Since we use Rancher to deploy/upgrade our clusters, could there be a solution managed by Rancher during the upgrade?

@VestigeJ
Contributor

VestigeJ commented Apr 18, 2024

Community, please feel free to try and test these fixes (not in prod), as I am unable to reproduce the original errors. I can see, though, that the agent process is pulling and rewriting the kube-proxy.yaml on reboots, but we already have this behavior on existing releases, so it isn't net new.

Are you all able to divulge how many volumes you have mounted in these large clusters? Can you expand on how resource-constrained the nodes may be? I'm trying to figure out where exactly the bottleneck is so I can reproduce it, but I can't do so with my limited testing environments.

https://www.suse.com/fr-fr/support/kb/doc/?id=000021284

// existing behavior - I have basically followed the steps associated with getting the various PVs in a cluster (trying to orphan them...), then intentionally wrecking the kube-proxy.yaml, then immediately rebooting.

//after the reboot
$ sudo cat /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml
cat: /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml: No such file or directory
$ sudo cat /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-proxy
    tier: control-plane
  name: kube-proxy
  namespace: kube-system
  uid: 0237
spec:
  containers:
  - args:
    - --cluster-cidr=10.42.0.0/16
    - --conntrack-max-per-core=0
    - --conntrack-tcp-timeout-close-wait=0s
    - --conntrack-tcp-timeout-established=0s
    - --healthz-bind-address=127.0.0.1
    - --hostname-override=ip-1-1-1-54
    - --kubeconfig=/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
    - --proxy-mode=iptables
    command:
    - kube-proxy
    env:
    - name: FILE_HASH
      value: d12034e965e
    - name: NO_PROXY
      value: .svc,.cluster.local,10.42.0.0/16,10.43.0.0/16
    image: index.docker.io/rancher/hardened-kubernetes:v1.29.3-rke2r1-build20240315
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: localhost
        path: /healthz
        port: 10256
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-proxy
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
    securityContext:
      privileged: true
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: localhost
        path: /healthz
        port: 10256
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 5
    volumeMounts:
    - mountPath: /var/lib/rancher/rke2/agent/client-kube-proxy.crt
      name: file0
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/agent/client-kube-proxy.key
      name: file1
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
      name: file2
      readOnly: true
    - mountPath: /var/lib/rancher/rke2/agent/server-ca.crt
      name: file3
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/rancher/rke2/agent/client-kube-proxy.crt
      type: File
    name: file0
  - hostPath:
      path: /var/lib/rancher/rke2/agent/client-kube-proxy.key
      type: File
    name: file1
  - hostPath:
      path: /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
      type: File
    name: file2
  - hostPath:
      path: /var/lib/rancher/rke2/agent/server-ca.crt
      type: File
    name: file3
status: {}

$ kg pv,pvc -o wide -A

NAME                             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS    VOLUMEATTRIBUTESCLASS   REASON   AGE     VOLUMEMODE
persistentvolume/block-pv        10Gi       RWO            Retain           Released    default/block-pvc                   <unset>                          5h9m    Block
persistentvolume/local-storage   1Gi        RWO            Recycle          Available                       local-storage   <unset>                          3h42m   Filesystem
persistentvolume/vol             3Gi        RWX            Retain           Available                       manual          <unset>                          3h43m   Filesystem
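Worth noting for anyone debugging this: the four unmounted volumes in the FailedMount events (file0..file3) are plain hostPath files from the manifest above, so a quick node-side sanity check is just testing for them. A sketch (paths taken from the manifest; the helper function is mine):

```shell
#!/bin/sh
# Sketch: check the hostPath sources behind file0..file3 in the kube-proxy
# static pod manifest. On a healthy RKE2 agent all four should exist.
check_kube_proxy_files() {
  dir="/var/lib/rancher/rke2/agent"
  for f in client-kube-proxy.crt client-kube-proxy.key kubeproxy.kubeconfig server-ca.crt; do
    if [ -f "$dir/$f" ]; then
      echo "ok: $dir/$f"
    else
      echo "missing: $dir/$f"
    fi
  done
}

check_kube_proxy_files
```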

@jhoelzel

jhoelzel commented Apr 19, 2024

Those of you who can: just use Cilium in kube-proxy-less mode. I didn't really have any luck with kube-proxy-less Calico, but Cilium worked right out of the box.

@rancher-max
Contributor

I'm going to close this out based on the fixes made and repro attempts, but we can reopen if it is still noticed after these releases: v1.29.4, v1.28.9, and v1.27.13.
