Hi, I just upgraded Rancher from 2.4.8 to 2.5.2.
The Rancher installation runs on Docker, and the same node was the controlplane of a 2-node cluster (1 controlplane, 1 worker), Kubernetes 1.18.10, deployed via the Rancher UI.
After the upgrade, the load on the node became uncontrollable, so I promoted the worker to controlplane and tried to convert the first node to a worker.
After multiple attempts, I'm still not able to get that node working as a worker (I receive multiple errors about ephemeral storage; I don't know why).
So I removed it and kept the cluster with only 1 node.
Now, on this node, the kubelet log is flooded with these messages:
The issue seems to be related to #30045.
As suggested by @superseb on Slack, I disabled project network isolation and the log flood is gone. Node load is back to normal as well.
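For anyone hitting the same problem: besides the UI toggle, project network isolation corresponds to the `enableNetworkPolicy` flag in the Rancher cluster spec. A minimal sketch of the relevant part of the cluster object, assuming the `management.cattle.io/v3` API group and a placeholder cluster name (verify the field against your Rancher version):

```yaml
# Hypothetical excerpt of a Rancher-provisioned cluster spec.
# Setting enableNetworkPolicy to false disables project network
# isolation, which is what stopped the kubelet log flood here.
apiVersion: management.cattle.io/v3
kind: Cluster
metadata:
  name: my-cluster        # placeholder; use your cluster's ID
spec:
  enableNetworkPolicy: false
```

In the Rancher UI, the same setting appears as "Project Network Isolation" in the cluster's networking options.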
At the same time, both the kubelet and the api-server consume a lot of the node's resources.
Any idea why this is happening and how to resolve it?
The cluster is working, and the nginx ingress too.
Thanks