calico pods report an error of no route to host
#8764
Comments
I have the same question:

Warning  Unhealthy  113m (x7 over 114m)  kubelet  Readiness probe failed: Error initializing datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: no route to host
When I shut down the firewall, the error disappeared, but I need calico to work while the firewall is running.
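Not stated in the thread, but since disabling the firewall makes the error go away, a common alternative is to open Calico's documented ports instead of turning the firewall off. A sketch assuming the nodes run firewalld (the exact set depends on the dataplane in use):

```shell
# Sketch: allow Calico/Kubernetes traffic through firewalld instead of
# disabling it entirely. Ports follow Calico's network requirements docs;
# adjust for your dataplane (BGP vs VXLAN vs IPIP).
firewall-cmd --permanent --add-port=179/tcp    # BGP peering
firewall-cmd --permanent --add-port=4789/udp   # VXLAN, if VXLAN mode is used
firewall-cmd --permanent --add-port=5473/tcp   # Typha, if deployed
firewall-cmd --permanent --add-port=6443/tcp   # Kubernetes API server

# IP-in-IP is its own IP protocol (number 4), not a TCP/UDP port:
firewall-cmd --permanent --add-rich-rule='rule protocol value="4" accept'

firewall-cmd --reload
```

This is only a starting point; the `no route to host` to the service IP 10.96.0.1 can also come from firewalld REJECT rules on forwarded traffic, so checking `firewall-cmd --list-all` and the node's FORWARD chain is worthwhile before assuming a port problem.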
@willzhang do you use calico VPP?
No, just calico IPIP installed with helm.
@willzhang could you provide any logs from the failing pods? Why are they failing? It does not seem obvious why IRQ affinity would have such an effect, but perhaps there is some misconfiguration of network devices? Are you using some overlay? Are queues on the overlay assigned properly? I think
Expected Behavior

When configuring CPU irqaffinity in /etc/default/grub, the calico pods run normally.

Current Behavior

When configuring CPU irqaffinity in /etc/default/grub, the calico pods go into CrashLoopBackOff and report an error of no route to host.

what changed

calico apiserver logs

calico kube-controller logs
Possible Solution

Removing the kernel parameter CPU irqaffinity restores calico to normal operation, but we need this parameter for CPU isolation to improve performance.

Steps to Reproduce (for bugs)
Boot with irqaffinity=0,10 in the kernel options; the calico pods then report no route to host.
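For reproduction, the grub change might look like the fragment below. This is a sketch: the rest of the GRUB_CMDLINE_LINUX contents are machine-specific and not given in the report.

```shell
# /etc/default/grub (fragment) — pin hardware IRQ handling to CPUs 0 and 10,
# leaving the remaining CPUs free for the latency-sensitive VPP workload.
GRUB_CMDLINE_LINUX="... irqaffinity=0,10"

# Then regenerate the grub config and reboot:
#   sudo update-grub    # Debian/Ubuntu
#   sudo reboot
```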
I also use reservedSystemCPUs in the kubelet config for system processes.
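The kubelet side of that setup might look like this KubeletConfiguration fragment. The CPU list "0,10" is an assumption chosen to match irqaffinity=0,10 above; the reporter did not post their actual file.

```yaml
# /var/lib/kubelet/config.yaml (fragment)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Static policy lets pods with integer CPU requests get exclusive cores.
cpuManagerPolicy: static
# Reserve CPUs 0 and 10 for system daemons and interrupt handling,
# matching the irqaffinity=0,10 kernel parameter.
reservedSystemCPUs: "0,10"
```

With this pairing, IRQs and system daemons share CPUs 0 and 10 while the remaining cores stay isolated for exclusive workloads such as VPP.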
Context
I need to isolate a portion of CPUs exclusively for the VPP application, so I use irqaffinity to concentrate CPU interrupts on the other CPUs, e.g. CPUs 0 and 10.
Your Environment

Calico version: v3.26.1, installed using helm with the calico operator.
Kubernetes version: v1.25.11, containerd, just one master node.
OS: ubuntu 22.04