Network Unreachable Error When Pulling calico/cni:v3.27.3 #18907

Closed
nnzv opened this issue May 16, 2024 · 3 comments

Comments

nnzv commented May 16, 2024

What Happened?

When starting Minikube with the kvm2 driver and the Calico CNI (--cni calico), image pulls consistently fail: the errors indicate a network-unreachable problem while fetching the calico/cni:v3.27.3 image from Docker Hub.

To reproduce, run the command below and watch Minikube attempt to initialize the Kubernetes cluster. The expected behavior is that the cluster comes up with the kvm2 driver and the Calico CNI, without any network issues during image pulls.
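
minikube start --driver=kvm2 --cni calico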

kubectl describe po -l=k8s-app=calico-node -n kube-system 
Name:                 calico-node-5hb54
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      calico-node
Node:                 minikube/192.168.39.126
Start Time:           Wed, 15 May 2024 21:02:46 -0500
Labels:               controller-revision-hash=7cdf66d5f5
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   192.168.39.126
IPs:
  IP:           192.168.39.126
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  
    Image:         docker.io/calico/cni:v3.27.3
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/calico-ipam
      -upgrade
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      KUBERNETES_NODE_NAME:        (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:  <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
    Mounts:
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/lib/cni/networks from host-local-net-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jqs2s (ro)
  install-cni:
    Container ID:  
    Image:         docker.io/calico/cni:v3.27.3
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/cni/bin/install
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
      CNI_MTU:               <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      SLEEP:                 false
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jqs2s (ro)
  mount-bpffs:
    Container ID:  
    Image:         docker.io/calico/node:v3.27.3
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      calico-node
      -init
      -best-effort
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /nodeproc from nodeproc (ro)
      /sys/fs from sys-fs (rw)
      /var/run/calico from var-run-calico (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jqs2s (ro)
Containers:
  calico-node:
    Container ID:   
    Image:          docker.io/calico/node:v3.27.3
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
    Liveness:   exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=10s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=10s period=10s #success=1 #failure=3
    Environment Variables from:
      kubernetes-services-endpoint  ConfigMap  Optional: true
    Environment:
      DATASTORE_TYPE:                     kubernetes
      WAIT_FOR_DATASTORE:                 true
      NODENAME:                            (v1:spec.nodeName)
      CALICO_NETWORKING_BACKEND:          <set to the key 'calico_backend' of config map 'calico-config'>  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      IP:                                 autodetect
      CALICO_IPV4POOL_IPIP:               Always
      CALICO_IPV4POOL_VXLAN:              Never
      CALICO_IPV6POOL_VXLAN:              Never
      FELIX_IPINIPMTU:                    <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_VXLANMTU:                     <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      FELIX_WIREGUARDMTU:                 <set to the key 'veth_mtu' of config map 'calico-config'>  Optional: false
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  false
      FELIX_HEALTHENABLED:                true
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/bpf from bpffs (rw)
      /var/lib/calico from var-lib-calico (rw)
      /var/log/calico/cni from cni-log-dir (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/nodeagent from policysync (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jqs2s (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 False 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  var-run-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/calico
    HostPathType:  
  var-lib-calico:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/calico
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  sys-fs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/
    HostPathType:  DirectoryOrCreate
  bpffs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  Directory
  nodeproc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  
  cni-bin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
  cni-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  cni-log-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/calico/cni
    HostPathType:  
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:  
  policysync:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/nodeagent
    HostPathType:  DirectoryOrCreate
  kube-api-access-jqs2s:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Normal   Scheduled         12m                   default-scheduler  Successfully assigned kube-system/calico-node-5hb54 to minikube
  Warning  Failed            12m (x2 over 12m)     kubelet            Failed to pull image "docker.io/calico/cni:v3.27.3": Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 8.8.4.4:53: dial udp 8.8.4.4:53: connect: network is unreachable
  Warning  DNSConfigForming  12m (x5 over 12m)     kubelet            Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 8.8.4.4
  Normal   Pulling           12m (x3 over 12m)     kubelet            Pulling image "docker.io/calico/cni:v3.27.3"
  Warning  Failed            12m (x3 over 12m)     kubelet            Error: ErrImagePull
  Warning  Failed            12m                   kubelet            Failed to pull image "docker.io/calico/cni:v3.27.3": Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 1.0.0.1:53: dial udp 1.0.0.1:53: connect: network is unreachable
  Normal   BackOff           11m (x4 over 12m)     kubelet            Back-off pulling image "docker.io/calico/cni:v3.27.3"
  Warning  Failed            11m (x4 over 12m)     kubelet            Error: ImagePullBackOff
  Warning  DNSConfigForming  2m36s (x45 over 12m)  kubelet            Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1

Instead, Minikube fails to pull the calico/cni:v3.27.3 image with a network-unreachable error, and the cluster never starts. The problem appears specific to the combination of the kvm2 driver and the Calico CNI, as it has not been observed with other configurations; other CNIs might not exhibit this behavior, but further testing would be needed to confirm.
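
To double-check the DNS failure from inside the VM, a possible diagnostic (a sketch, not part of the original report; it assumes the guest image ships a BusyBox nslookup) is to query the same nameserver the kubelet used:

minikube ssh -- nslookup registry-1.docker.io 8.8.4.4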

Attach the log file

log.txt

Operating System

NAME=Gentoo
ID=gentoo
PRETTY_NAME="Gentoo Linux"
ANSI_COLOR="1;32"
HOME_URL="https://www.gentoo.org/"
SUPPORT_URL="https://www.gentoo.org/support/"
BUG_REPORT_URL="https://bugs.gentoo.org/"
VERSION_ID="2.15"

Driver

Linux Kernel 6.7.0
QEMU emulator version 8.2.2
Copyright (c) 2003-2023 Fabrice Bellard and the QEMU Project developers
libvirtd (libvirt) 10.1.0

nnzv commented May 16, 2024

Also, please take a look at the /etc/resolv.conf file inside the minikube VM.

ssh -i $(minikube ssh-key) docker@$(minikube ip) "cat /etc/resolv.conf"
# This is /run/systemd/resolve/resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 1.1.1.1
nameserver 8.8.8.8
nameserver 1.0.0.1
# Too many DNS servers configured, the following entries may be ignored.
nameserver 8.8.4.4
nameserver 2606:4700:4700::1111
nameserver 2001:4860:4860::8888
nameserver 2606:4700:4700::1001
nameserver 2001:4860:4860::8844
search .
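
For context: resolvers typically honor only the first three nameserver entries, which lines up with the kubelet's DNSConfigForming warnings above. A temporary, hypothetical test (non-persistent, since systemd-resolved regenerates this file) would be to trim the list inside the VM by hand:

ssh -i $(minikube ssh-key) docker@$(minikube ip) \
  "printf 'nameserver 1.1.1.1\nnameserver 8.8.8.8\n' | sudo tee /etc/resolv.conf"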


nnzv commented May 16, 2024

Fixed it in a roundabout way via a fix suggested in another issue, #18905 (comment): switching to containerd did the trick. I didn't realize it earlier because I had assumed containerd, not Docker, was already the default runtime. Closing the issue now.
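
As a sketch of that workaround (assuming the standard flag spelling; minikube documents --container-runtime=containerd as an option):

minikube start --driver=kvm2 --cni calico --container-runtime=containerd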

/close

@k8s-ci-robot (Contributor)

@nnzv: Closing this issue.

In response to this:

Fixed it in a roundabout way via a fix suggested in another issue, #18905 (comment): switching to containerd did the trick. I didn't realize it earlier because I had assumed containerd, not Docker, was already the default runtime. Closing the issue now.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
