
Flake:This VM is having trouble accessing https://registry.k8s.io #18905

Closed
nirs opened this issue May 15, 2024 · 7 comments
nirs (Contributor) commented May 15, 2024

What Happened?

We have an issue starting minikube on one VM (VMware based). In the same lab we have other VMs (libvirt based) running minikube normally.

Simplified command reproducing the issue:

$ cat reproduce.sh 
minikube start \
    --driver kvm2 \
    --container-runtime containerd \
    --network default \
    --extra-config 'kubelet.serialize-image-pulls=false' \
    --alsologtostderr \
    -v4

Minikube fails with:

! This VM is having trouble accessing https://registry.k8s.io/
      * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
      X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
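A quick way to probe the failing step directly (a debugging sketch, not part of the original report; minikube ssh runs a command inside the machine, which usually still exists after a failed start):

$ minikube ssh -- sudo /usr/bin/crictl version
# If the container runtime came up, this prints the runtime name and
# version; an exit status of 1 means the runtime itself is down.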

Checking access to registry.k8s.io shows that it is indeed not accessible via IPv6, but it is accessible via IPv4.

# resolvectl query registry.k8s.io
registry.k8s.io: 2600:1901:0:bbc4::            -- link: ens32
                 34.96.108.209                 -- link: ens32

$ ping 2600:1901:0:bbc4::
PING 2600:1901:0:bbc4::(2600:1901:0:bbc4::) 56 data bytes
From 2620:52:0:4635::fe icmp_seq=1 Destination unreachable: No route
From 2620:52:0:4635::fe icmp_seq=2 Destination unreachable: No route
From 2620:52:0:4635::fe icmp_seq=3 Destination unreachable: No route
From 2620:52:0:4635::fe icmp_seq=4 Destination unreachable: No route
^C
--- 2600:1901:0:bbc4:: ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3004ms

$ ping 34.96.108.209
PING 34.96.108.209 (34.96.108.209) 56(84) bytes of data.
64 bytes from 34.96.108.209: icmp_seq=1 ttl=58 time=10.4 ms
64 bytes from 34.96.108.209: icmp_seq=2 ttl=58 time=10.2 ms
64 bytes from 34.96.108.209: icmp_seq=3 ttl=58 time=10.1 ms
^C
--- 34.96.108.209 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 10.107/10.232/10.418/0.134 ms
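The same split can be confirmed at the HTTPS layer by forcing the address family (a sketch, not run in the original report; curl's -4/-6 flags pin the connection to IPv4 or IPv6):

$ curl -4 -sS -o /dev/null -w '%{http_code}\n' https://registry.k8s.io/v2/   # expected to succeed here
$ curl -6 -sS -o /dev/null -w '%{http_code}\n' https://registry.k8s.io/v2/   # expected to fail with a connect error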

On another VM in the same lab, running exactly the same version of Fedora 39, I see the same connectivity issue with IPv6, but minikube runs fine.

This looks like an issue with the environment and not with minikube, but maybe someone has some insight into the root cause.
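If the unroutable IPv6 address is what trips the image pulls, one generic glibc-level workaround (an assumption, not something tried in this thread) is to make getaddrinfo() prefer IPv4 results via /etc/gai.conf:

# /etc/gai.conf on the affected machine -- prefer IPv4 over IPv6 results (see gai.conf(5))
precedence ::ffff:0:0/96  100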

Attach the log file

logs.txt

Operating System

None

Driver

None

nirs (Contributor, Author) commented May 15, 2024

Looks like the OS/driver fields were lost while reposting after trimming the comment body (it was longer than 65k):

  • OS: Fedora 39
  • Driver: kvm2

medyagh (Member) commented May 15, 2024

@nirs do you mind trying with the latest version, 1.33.1? We had a patch specifically for Fedora DNS. Please ensure to delete the old minikube before trying the newest one (so it doesn't use the old ISO).
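For reference, the suggested cleanup might look like this (a sketch; --all removes all profiles and --purge also deletes the cached ISO and images under ~/.minikube):

$ minikube delete --all --purge
$ # install the v1.33.1 binary, then start fresh:
$ minikube start --driver kvm2 --container-runtime containerd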

nnzv commented May 16, 2024

I tested the latest version of minikube to replicate the issue and encountered no problems. Note that I'm not using Fedora.

minikube start --driver=kvm2 --extra-config 'kubelet.serialize-image-pulls=false' --container-runtime containerd
😄  minikube v1.33.1 on Gentoo 2.15
✨  Using the kvm2 driver based on user configuration
💾  Downloading driver docker-machine-driver-kvm2:
    > docker-machine-driver-kvm2-...:  65 B / 65 B [---------] 100.00% ? p/s 0s
    > docker-machine-driver-kvm2-...:  13.46 MiB / 13.46 MiB  100.00% 2.25 MiB
💿  Downloading VM boot image ...
    > minikube-v1.33.1-amd64.iso....:  65 B / 65 B [---------] 100.00% ? p/s 0s
    > minikube-v1.33.1-amd64.iso:  314.16 MiB / 314.16 MiB  100.00% 1.44 MiB p/
👍  Starting "minikube" primary control-plane node in "minikube" cluster
💾  Downloading Kubernetes v1.30.0 preload ...
    > preloaded-images-k8s-v18-v1...:  375.69 MiB / 375.69 MiB  100.00% 2.51 Mi
🔥  Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
❗  This VM is having trouble accessing https://registry.k8s.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
📦  Preparing Kubernetes v1.30.0 on containerd 1.7.15 ...
    ▪ kubelet.serialize-image-pulls=false
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring Calico (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-ddf655445-zjmv4   1/1     Running   0          6m34s
calico-node-svph6                         1/1     Running   0          6m34s
coredns-7db6d8ff4d-tf9l2                  1/1     Running   0          6m34s
etcd-minikube                             1/1     Running   0          6m52s
kube-apiserver-minikube                   1/1     Running   0          6m50s
kube-controller-manager-minikube          1/1     Running   0          6m49s
kube-proxy-fprcq                          1/1     Running   0          6m34s
kube-scheduler-minikube                   1/1     Running   0          6m52s
storage-provisioner                       1/1     Running   0          6m43s

nirs (Contributor, Author) commented May 16, 2024

> @nirs do you mind trying with the latest version, 1.33.1? We had a patch specifically for Fedora DNS. Please ensure to delete the old minikube before trying the newest one (so it doesn't use the old ISO).

I cannot reproduce this now; the same VM works with both minikube 1.33.0 (with the workaround) and 1.33.1. So something changed outside of the VM that fixed the issue. Maybe a DNS cache expired?
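If a stale resolver cache were the cause, it could be ruled out on Fedora's systemd-resolved with (a sketch, not something run in this thread):

$ resolvectl statistics          # current cache size and hit counts
$ sudo resolvectl flush-caches   # drop all cached DNS entries
$ resolvectl query registry.k8s.io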

medyagh (Member) commented May 16, 2024

> I cannot reproduce this now; the same VM works with both minikube 1.33.0 (with the workaround) and 1.33.1. So something changed outside of the VM that fixed the issue. Maybe a DNS cache expired?

Hm... that's an interesting fluke! Do you mind sharing if you are using a corp/company laptop, or are under a proxy, or use a VPN, or anything other than a standard machine?

One thing that might have happened is that you were using the old ISO after upgrading minikube to the latest version, and that one did not have the fix. Doing "minikube delete" might have fixed it.

@medyagh medyagh changed the title Minikube fail to start on one VM: ! This VM is having trouble accessing https://registry.k8s.io Flake:This VM is having trouble accessing https://registry.k8s.io May 16, 2024
nirs (Contributor, Author) commented May 16, 2024

> Hm... that's an interesting fluke! Do you mind sharing if you are using a corp/company laptop, or are under a proxy, or use a VPN, or anything other than a standard machine?

The environment is a company lab inside a VPN. The VM with the problem is a VMware VM. I don't have much detail on, or any control over, the VMware environment or networking in this lab.

In the same lab we have other machines running libvirt VMs; on those VMs we did not see any issue with minikube 1.33.0. All the VMs were running the same Fedora 39 version.

> One thing that might have happened is that you were using the old ISO after upgrading minikube to the latest version, and that one did not have the fix. Doing "minikube delete" might have fixed it.

We test with a clean minikube (delete ~/.minikube, install the new version, start it once to download the ISO and other assets).
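The clean-test procedure described above, as a sketch:

$ rm -rf ~/.minikube             # wipe cached ISO, images, and profiles
$ # install the new minikube binary, then:
$ minikube start --driver kvm2 --container-runtime containerd   # downloads a fresh ISO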

medyagh (Member) commented May 17, 2024

Thanks for sharing the additional info. I suspect the corp VPN DNS changing or flaking could be the root cause of this.
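One way to check that theory the next time it flakes would be to compare what the host and the guest resolve (a sketch; assumes nslookup is available inside the guest image):

$ resolvectl query registry.k8s.io            # on the host, via the VPN resolver
$ minikube ssh -- cat /etc/resolv.conf        # resolver the guest actually uses
$ minikube ssh -- nslookup registry.k8s.io    # what the guest resolves it to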

@medyagh medyagh closed this as completed May 17, 2024