
This container is having trouble accessing https://k8s.gcr.io #9798

Closed
OkayJosh opened this issue Nov 29, 2020 · 52 comments
Labels
co/docker-driver: Issues related to kubernetes in container
kind/support: Categorizes issue or PR as a support question.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
long-term-support: Long-term support issues that can't be fixed in code

Comments


OkayJosh commented Nov 29, 2020

Steps to reproduce the issue:

  1. minikube start --driver=docker

Full output of failed command:

Full output of minikube start command used, if not already included:

Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /home/cloudsigma/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
[cloudsigma@Fedora-32 django]$ minikube start --driver=docker --image-repository=auto
😄 minikube v1.15.1 on Fedora 32
✨ Using the docker driver based on user configuration
✅ Using image repository
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
❗ This container is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
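The "trouble accessing https://k8s.gcr.io" warning above means the node container could not fetch from the registry directly. Per the proxy docs minikube links to, proxy settings are passed through environment variables set before `minikube start`. A minimal sketch, assuming a hypothetical proxy endpoint (the proxy host/port here are placeholders, not from this report; the `192.168.49.0/24` subnet matches the docker-driver network seen in the logs below):

```shell
# Hypothetical proxy endpoint -- replace with your real proxy, if any.
export HTTP_PROXY="http://proxy.example.com:3128"
export HTTPS_PROXY="http://proxy.example.com:3128"

# Exclude loopback, the minikube docker network, and the cluster service
# range so in-cluster traffic is not sent through the proxy.
export NO_PROXY="localhost,127.0.0.1,192.168.49.0/24,10.96.0.0/12"

# minikube reads these from the environment at start time:
# minikube start --driver=docker
```

If no proxy is actually in play (as seems to be the case on this CloudSigma host), the warning instead points at DNS or firewall trouble between the node container and the outside world.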

Optional: Full output of minikube logs command:

==> Docker <==
-- Logs begin at Sun 2020-11-29 09:30:46 UTC, end at Sun 2020-11-29 09:37:02 UTC. --
Nov 29 09:30:46 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.338566925Z" level=info msg="Starting up"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.339932857Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.339959716Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.339976556Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.339995703Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.348324553Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.348463779Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.348539167Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.348548228Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.365955417Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.384459784Z" level=warning msg="Your kernel does not support cgroup rt period"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.384558438Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.384615570Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.384667184Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.384877950Z" level=info msg="Loading containers: start."
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.469200751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.506675387Z" level=info msg="Loading containers: done."
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.520682027Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.520835403Z" level=info msg="Daemon has completed initialization"
Nov 29 09:30:46 minikube dockerd[180]: time="2020-11-29T09:30:46.548158008Z" level=info msg="API listen on /run/docker.sock"
Nov 29 09:30:46 minikube systemd[1]: Started Docker Application Container Engine.
Nov 29 09:30:48 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Nov 29 09:30:48 minikube systemd[1]: Stopping Docker Application Container Engine...
Nov 29 09:30:48 minikube dockerd[180]: time="2020-11-29T09:30:48.961096281Z" level=info msg="Processing signal 'terminated'"
Nov 29 09:30:48 minikube dockerd[180]: time="2020-11-29T09:30:48.962680319Z" level=info msg="Daemon shutdown complete"
Nov 29 09:30:48 minikube systemd[1]: docker.service: Succeeded.
Nov 29 09:30:48 minikube systemd[1]: Stopped Docker Application Container Engine.
Nov 29 09:30:48 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.014400568Z" level=info msg="Starting up"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.016256603Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.016287167Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.016304428Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.016316195Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.020699031Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.020808482Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.020879195Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.020941302Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.056812347Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.062913662Z" level=warning msg="Your kernel does not support cgroup rt period"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.063008264Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.063084137Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.063137926Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.063332836Z" level=info msg="Loading containers: start."
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.181413651Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.253328047Z" level=info msg="Loading containers: done."
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.266634246Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.266701024Z" level=info msg="Daemon has completed initialization"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.281139385Z" level=info msg="API listen on /var/run/docker.sock"
Nov 29 09:30:49 minikube dockerd[419]: time="2020-11-29T09:30:49.281198634Z" level=info msg="API listen on [::]:2376"
Nov 29 09:30:49 minikube systemd[1]: Started Docker Application Container Engine.

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
c814c6f73d688 bad58561c4be7 5 minutes ago Running storage-provisioner 0 264237d87eafc
f8a59f80375ce bfe3a36ebd252 5 minutes ago Running coredns 0 993fa3c98bacf
fc55aa1fcd37a 635b36f4d89f0 5 minutes ago Running kube-proxy 0 384c9ebc92709
ec995506eee9a 0369cf4303ffd 5 minutes ago Running etcd 0 554b138fb4d5b
115eec88b6801 14cd22f7abe78 5 minutes ago Running kube-scheduler 0 c5f649e41e7b1
3c477bc86e4c7 4830ab6185860 5 minutes ago Running kube-controller-manager 0 a71a49519e607
92c80200590e3 b15c6247777d7 5 minutes ago Running kube-apiserver 0 93c49fa40df0e

==> coredns [f8a59f80375c] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:40272->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:54760->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:55188->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:49012->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:46902->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:57478->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:36878->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:37883->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:54437->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:56933->192.168.49.1:53: i/o timeout
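Every one of the CoreDNS errors above is a timeout against the same upstream, 192.168.49.1:53, which appears to be the host-side gateway of the minikube docker network (the node itself is 192.168.49.2). So DNS queries are being forwarded out of the node and never answered, which would explain the registry warning. A small sketch that pulls the upstream resolver out of one of these log lines, with follow-up probes from inside the node shown commented (they require the running cluster, and whether `nslookup`/`nc` exist in the node image is an assumption):

```shell
# One CoreDNS error line from the log above.
line='[ERROR] plugin/errors: 2 8526254777489433895.4813552006962660381. HINFO: read udp 172.17.0.2:40272->192.168.49.1:53: i/o timeout'

# Extract the upstream resolver address CoreDNS was forwarding to.
upstream=$(printf '%s\n' "$line" | sed -n 's/.*->\([0-9.]*\):53.*/\1/p')
echo "CoreDNS upstream resolver: $upstream"

# From inside the node, check whether that resolver responds at all:
# minikube ssh -- nslookup k8s.gcr.io
# minikube ssh -- nc -vz -w 2 192.168.49.1 53
```

If the probes time out the same way, the host firewall (firewalld is common on Fedora) is likely dropping DNS from the docker bridge to the host.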

==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=23f40a012abb52eff365ff99a709501a61ac5876
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_11_29T09_31_19_0700
minikube.k8s.io/version=v1.15.1
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 29 Nov 2020 09:31:16 +0000
Taints:
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime:
RenewTime: Sun, 29 Nov 2020 09:37:00 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Sun, 29 Nov 2020 09:36:31 +0000 Sun, 29 Nov 2020 09:31:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 29 Nov 2020 09:36:31 +0000 Sun, 29 Nov 2020 09:31:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 29 Nov 2020 09:36:31 +0000 Sun, 29 Nov 2020 09:31:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 29 Nov 2020 09:36:31 +0000 Sun, 29 Nov 2020 09:31:30 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
Capacity:
cpu: 4
ephemeral-storage: 82510724Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8144724Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 82510724Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8144724Ki
pods: 110
System Info:
Machine ID: dc0441139eae465ead0805eb541bcb4e
System UUID: 6312edaf-2a2f-4b32-80d7-375aec2b4544
Boot ID: 9963988f-9421-4695-839b-f196bd1bacf6
Kernel Version: 5.8.18-200.fc32.x86_64
OS Image: Ubuntu 20.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.13
Kubelet Version: v1.19.4
Kube-Proxy Version: v1.19.4
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


kube-system coredns-f9fd979d6-rw8s8 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 5m37s
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m42s
kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 5m42s
kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 5m42s
kube-system kube-proxy-zp8wj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m37s
kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 5m42s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m41s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 650m (16%) 0 (0%)
memory 70Mi (0%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message


Normal NodeHasSufficientMemory 5m54s (x4 over 5m54s) kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m54s (x5 over 5m54s) kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m54s (x4 over 5m54s) kubelet Node minikube status is now: NodeHasSufficientPID
Normal Starting 5m43s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m42s kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m42s kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m42s kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeNotReady 5m42s kubelet Node minikube status is now: NodeNotReady
Normal NodeAllocatableEnforced 5m42s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m36s kube-proxy Starting kube-proxy.
Normal NodeReady 5m32s kubelet Node minikube status is now: NodeReady

==> dmesg <==
[Nov29 05:12] #2
[ +0.000233] #3
[ +0.178024] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.826931] kauditd_printk_skb: 32 callbacks suppressed
[ +0.811058] systemd-journald[458]: File /var/log/journal/37b77124313f41d6af88b51d4456a3ad/system.journal corrupted or uncleanly shut down, renaming and replacing.
[Nov29 05:15] process 'docker/tmp/qemu-check567254077/check' started with executable stack
[ +3.284960] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Nov29 06:51] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.094786] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.004259] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.001534] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.033649] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.003482] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.029681] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000002] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.097332] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000012] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.

==> etcd [ec995506eee9] <==
raft2020/11/29 09:31:09 INFO: aec36adc501070cc became follower at term 0
raft2020/11/29 09:31:09 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/11/29 09:31:09 INFO: aec36adc501070cc became follower at term 1
raft2020/11/29 09:31:09 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-11-29 09:31:10.279028 W | auth: simple token is not cryptographically signed
2020-11-29 09:31:10.358500 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2020-11-29 09:31:10.365819 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-11-29 09:31:10.368367 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-11-29 09:31:10.368720 I | embed: listening for metrics on http://127.0.0.1:2381
raft2020/11/29 09:31:10 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-11-29 09:31:10.368838 I | embed: listening for peers on 192.168.49.2:2380
2020-11-29 09:31:10.369264 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
raft2020/11/29 09:31:11 INFO: aec36adc501070cc is starting a new election at term 1
raft2020/11/29 09:31:11 INFO: aec36adc501070cc became candidate at term 2
raft2020/11/29 09:31:11 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
raft2020/11/29 09:31:11 INFO: aec36adc501070cc became leader at term 2
raft2020/11/29 09:31:11 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
2020-11-29 09:31:11.187516 I | etcdserver: setting up the initial cluster version to 3.4
2020-11-29 09:31:11.187945 N | etcdserver/membership: set the initial cluster version to 3.4
2020-11-29 09:31:11.188229 I | etcdserver/api: enabled capabilities for version 3.4
2020-11-29 09:31:11.188318 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
2020-11-29 09:31:11.188386 I | embed: ready to serve client requests
2020-11-29 09:31:11.191695 I | embed: serving client requests on 127.0.0.1:2379
2020-11-29 09:31:11.192827 I | embed: ready to serve client requests
2020-11-29 09:31:11.194273 I | embed: serving client requests on 192.168.49.2:2379
2020-11-29 09:31:26.281810 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:31:30.789535 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:31:40.789475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:31:50.789580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:00.789651 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:10.789583 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:20.789624 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:30.789578 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:40.789653 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:32:50.789532 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:00.793033 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:10.789532 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:20.789522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:30.790161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:40.789495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:33:50.789486 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:00.789480 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:10.789559 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:20.789491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:30.789495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:40.789446 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:34:50.789578 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:00.789731 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:10.789502 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:20.789541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:30.789491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:40.789582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:35:50.789472 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:00.789501 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:10.789428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:20.789493 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:30.789577 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:40.789725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:36:50.789452 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-29 09:37:00.789452 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==
09:37:02 up 4:24, 0 users, load average: 0.94, 1.04, 1.08
Linux minikube 5.8.18-200.fc32.x86_64 #1 SMP Mon Nov 2 19:49:11 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"

==> kube-apiserver [92c80200590e] <==
E1129 09:31:16.261585 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
I1129 09:31:16.298019 1 controller.go:86] Starting OpenAPI controller
I1129 09:31:16.350781 1 cache.go:39] Caches are synced for autoregister controller
I1129 09:31:16.351287 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1129 09:31:16.357626 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1129 09:31:16.359091 1 shared_informer.go:247] Caches are synced for crd-autoregister
I1129 09:31:16.360952 1 naming_controller.go:291] Starting NamingConditionController
I1129 09:31:16.361161 1 establishing_controller.go:76] Starting EstablishingController
I1129 09:31:16.361269 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1129 09:31:16.361451 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1129 09:31:16.361558 1 crd_finalizer.go:266] Starting CRDFinalizer
I1129 09:31:16.373891 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I1129 09:31:16.393178 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1129 09:31:16.395096 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1129 09:31:17.249723 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1129 09:31:17.249920 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1129 09:31:17.254899 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I1129 09:31:17.258275 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I1129 09:31:17.258291 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1129 09:31:17.666700 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1129 09:31:17.692469 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1129 09:31:17.866532 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1129 09:31:17.867288 1 controller.go:606] quota admission added evaluator for: endpoints
I1129 09:31:17.870648 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1129 09:31:18.814159 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1129 09:31:19.259225 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1129 09:31:19.449693 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1129 09:31:19.970082 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1129 09:31:25.869040 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1129 09:31:25.882826 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1129 09:31:42.385189 1 client.go:360] parsed scheme: "passthrough"
I1129 09:31:42.385323 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:31:42.385392 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:32:21.216485 1 client.go:360] parsed scheme: "passthrough"
I1129 09:32:21.216528 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:32:21.216536 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:32:58.164659 1 client.go:360] parsed scheme: "passthrough"
I1129 09:32:58.164737 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:32:58.164749 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:33:31.884535 1 client.go:360] parsed scheme: "passthrough"
I1129 09:33:31.884575 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:33:31.884586 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:34:14.105786 1 client.go:360] parsed scheme: "passthrough"
I1129 09:34:14.105869 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:34:14.105886 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:34:48.110866 1 client.go:360] parsed scheme: "passthrough"
I1129 09:34:48.110913 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:34:48.110921 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:35:19.199526 1 client.go:360] parsed scheme: "passthrough"
I1129 09:35:19.199573 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:35:19.199582 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:35:54.577610 1 client.go:360] parsed scheme: "passthrough"
I1129 09:35:54.577650 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:35:54.577658 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:36:25.173891 1 client.go:360] parsed scheme: "passthrough"
I1129 09:36:25.173999 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:36:25.174020 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1129 09:37:02.466572 1 client.go:360] parsed scheme: "passthrough"
I1129 09:37:02.466638 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1129 09:37:02.466649 1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-controller-manager [3c477bc86e4c] <==
I1129 09:31:25.010749 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I1129 09:31:25.260992 1 controllermanager.go:549] Started "endpoint"
I1129 09:31:25.261045 1 endpoints_controller.go:184] Starting endpoint controller
I1129 09:31:25.261050 1 shared_informer.go:240] Waiting for caches to sync for endpoint
I1129 09:31:25.511387 1 controllermanager.go:549] Started "serviceaccount"
I1129 09:31:25.511435 1 serviceaccounts_controller.go:117] Starting service account controller
I1129 09:31:25.511441 1 shared_informer.go:240] Waiting for caches to sync for service account
I1129 09:31:25.610868 1 request.go:645] Throttling request took 1.048847873s, request: GET:https://192.168.49.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
I1129 09:31:25.760962 1 controllermanager.go:549] Started "statefulset"
I1129 09:31:25.760988 1 stateful_set.go:146] Starting stateful set controller
I1129 09:31:25.761267 1 shared_informer.go:240] Waiting for caches to sync for stateful set
W1129 09:31:25.777553 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1129 09:31:25.810888 1 shared_informer.go:247] Caches are synced for GC
I1129 09:31:25.811220 1 shared_informer.go:247] Caches are synced for PVC protection
I1129 09:31:25.811556 1 shared_informer.go:247] Caches are synced for service account
I1129 09:31:25.811714 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I1129 09:31:25.812181 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1129 09:31:25.812184 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I1129 09:31:25.812353 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I1129 09:31:25.816705 1 shared_informer.go:247] Caches are synced for namespace
I1129 09:31:25.817568 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I1129 09:31:25.838231 1 shared_informer.go:247] Caches are synced for job
I1129 09:31:25.849247 1 shared_informer.go:247] Caches are synced for HPA
I1129 09:31:25.850270 1 shared_informer.go:247] Caches are synced for ReplicaSet
I1129 09:31:25.860991 1 shared_informer.go:247] Caches are synced for taint
I1129 09:31:25.861307 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W1129 09:31:25.861797 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1129 09:31:25.861519 1 shared_informer.go:247] Caches are synced for TTL
I1129 09:31:25.861204 1 shared_informer.go:247] Caches are synced for ReplicationController
I1129 09:31:25.861137 1 shared_informer.go:247] Caches are synced for endpoint
I1129 09:31:25.861522 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1129 09:31:25.861531 1 taint_manager.go:187] Starting NoExecuteTaintManager
I1129 09:31:25.861539 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I1129 09:31:25.861451 1 shared_informer.go:247] Caches are synced for stateful set
I1129 09:31:25.863244 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1129 09:31:25.864401 1 shared_informer.go:247] Caches are synced for deployment
I1129 09:31:25.873404 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
I1129 09:31:25.879554 1 shared_informer.go:247] Caches are synced for daemon sets
I1129 09:31:25.885679 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-rw8s8"
I1129 09:31:25.908141 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zp8wj"
I1129 09:31:25.962215 1 shared_informer.go:247] Caches are synced for disruption
I1129 09:31:25.962234 1 disruption.go:339] Sending events to api server.
E1129 09:31:25.963997 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"598fe9a5-38fd-45a2-a979-90b0f50610af", ResourceVersion:"217", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63742239079, loc:(*time.Location)(0x6a61c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001941dc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001941de0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001941e00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0010815c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001941e20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001941e40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.4", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001941e80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001958960), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a29cf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005c4cb0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000176ce0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000a29d78)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I1129 09:31:25.964391 1 shared_informer.go:247] Caches are synced for resource quota
I1129 09:31:25.974046 1 shared_informer.go:247] Caches are synced for expand
I1129 09:31:25.983385 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1129 09:31:26.011371 1 shared_informer.go:247] Caches are synced for endpoint_slice
I1129 09:31:26.050264 1 shared_informer.go:247] Caches are synced for PV protection
I1129 09:31:26.061260 1 shared_informer.go:247] Caches are synced for persistent volume
I1129 09:31:26.061408 1 shared_informer.go:247] Caches are synced for attach detach
I1129 09:31:26.110955 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I1129 09:31:26.114544 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E1129 09:31:26.153899 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E1129 09:31:26.154480 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I1129 09:31:26.317094 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1129 09:31:26.317120 1 shared_informer.go:247] Caches are synced for resource quota
I1129 09:31:26.411030 1 shared_informer.go:247] Caches are synced for garbage collector
I1129 09:31:26.411087 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1129 09:31:26.415603 1 shared_informer.go:247] Caches are synced for garbage collector
I1129 09:31:30.891697 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.

==> kube-proxy [fc55aa1fcd37] <==
I1129 09:31:26.751639 1 node.go:136] Successfully retrieved node IP: 192.168.49.2
I1129 09:31:26.751870 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W1129 09:31:26.785840 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1129 09:31:26.785961 1 server_others.go:186] Using iptables Proxier.
W1129 09:31:26.785970 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I1129 09:31:26.785974 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I1129 09:31:26.786167 1 server.go:650] Version: v1.19.4
I1129 09:31:26.786454 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1129 09:31:26.787185 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1129 09:31:26.787234 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1129 09:31:26.787394 1 config.go:315] Starting service config controller
I1129 09:31:26.787407 1 shared_informer.go:240] Waiting for caches to sync for service config
I1129 09:31:26.787425 1 config.go:224] Starting endpoint slice config controller
I1129 09:31:26.787428 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1129 09:31:26.887511 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1129 09:31:26.887719 1 shared_informer.go:247] Caches are synced for service config

==> kube-scheduler [115eec88b680] <==
I1129 09:31:09.866107 1 registry.go:173] Registering SelectorSpread plugin
I1129 09:31:09.869803 1 registry.go:173] Registering SelectorSpread plugin
I1129 09:31:11.489373 1 serving.go:331] Generated self-signed cert in-memory
W1129 09:31:16.455300 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1129 09:31:16.455343 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1129 09:31:16.455354 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
W1129 09:31:16.455361 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1129 09:31:16.483727 1 registry.go:173] Registering SelectorSpread plugin
I1129 09:31:16.483781 1 registry.go:173] Registering SelectorSpread plugin
I1129 09:31:16.486764 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1129 09:31:16.486981 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E1129 09:31:16.488022 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I1129 09:31:16.488375 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1129 09:31:16.488472 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1129 09:31:16.492050 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1129 09:31:16.492273 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1129 09:31:16.492526 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1129 09:31:16.492628 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1129 09:31:16.492714 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1129 09:31:16.492855 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1129 09:31:16.492963 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1129 09:31:16.493039 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1129 09:31:16.493124 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1129 09:31:16.493224 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1129 09:31:16.493287 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1129 09:31:16.493354 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1129 09:31:17.398500 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1129 09:31:17.448636 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1129 09:31:17.448893 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I1129 09:31:18.087180 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kubelet <==
-- Logs begin at Sun 2020-11-29 09:30:46 UTC, end at Sun 2020-11-29 09:37:02 UTC. --
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.932680 2131 kuberuntime_manager.go:214] Container runtime docker initialized, version: 19.03.13, apiVersion: 1.40.0
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.933050 2131 server.go:1147] Started kubelet
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.933188 2131 server.go:152] Starting to listen on 0.0.0.0:10250
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.933992 2131 server.go:424] Adding debug handlers to kubelet server.
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.934878 2131 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.936123 2131 volume_manager.go:265] Starting Kubelet Volume Manager
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.950488 2131 desired_state_of_world_populator.go:139] Desired state populator starts to run
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.988264 2131 status_manager.go:158] Starting to sync pod status with apiserver
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.988324 2131 kubelet.go:1741] Starting kubelet main sync loop.
Nov 29 09:31:19 minikube kubelet[2131]: E1129 09:31:19.988390 2131 kubelet.go:1765] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.994118 2131 client.go:87] parsed scheme: "unix"
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.994265 2131 client.go:87] scheme "unix" not registered, fallback to default scheme
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.994376 2131 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }
Nov 29 09:31:19 minikube kubelet[2131]: I1129 09:31:19.994439 2131 clientconn.go:948] ClientConn switching balancer to "pick_first"
Nov 29 09:31:20 minikube kubelet[2131]: E1129 09:31:20.088734 2131 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.149431 2131 kubelet_node_status.go:70] Attempting to register node minikube
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.159837 2131 kubelet_node_status.go:108] Node minikube was previously registered
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.159918 2131 kubelet_node_status.go:73] Successfully registered node minikube
Nov 29 09:31:20 minikube kubelet[2131]: E1129 09:31:20.289134 2131 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.392263 2131 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-11-29 09:31:20.392233412 +0000 UTC m=+1.191927022 LastTransitionTime:2020-11-29 09:31:20.392233412 +0000 UTC m=+1.191927022 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659069 2131 cpu_manager.go:184] [cpumanager] starting with none policy
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659088 2131 cpu_manager.go:185] [cpumanager] reconciling every 10s
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659107 2131 state_mem.go:36] [cpumanager] initializing new in-memory state store
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659235 2131 state_mem.go:88] [cpumanager] updated default cpuset: ""
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659248 2131 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.659263 2131 policy_none.go:43] [cpumanager] none policy: Start
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.660728 2131 plugin_manager.go:114] Starting Kubelet Plugin Manager
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.690839 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.692636 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.694358 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.695508 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.756844 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/e30eb1a2f7c2dbcda239c972918b3eb4-ca-certs") pod "kube-apiserver-minikube" (UID: "e30eb1a2f7c2dbcda239c972918b3eb4")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757026 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/e30eb1a2f7c2dbcda239c972918b3eb4-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "e30eb1a2f7c2dbcda239c972918b3eb4")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757157 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/e30eb1a2f7c2dbcda239c972918b3eb4-k8s-certs") pod "kube-apiserver-minikube" (UID: "e30eb1a2f7c2dbcda239c972918b3eb4")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757270 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/e30eb1a2f7c2dbcda239c972918b3eb4-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "e30eb1a2f7c2dbcda239c972918b3eb4")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757410 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757558 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-k8s-certs") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757694 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-kubeconfig") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757806 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.757916 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/e30eb1a2f7c2dbcda239c972918b3eb4-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "e30eb1a2f7c2dbcda239c972918b3eb4")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758049 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758151 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d186e6390814d4dd7e770f47c08e98a2-etcd-data") pod "etcd-minikube" (UID: "d186e6390814d4dd7e770f47c08e98a2")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758239 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-ca-certs") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758332 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/627d9013c9c4b1cbfb72b4c0ef6cd100-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "627d9013c9c4b1cbfb72b4c0ef6cd100")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758418 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/38744c90661b22e9ae232b0452c54538-kubeconfig") pod "kube-scheduler-minikube" (UID: "38744c90661b22e9ae232b0452c54538")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758539 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d186e6390814d4dd7e770f47c08e98a2-etcd-certs") pod "etcd-minikube" (UID: "d186e6390814d4dd7e770f47c08e98a2")
Nov 29 09:31:20 minikube kubelet[2131]: I1129 09:31:20.758632 2131 reconciler.go:157] Reconciler: start to sync state
Nov 29 09:31:25 minikube kubelet[2131]: I1129 09:31:25.961121 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:25 minikube kubelet[2131]: I1129 09:31:25.972379 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b0006185-2ba4-4684-9499-1e40147c1724-lib-modules") pod "kube-proxy-zp8wj" (UID: "b0006185-2ba4-4684-9499-1e40147c1724")
Nov 29 09:31:25 minikube kubelet[2131]: I1129 09:31:25.972585 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b0006185-2ba4-4684-9499-1e40147c1724-xtables-lock") pod "kube-proxy-zp8wj" (UID: "b0006185-2ba4-4684-9499-1e40147c1724")
Nov 29 09:31:25 minikube kubelet[2131]: I1129 09:31:25.972717 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-9n29q" (UniqueName: "kubernetes.io/secret/b0006185-2ba4-4684-9499-1e40147c1724-kube-proxy-token-9n29q") pod "kube-proxy-zp8wj" (UID: "b0006185-2ba4-4684-9499-1e40147c1724")
Nov 29 09:31:25 minikube kubelet[2131]: I1129 09:31:25.973230 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b0006185-2ba4-4684-9499-1e40147c1724-kube-proxy") pod "kube-proxy-zp8wj" (UID: "b0006185-2ba4-4684-9499-1e40147c1724")
Nov 29 09:31:34 minikube kubelet[2131]: I1129 09:31:34.997455 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:35 minikube kubelet[2131]: I1129 09:31:35.084643 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-9j9gc" (UniqueName: "kubernetes.io/secret/a46f4729-2f38-4d31-bd0f-edf82bd612bd-coredns-token-9j9gc") pod "coredns-f9fd979d6-rw8s8" (UID: "a46f4729-2f38-4d31-bd0f-edf82bd612bd")
Nov 29 09:31:35 minikube kubelet[2131]: I1129 09:31:35.084678 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a46f4729-2f38-4d31-bd0f-edf82bd612bd-config-volume") pod "coredns-f9fd979d6-rw8s8" (UID: "a46f4729-2f38-4d31-bd0f-edf82bd612bd")
Nov 29 09:31:35 minikube kubelet[2131]: W1129 09:31:35.703376 2131 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-rw8s8 through plugin: invalid network status for
Nov 29 09:31:36 minikube kubelet[2131]: W1129 09:31:36.131610 2131 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-rw8s8 through plugin: invalid network status for
Nov 29 09:31:38 minikube kubelet[2131]: I1129 09:31:38.996396 2131 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 29 09:31:39 minikube kubelet[2131]: I1129 09:31:39.088898 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-8whpk" (UniqueName: "kubernetes.io/secret/8308b99d-c9b9-40c9-852c-374039edeff5-storage-provisioner-token-8whpk") pod "storage-provisioner" (UID: "8308b99d-c9b9-40c9-852c-374039edeff5")
Nov 29 09:31:39 minikube kubelet[2131]: I1129 09:31:39.088942 2131 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/8308b99d-c9b9-40c9-852c-374039edeff5-tmp") pod "storage-provisioner" (UID: "8308b99d-c9b9-40c9-852c-374039edeff5")

==> storage-provisioner [c814c6f73d68] <==
I1129 09:31:39.686820 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1129 09:31:39.692380 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1129 09:31:39.692803 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_62ea6c00-3a6b-404d-ac19-eed305caaa58!
I1129 09:31:39.692936 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58150db4-cbae-4805-8da3-11bd133ea991", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_62ea6c00-3a6b-404d-ac19-eed305caaa58 became leader
I1129 09:31:39.793442 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_62ea6c00-3a6b-404d-ac19-eed305caaa58!

@RA489

RA489 commented Dec 1, 2020

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Dec 1, 2020
@RA489 RA489 added the co/docker-driver Issues related to kubernetes in container label Dec 1, 2020
@bkahlerventer

bkahlerventer commented Dec 1, 2020

I do not think minikube handles/likes the redirect:

~> curl https://k8s.gcr.io
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="https://cloud.google.com/container-registry/">here</A>.
</BODY></HTML>

@cmarquezrusso

cmarquezrusso commented Dec 3, 2020

Same issue here. I tried 1.15.1 and 1.12.3

I am not behind a proxy. I SSHed into the minikube instance and I am not even able to resolve or connect to Google. resolv.conf is pointing to 192.168.64.1.

kubectl get events

Failed to pull image "jettech/kube-webhook-certgen:v1.3.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.44:60147->192.168.64.1:53: i/o timeout

OS: macOS Mojave

Error:

💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/

Minikube logs

[...]
I1203 13:30:00.069998   22265 preload.go:105] Found local preload: /Users/cristian/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4
I1203 13:30:00.070126   22265 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1203 13:30:10.091456   22265 ssh_runner.go:188] Completed: curl -sS -m 2 https://k8s.gcr.io/: (10.062157611s)
I1203 13:30:10.091487   22265 ssh_runner.go:188] Completed: docker images --format {{.Repository}}:{{.Tag}}: (10.021275564s)
W1203 13:30:10.091500   22265 start.go:505] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 28
stdout:
stderr:
curl: (28) Resolving timed out after 2000 milliseconds
I1203 13:30:10.091515   22265 docker.go:381] Got preloaded images:
I1203 13:30:10.091523   22265 docker.go:386] k8s.gcr.io/kube-proxy:v1.18.3 wasn't preloaded
W1203 13:30:10.091603   22265 out.go:151] ❗  This VM is having trouble accessing https://k8s.gcr.io
❗  This VM is having trouble accessing https://k8s.gcr.io
W1203 13:30:10.091634   22265 out.go:151] 💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
./minikube-darwin-amd64 ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ curl -sS -m 2 https://k8s.gcr.io/
curl: (28) Resolving timed out after 2000 milliseconds

@bkahlerventer

I solved my problem by stopping minikube, deleting the minikube docker container, deleting the minikube image, and removing the minikube command-line app, then starting fresh with the same versions. That fixed it.

It looks like some stale configuration had been saved into the minikube container.
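For the docker driver, the reset described above can be sketched as follows (a sketch, assuming the default profile name "minikube"; the image filter is illustrative, and the privileged/environment-specific commands are left commented):

```shell
# Full reset sketch: wipe all cached minikube state so the next start
# pulls everything fresh. (Assumes the docker driver and the default
# profile name "minikube".)
# minikube stop
# minikube delete
# docker rm -f minikube
# docker image ls 'gcr.io/k8s-minikube/*'   # locate the cached base image
# docker rmi <image-id>
# minikube start --driver=docker
echo "reset: stop -> delete -> rm container -> rmi base image -> start"
```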

@cmarquezrusso

I am leaving a recording of my terminal showing the error. I think it's related to the DNS configuration on the Hyperkit VM.

https://asciinema.org/a/377638

It works fine with both Docker and Virtualbox drivers.

@cmarquezrusso

This seems to be related with #3036

@cmarquezrusso

cmarquezrusso commented Dec 9, 2020

The issue was related to a VPN kernel extension for macOS that blocks communication with the Hyperkit endpoint (192.168.64.1).

If you need to troubleshoot this, try opening a port on your host machine (nc -l 8080) and connecting to that port from the minikube instance (curl -v 192.168.64.1:8080).
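That probe can be sketched as follows (192.168.64.1 is the hyperkit gateway from this setup and the port is arbitrary; the echo just prints the command to run inside the VM):

```shell
# On the host (macOS): open a listener on a test port:
#   nc -l 8080
# Inside the VM (minikube ssh): probe the host gateway:
#   curl -v 192.168.64.1:8080
# If the probe times out, something between the VM and the host
# (a VPN kernel extension, a firewall) is dropping the traffic.
GATEWAY=192.168.64.1   # hyperkit gateway; adjust for your driver
echo "probe: curl -v ${GATEWAY}:8080"
```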

My fix was to uninstall the VPN Client.

I think this can be closed

@cmarquezrusso

I had the same issue on Windows using Hyper-V. The connection between minikube and the Hyper-V eth interface was blocked by a firewall. To fix it, disable the firewall; to verify, run a webserver on the Windows host and try connecting to it from minikube using the Hyper-V IP address (it should be the same as the nameserver in /etc/resolv.conf).

Workaround: Uninstall hyper-v and use virtualbox.
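A sketch of that verification (the commented commands assume a Windows host with Python installed; the sample file below stands in for the guest's real /etc/resolv.conf):

```shell
# On the Windows host, serve something on a test port, e.g.:
#   python -m http.server 8080
# Inside minikube (minikube ssh), find the gateway and probe it:
#   GATEWAY=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf)
#   curl -v "http://$GATEWAY:8080/"
# Demonstration of the gateway extraction on a sample resolv.conf:
printf 'nameserver 172.24.0.1\nsearch local\n' > /tmp/resolv.sample
awk '/^nameserver/ {print $2; exit}' /tmp/resolv.sample   # prints 172.24.0.1
```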

@k8s-ci-robot
Contributor

@Cristian04: The label(s) kind/hyper-v cannot be applied, because the repository doesn't have them

In response to this:

/kind hyper-v

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@santhoshcameo

Having the same issue when trying to run with the gVisor runtime via minikube

ubuntu@gvisor-san:~$ minikube start --container-runtime=containerd  \
>     --docker-opt containerd=/var/run/containerd/containerd.sock
😄  minikube v1.18.1 on Ubuntu 18.04 (amd64)
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🌐  Found network options:
    ▪ NO_PROXY=localhost,127.0.0.1,169.254.169.254,dkfz-heidelberg.de,192.168.49.2,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
    ▪ http_proxy=http://www-int2.dkfz-heidelberg.de:80
    ▪ https_proxy=http://www-int2.dkfz-heidelberg.de:80
    ▪ no_proxy=localhost,127.0.0.1,169.254.169.254,dkfz-heidelberg.de,192.168.49.2,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
❗  This container is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
📦  Preparing Kubernetes v1.20.2 on containerd 1.4.3 ...
    ▪ opt containerd=/var/run/containerd/containerd.sock
    ▪ env NO_PROXY=localhost,127.0.0.1,169.254.169.254,dkfz-heidelberg.de,192.168.49.2,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
    ▪ env HTTP_PROXY=http://www-int2.dkfz-heidelberg.de:80
    ▪ env HTTPS_PROXY=http://www-int2.dkfz-heidelberg.de:80
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/gvisor-addon:3
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🔎  Verifying gvisor addon...
🌟  Enabled addons: storage-provisioner, default-storageclass, gvisor
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@adsilva-ivre

I0421 16:17:31.481481   46649 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0421 16:17:31.508473   46649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49182 SSHKeyPath:/home/aviadmin/.minikube/machines/minikube/id_rsa Username:docker}
I0421 16:17:31.508473   46649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49182 SSHKeyPath:/home/aviadmin/.minikube/machines/minikube/id_rsa Username:docker}
/ I0421 16:17:41.606540   46649 ssh_runner.go:189] Completed: systemctl --version: (10.1251636s)
I0421 16:17:41.606693   46649 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0421 16:17:41.606562   46649 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (10.1250974s)
W0421 16:17:41.606874   46649 start.go:627] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 28
stdout:

stderr:
curl: (28) Resolving timed out after 2000 milliseconds
W0421 16:17:41.607062   46649 out.go:222] ❗  This container is having trouble accessing https://k8s.gcr.io

❗  This container is having trouble accessing https://k8s.gcr.io
W0421 16:17:41.607240   46649 out.go:222] 💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0421 16:17:41.617908   46649 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0421 16:17:41.624199   46649 cruntime.go:219] skipping containerd shutdown because we are bound to it
I0421 16:17:41.624259   46649 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0421 16:17:41.630036   46649 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"

Same issue with Docker installed on Ubuntu 20.04 under WSL2 (Windows).

@spowelljr spowelljr added the long-term-support Long-term support issues that can't be fixed in code label May 26, 2021
@timscottbell

I just started experiencing this problem this morning on Ubuntu 20.04 using vm-driver=docker. What is the workaround?

@andriyDev
Contributor

@timscottbell From other comments, this seems to be caused by a VPN extension. Do you have any VPN installed?

@danstancu

danstancu commented Aug 5, 2021

Check /etc/docker/daemon.json.
If it contains
{
  "iptables": false
}
set it to true.
More info: https://www.mkubaczyk.com/2017/09/05/force-docker-not-bypass-ufw-rules-ubuntu-16-04/
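A sketch of that change (the paths and restart command assume a systemd-based Linux host; the file is staged in /tmp first, with the privileged steps commented):

```shell
# Stage the corrected Docker daemon config before installing it.
cat > /tmp/daemon.json <<'EOF'
{
  "iptables": true
}
EOF
# sudo install -m 0644 /tmp/daemon.json /etc/docker/daemon.json
# sudo systemctl restart docker
grep '"iptables": true' /tmp/daemon.json   # verify the staged value
```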

@sbollap1

sbollap1 commented Nov 7, 2021

I just started experiencing this problem this morning on Ubuntu 20.04 using vm-driver=docker. What is the workaround?

I am facing the same issue

@shiva1333

Same issue for me as well on macOS

@ILyaCyclone

ILyaCyclone commented Nov 13, 2021

minikube delete
minikube start

solved it for me. Admittedly, this workaround is not ideal.
(Win 10, hyperv)

@SanjayDatta

Yes. Deleting the minikube container, removing the minikube image, and then running minikube start made the problem go away. So if there is an existing image, updating may be the problem; hence the error message.

@jcostom

jcostom commented Feb 25, 2022

I was just experiencing this exact problem on macOS 11.6.4 using the freshly installed combination of:

  • Docker 20.10.12
  • Hyperkit 0.20200908
  • Minikube 1.25.2

All straight out of Homebrew. The fix? I went into the Security & Privacy System Preferences Panel, Firewall Tab, Firewall Options. I deactivated "Stealth Mode". Everything worked like a charm from that point forward.

(I did try deleting and re-creating the Minikube instance, as suggested above, which didn't do a blessed thing)

@teja463

teja463 commented Mar 8, 2022

Same issue running minikube on Windows 10 with hyperv driver. Below is the Warning I got on starting of the cluster
PS C:\WINDOWS\system32> minikube start
minikube v1.25.2 on Microsoft Windows 10 Enterprise 10.0.19042 Build 19042
Using the hyperv driver based on existing profile
Starting control plane node minikube in cluster minikube
Updating the running hyperv "minikube" VM ...
This VM is having trouble accessing https://k8s.gcr.io
To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m

@sharifelgamal sharifelgamal removed their assignment Apr 13, 2022
@medyagh
Member

medyagh commented Apr 13, 2022

@OkayJosh do you still have this issue?

@RichardLee0211

RichardLee0211 commented Jun 13, 2022

I am having the same issue on Ubuntu; I couldn't get past the start page or pull any image because of this network issue.

 $ minikube start
😄  minikube v1.25.2 on Ubuntu 22.04
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
❗  This container is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

 $ minikube ssh
docker@minikube:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
13 packets transmitted, 0 received, 100% packet loss, time 12287ms

@teja463

teja463 commented Jun 14, 2022

I used the docker driver and it worked fine.

@JoshTheDeveloperr

I'm on the docker driver and I'm getting this exact issue right now.

@rhythmshandlya


Facing similar issue:
Docker : 20.10.17
minikube : 1.26.0
OS: Ubuntu 22.04 LTS

@tAnboyy

tAnboyy commented Jul 26, 2022

Since there's a few people posting here I'd like to quickly share that I've encountered this error a few times.

  • A couple of times I didn't notice I had OpenVPN trying to connect to unresponsive server (hidden terminal window)
  • Once the problem seemed to be docker, restarting the service helped
  • Never happened because of Minikube being broken

Restarting docker did the trick for me. Thanks!

@yitzikz

yitzikz commented Jul 28, 2022

Since there's a few people posting here I'd like to quickly share that I've encountered this error a few times.

  • A couple of times I didn't notice I had OpenVPN trying to connect to unresponsive server (hidden terminal window)
  • Once the problem seemed to be docker, restarting the service helped
  • Never happened because of Minikube being broken

Restarting docker did the trick for me. Thanks!

Do you use docker minikube or latest minikube from kubernetes site?

@mavericks013

I have given up on minikube. Installed 'kind' and everything works like a charm.

@tAnboyy

tAnboyy commented Jul 28, 2022

Since there's a few people posting here I'd like to quickly share that I've encountered this error a few times.

  • A couple of times I didn't notice I had OpenVPN trying to connect to unresponsive server (hidden terminal window)
  • Once the problem seemed to be docker, restarting the service helped
  • Never happened because of Minikube being broken

Restarting docker did the trick for me. Thanks!

Do you use docker minikube or latest minikube from kubernetes site?

the latter

@tankilo

tankilo commented Sep 6, 2022

I also came across this problem today!
I found that I can pull images on the host machine, but not inside the minikube docker container.
I didn't find a way to pass DNS settings via the minikube start command, so I manually changed the contents of /etc/resolv.conf in the container to match the host's. That solved the problem.
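A sketch of that workaround (assumes the docker driver and a container named "minikube"; the sample file below stands in for the host's real /etc/resolv.conf, and the commands touching the container are left commented):

```shell
# Collect the host's resolvers:
#   grep '^nameserver' /etc/resolv.conf
# Overwrite the container's resolv.conf with the host's:
#   docker exec -i minikube sh -c 'cat > /etc/resolv.conf' < /etc/resolv.conf
# Then verify from inside: minikube ssh, then cat /etc/resolv.conf
# Demonstration of the extraction on a sample file:
printf 'nameserver 8.8.8.8\nnameserver 8.8.4.4\n' > /tmp/resolv.conf.sample
grep -c '^nameserver' /tmp/resolv.conf.sample   # prints 2
```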

@ozymandias89

I resolved in Windows with Docker Desktop.
Procedure:

  1. minikube stop
  2. delete minikube docker container
  3. minikube start

@JustinKirkJohnson

I experienced this issue; the root cause was security software (Zscaler) running on my workstation that was injecting its certificate as the root CA for TLS connections, causing certificate chain validation failures. I was able to determine this by:

  • Running an interactive Bash shell in the container, docker exec -it 8988ee1de268 "bash"
  • Testing the handshake with k8s.gcr.io, openssl s_client -connect k8s.gcr.io:443

The certificate chain that was presented showed the CA and Intermediate CA as being CN = Zscaler Root CA and CN = Zscaler Intermediate Root CA respectively.

I disabled the service on my workstation and voilà, everything worked.
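The two steps above can be sketched as follows (the container ID and the Zscaler names come from this report; the printf line is a canned sample of openssl s_client chain output, not live data):

```shell
# Inside the container:
#   docker exec -it 8988ee1de268 bash
#   openssl s_client -connect k8s.gcr.io:443 </dev/null 2>/dev/null \
#     | grep -E '^ *[si]:'
# A TLS-inspection product shows up as an unexpected issuer in the chain.
# Demonstration of the issuer check on a canned sample chain:
printf ' 0 s:CN = k8s.gcr.io\n   i:CN = Zscaler Intermediate Root CA\n' \
  | grep -E '^ *i:'
```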

@Hondough

Since there's a few people posting here I'd like to quickly share that I've encountered this error a few times.

  • A couple of times I didn't notice I had OpenVPN trying to connect to unresponsive server (hidden terminal window)
  • Once the problem seemed to be docker, restarting the service helped
  • Never happened because of Minikube being broken

I had this issue today and it was due to a VPN running. I turned off the VPN, then stopped and started the cluster. Everything went smoothly. Thank you for saving me much time. I recommend everybody try this fix first before rebuilding your installation.

@yitzikz

yitzikz commented Sep 29, 2022

I resolved in Windows with Docker Desktop. Procedure:

  1. minikube stop
  2. delete minikube docker container
  3. minikube start

didn't help to me

@amitmavgupta

Use docker as the driver and it works seamlessly. No point using Hyper-V.

@s4pfyr

s4pfyr commented Dec 15, 2022

Just found this: for me, the problem was resolved by removing the protocol from the proxy setting, i.e. I changed http://proxy:80 to just proxy:80

@shashwata27

shashwata27 commented Feb 3, 2023

I have given up on minikube

@brandontan

So what's the solution? I am still getting this when I run minikube --vm-driver=docker --cni=calico:
This container is having trouble accessing https://registry.k8s.io

@josvazg

josvazg commented Feb 4, 2023

I was suffering this error, although I suspect this same error has many variants in different setups. In my case I use:
Ubuntu 22.04 with docker 23.0.0 with minikube 1.26.1

I think many of the problems people face here are related to DNS resolution not working from the minikube container to its gateway in a custom docker network. At least that was what I was seeing: the network and internet were reachable using bare IP addresses, but DNS was not.

After lots of Googling, including finding this page, I had nearly given up, but then I came across this error message in my /var/log/syslog:

dockerd[155770]: time="2023-02-04T22:32:14.303114536+01:00" level=warning msg="[resolver] failed to read from DNS server: 172.17.0.1:53, query: ;k8s.gcr.io.\tIN\t A" error="read udp 172.17.0.1:50487->172.17.0.1:53: read: connection refused"

That meant that docker was trying to ask my systemd-resolved for that DNS resolution and was getting "connection refused".

BTW, my docker + systemd-resolved setup is also a bit special because I use a VPN and I need docker containers to use systemd-resolved to reach the VPN network when available. I managed to trick docker into using systemd-resolved by following this setup:
cohoe/workstation#105

Then I realized my other Ubuntu machine that was working fine had a slightly different config at /etc/docker/daemon.json, using a custom bip:

{ 
	"bip": "172.20.0.1/16",
	"dns": ["172.20.0.1"]
}

I changed systemd-resolved to do DNSStubListenerExtra=172.20.0.1 instead of 172.17.0.1 and restarted it, then changed /etc/docker/daemon.json as per above and got minikube 1.21.1 working!

If your issue also seems DNS related, it is worth trying to change the docker bip, even if you are running a different Linux distro or even a Mac. In my case, I did not identify where the conflict with IP 172.17.0.1 came from, but changing to 172.20.0.1 did the trick anyway.
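A sketch of the two config changes described above (the addresses are the examples from this comment; files are staged in /tmp, with the privileged install/restart steps commented):

```shell
# Docker: custom bridge IP plus DNS pointing at the systemd-resolved stub.
cat > /tmp/daemon.json <<'EOF'
{
  "bip": "172.20.0.1/16",
  "dns": ["172.20.0.1"]
}
EOF
# systemd-resolved: listen on that address as an extra stub (systemd >= 247).
cat > /tmp/resolved-docker.conf <<'EOF'
[Resolve]
DNSStubListenerExtra=172.20.0.1
EOF
# sudo install -m 0644 /tmp/daemon.json /etc/docker/daemon.json
# sudo install -D -m 0644 /tmp/resolved-docker.conf /etc/systemd/resolved.conf.d/docker.conf
# sudo systemctl restart systemd-resolved docker
grep -h '172\.20\.0\.1' /tmp/daemon.json /tmp/resolved-docker.conf | wc -l
```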

Still, I suspect some other issues with this same error message might differ.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 8, 2023
@keithhays69

On Ubuntu 22.04, this issue is caused by the Ubuntu firewall being on. I was unable to figure out exactly what settings would make the deploy work with the firewall on, so I turned it off long enough to do the deploy and then turned it back on, and it all worked fine.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 18, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Jul 18, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sawantnitesh76ns

Machine - Ubuntu 20.04
I was also facing the same issue.
I tried setting up HTTP_PROXY, HTTPS_PROXY, NO_PROXY.

But then, after restarting docker.socket and docker.service, it worked fine.

Steps:
minikube stop
minikube delete
sudo systemctl restart docker.socket docker.service

@cwegener

In case anyone is running into this issue with the podman rootless driver:
You'll need to make sure you have aardvark-dns installed.

@Minutis

Minutis commented Mar 6, 2024

In case anyone is running into this issue with the Ubuntu WSL on Windows:
Adding these two lines in /etc/resolv.conf

nameserver 8.8.8.8
nameserver 8.8.4.4

And then:

sudo systemctl daemon-reload
sudo systemctl restart docker

@pratikparshetti

Machine - Windows
driver - docker

I'm getting this warning: "This container is having trouble accessing https://registry.k8s.io" when I run minikube start, but minikube does start. However, I'm not able to enable the ingress addon, and I suspect that's because I can't pull the image from the registry.

I even tried pulling the images manually and continuing from there, but it still did not help.

Can someone suggest any possible solution?

@esbc-disciple

@josvazg's comment was indispensable. Got me going on the right track. A true legend.

In my case, I was using Debian 12.2 with minikube 1.32.0, Kubernetes 1.28.3, Docker 24.0.7.

If you get this error, it means you botched some kind of network config. In my case, I'm using an AWS instance and don't have it set to pull an IPv4 automatically (I'm assigning an Elastic IPv4 as needed). I had forgotten to assign the Elastic IP, so my dang EC2 instance had no way to reach the IPv4 internet.

If you get this error, think about how you set up your instance/server. Did you change DNS settings (as in the case of @josvazg)? Did you make sure you have IP-level network connectivity? Do your basic Linux connectivity commands: ping, ip addr show, etc.
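The triage ladder suggested above can be sketched bottom-up (all commands are read-only checks; the loop simply prints the checklist, since the real commands need the affected machine):

```shell
# Layer-by-layer connectivity triage, from link to HTTPS:
#   ip addr show               # do you have an address at all?
#   ip route                   # is there a default route?
#   ping -c 3 8.8.8.8          # raw IPv4 reachability
#   getent hosts k8s.gcr.io    # DNS resolution
#   curl -sS -m 5 https://k8s.gcr.io/   # TLS + HTTP
for step in "ip addr show" "ip route" "ping -c 3 8.8.8.8" \
            "getent hosts k8s.gcr.io" "curl -sS -m 5 https://k8s.gcr.io/"; do
  echo "check: $step"
done
```

Stop at the first rung that fails; everything above it (firewall, VPN, DNS stub) is where the fix belongs.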

@peterhoneder

For people using minikube with hyperkit: check that in your firewall settings, "block all incoming connections" in the Details dialog is not enabled; it will also block all traffic coming from the bridge.
