
failed to download kic base image or any fallback image (unable to access gcr.io) #8997

Closed
gupf0719 opened this issue Aug 14, 2020 · 46 comments
Labels
cause/firewall-or-proxy: When firewalls or proxies seem to be interfering
co/docker-driver: Issues related to kubernetes in container
kind/support: Categorizes issue or PR as a support question.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
long-term-support: Long-term support issues that can't be fixed in code
priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@gupf0719

~ minikube start

Pulling base image ...
E0814 15:09:05.833724 5268 cache.go:175] Error downloading kic artifacts: failed to download kic base image or any fallback image
🔥 Creating docker container (CPUs=2, Memory=6144MB) ...
🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -d /var/lib: exit status 125

~ docker pull gcr.io/k8s-minikube/kicbase:v0.0.11

Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

I can pull other images; only this one fails.
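
For anyone hitting the same symptom, a quick sanity check (assuming curl and the Docker CLI are available) is to probe the gcr.io registry endpoint directly and compare it with a registry that does work:

~ curl -m 10 -v https://gcr.io/v2/
~ curl -m 10 -v https://registry-1.docker.io/v2/
~ docker pull gcr.io/k8s-minikube/kicbase:v0.0.11

If the first request times out while the second returns an HTTP response (typically 401 from an unauthenticated registry check), that points at a firewall or proxy blocking gcr.io specifically rather than a general Docker problem.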

@tstromberg tstromberg changed the title from "minikube start failed" to "failed to download kic base image or any fallback image (unable to access gcr.io)" Aug 20, 2020
@tstromberg
Contributor

Are you in China by any chance? If so, can you provide the output of minikube start --alsologtostderr? I believe we are supposed to fall back to fetching this image from GitHub.

@medyagh - is that description of the fallback behavior accurate?

@tstromberg tstromberg added the cause/firewall-or-proxy (When firewalls or proxies seem to be interfering) and co/docker-driver (Issues related to kubernetes in container) labels Aug 20, 2020
@fancyerii

Are you in China by any chance? If so, can you provide the output of minikube start --alsologtostderr? I believe we are supposed to fall back to fetching this image from GitHub.

@medyagh - is that description of the fallback behavior accurate?

How do I fall back to GitHub?
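
One manual workaround (a sketch, not verified here: it assumes the tag is also published on Docker Hub under kicbase/stable, which is one of the fallbacks the logs show minikube trying) is to pull the base image yourself and point minikube at it with the --base-image flag:

~ docker pull kicbase/stable:v0.0.11
~ minikube start --base-image kicbase/stable:v0.0.11

Pulling from the GitHub Packages fallback (docker.pkg.github.com) directly won't work anonymously; that registry requires docker login docker.pkg.github.com with a GitHub personal access token first.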

@sharifelgamal
Collaborator

Our code should automatically fall back to GitHub once the call to GCR fails, which is why we wanted to see more detailed output from minikube start.

Try running minikube start --alsologtostderr and paste the output here so we can better debug your issue.
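
For users in China specifically, minikube also has mirror flags that avoid gcr.io entirely; a hedged sketch (flag names should be double-checked against minikube start --help for your version) is:

~ minikube start --image-mirror-country=cn \
    --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
    --alsologtostderr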

@sharifelgamal sharifelgamal added the priority/awaiting-more-evidence (Lowest priority. Possibly useful, but not yet enough support to actually get it done.) and kind/support (Categorizes issue or PR as a support question.) labels Aug 26, 2020
@ualter

ualter commented Sep 29, 2020

Hi, same problem:

I0929 14:27:58.941785 6495 start.go:112] virtualization: vbox host
I0929 14:27:58.953340 6495 out.go:109] 😄 minikube v1.13.1 on Ubuntu 20.04
😄 minikube v1.13.1 on Ubuntu 20.04
I0929 14:27:58.955436 6495 notify.go:126] Checking for updates...
I0929 14:27:58.961040 6495 driver.go:287] Setting default libvirt URI to qemu:///system
I0929 14:27:59.129774 6495 docker.go:98] docker version: linux-19.03.12
I0929 14:27:59.131274 6495 docker.go:130] overlay module found
I0929 14:27:59.141495 6495 out.go:109] ✨ Using the docker driver based on user configuration
✨ Using the docker driver based on user configuration
I0929 14:27:59.141614 6495 start.go:246] selected driver: docker
I0929 14:27:59.141622 6495 start.go:653] validating driver "docker" against
I0929 14:27:59.141642 6495 start.go:664] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:}
I0929 14:27:59.141917 6495 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0929 14:27:59.393670 6495 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0929 14:27:59.754946 6495 start_flags.go:224] no existing cluster config was found, will generate one from the flags
I0929 14:27:59.755252 6495 start_flags.go:242] Using suggested 2400MB memory alloc based on sys=9818MB, container=9818MB
I0929 14:27:59.755344 6495 start_flags.go:617] Wait components to verify : map[apiserver:true system_pods:true]
I0929 14:27:59.755357 6495 cni.go:74] Creating CNI manager for ""
I0929 14:27:59.755362 6495 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0929 14:27:59.755366 6495 start_flags.go:348] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b Memory:2400 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s}
I0929 14:27:59.835233 6495 out.go:109] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I0929 14:27:59.968396 6495 cache.go:119] Beginning downloading kic base image for docker with docker
I0929 14:27:59.980720 6495 out.go:109] 🚜 Pulling base image ...
🚜 Pulling base image ...
I0929 14:27:59.980871 6495 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I0929 14:27:59.981125 6495 cache.go:142] Downloading gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b to local daemon
I0929 14:27:59.981132 6495 image.go:140] Writing gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b to local daemon
I0929 14:28:00.178428 6495 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I0929 14:28:00.178535 6495 cache.go:53] Caching tarball of preloaded images
I0929 14:28:00.178592 6495 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I0929 14:28:00.326924 6495 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I0929 14:28:00.335715 6495 out.go:109] 💾 Downloading Kubernetes v1.19.2 preload ...
💾 Downloading Kubernetes v1.19.2 preload ...
I0929 14:28:00.335900 6495 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 -> /home/ualter/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
> preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 65.80 MiB I0929 14:28:06.171994 6495 cache.go:156] failed to download gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b, will try fallback image if available: writing daemon image: error loading image: error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images/load?quiet=0": error verifying sha256 checksum; got "79226535bd6445d1af476a2814a9b5e173a2356f3e18618e6572fbc3c4f03fed", want "d51af753c3d3a984351448ec0f85ddafc580680fd6dfce9f4b09fdb367ee1e3e"
I0929 14:28:06.185821 6495 cache.go:142] Downloading kicbase/stable:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b to local daemon
I0929 14:28:06.185859 6495 image.go:140] Writing kicbase/stable:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b to local daemon
> preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 206.23 MiBI0929 14:28:15.342240 6495 cache.go:156] failed to download kicbase/stable:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b, will try fallback image if available: writing daemon image: error loading image: error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images/load?quiet=0": error verifying sha256 checksum; got "d10763921e5d5f9b4f605e4edea70be5ca2c2b3c5b93f6a26bd74773bf88c0b4", want "d51af753c3d3a984351448ec0f85ddafc580680fd6dfce9f4b09fdb367ee1e3e"
I0929 14:28:15.342902 6495 cache.go:142] Downloading docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.12-snapshot3 to local daemon
I0929 14:28:15.343010 6495 image.go:140] Writing docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.12-snapshot3 to local daemon
> preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 211.39 MiBI0929 14:28:15.740807 6495 cache.go:156] failed to download docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.12-snapshot3, will try fallback image if available: GET https://docker.pkg.github.com/v2/kubernetes/minikube/kicbase/manifests/v0.0.12-snapshot3: UNAUTHORIZED: GitHub Docker Registry needs login
> preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 486.36 MiB
I0929 14:28:32.945472 6495 preload.go:160] saving checksum for preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 ...
I0929 14:28:33.101047 6495 preload.go:177] verifying checksumm of /home/ualter/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 ...
W0929 14:28:34.053690 6495 cache.go:59] Error downloading preloaded artifacts will continue without preload: verify: checksum of /home/ualter/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 does not match remote checksum (k�S�M��FA�"Ѭ� != �`l����yڑ�P)
I0929 14:28:34.055000 6495 cache.go:92] acquiring lock: {Name:mk142d6e6766d5773e72ebe4fa783981952620f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.055359 6495 image.go:168] retrieving image: kubernetesui/metrics-scraper:v1.0.4
I0929 14:28:34.055497 6495 cache.go:92] acquiring lock: {Name:mkb4247b6deb4d1856754559ae1afec63570c224 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.055605 6495 image.go:168] retrieving image: k8s.gcr.io/kube-proxy:v1.19.2
I0929 14:28:34.055703 6495 cache.go:92] acquiring lock: {Name:mk6419ef7ea4849fa8d951745dce5ad75e5e7312 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.055791 6495 image.go:168] retrieving image: k8s.gcr.io/kube-scheduler:v1.19.2
I0929 14:28:34.055875 6495 cache.go:92] acquiring lock: {Name:mk56697f8901446b111c41d93edee9511dcd07c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.055938 6495 image.go:168] retrieving image: k8s.gcr.io/kube-controller-manager:v1.19.2
I0929 14:28:34.056010 6495 cache.go:92] acquiring lock: {Name:mkd237761f10adb18e835ac49f01443c819cbedf Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.056072 6495 image.go:168] retrieving image: k8s.gcr.io/kube-apiserver:v1.19.2
I0929 14:28:34.056139 6495 cache.go:92] acquiring lock: {Name:mke174c42dcc2efe3e2d7e8140212b2f3c3a01dd Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.056198 6495 image.go:168] retrieving image: k8s.gcr.io/coredns:1.7.0
I0929 14:28:34.056295 6495 cache.go:92] acquiring lock: {Name:mk905b0f1eb01dba8a6eb562f1336d713c200942 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.056356 6495 image.go:168] retrieving image: k8s.gcr.io/etcd:3.4.13-0
I0929 14:28:34.056426 6495 cache.go:92] acquiring lock: {Name:mk7cb385eb8eb68dce10e2912658fa0218d7cd6c Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.056482 6495 image.go:168] retrieving image: k8s.gcr.io/pause:3.2
I0929 14:28:34.056558 6495 cache.go:92] acquiring lock: {Name:mk0e3a0d8d72f2e09ac4fee7c55e9f67f6633281 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.056660 6495 image.go:168] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v3
I0929 14:28:34.057279 6495 cache.go:92] acquiring lock: {Name:mke90538c5b5015184ab2393d886306ce17856fb Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.057348 6495 image.go:168] retrieving image: kubernetesui/dashboard:v2.0.3
I0929 14:28:34.058354 6495 profile.go:150] Saving config to /home/ualter/.minikube/profiles/minikube/config.json ...
I0929 14:28:34.058463 6495 lock.go:35] WriteFile acquiring /home/ualter/.minikube/profiles/minikube/config.json: {Name:mk20d4f963d71913b43e5bbdb6c1b7c9475f4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
E0929 14:28:34.058585 6495 cache.go:177] Error downloading kic artifacts: failed to download kic base image or any fallback image
I0929 14:28:34.059663 6495 cache.go:182] Successfully downloaded all kic artifacts
I0929 14:28:34.059773 6495 start.go:314] acquiring machines lock for minikube: {Name:mka214439089e40cd899813bbf642c19b1d410f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:34.059821 6495 start.go:318] acquired machines lock for "minikube" in 36.034µs
I0929 14:28:34.059840 6495 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b Memory:2400 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}
I0929 14:28:34.059884 6495 start.go:127] createHost starting for "" (driver="docker")
I0929 14:28:34.188366 6495 out.go:109] 🔥 Creating docker container (CPUs=2, Memory=2400MB) ...
🔥 Creating docker container (CPUs=2, Memory=2400MB) ...
I0929 14:28:34.188687 6495 start.go:164] libmachine.API.Create for "minikube" (driver="docker")
I0929 14:28:34.188871 6495 client.go:165] LocalClient.Create starting
I0929 14:28:34.188941 6495 main.go:115] libmachine: Creating CA: /home/ualter/.minikube/certs/ca.pem
I0929 14:28:34.060461 6495 image.go:176] daemon lookup for kubernetesui/dashboard:v2.0.3: Error response from daemon: reference does not exist
I0929 14:28:34.111179 6495 image.go:176] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v3: Error response from daemon: reference does not exist
I0929 14:28:34.111352 6495 image.go:176] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
I0929 14:28:34.111468 6495 image.go:176] daemon lookup for k8s.gcr.io/etcd:3.4.13-0: Error response from daemon: reference does not exist
I0929 14:28:34.111565 6495 image.go:176] daemon lookup for k8s.gcr.io/coredns:1.7.0: Error response from daemon: reference does not exist
I0929 14:28:34.111655 6495 image.go:176] daemon lookup for k8s.gcr.io/kube-apiserver:v1.19.2: Error response from daemon: reference does not exist
I0929 14:28:34.111751 6495 image.go:176] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.19.2: Error response from daemon: reference does not exist
I0929 14:28:34.111838 6495 image.go:176] daemon lookup for k8s.gcr.io/kube-scheduler:v1.19.2: Error response from daemon: reference does not exist
I0929 14:28:34.111922 6495 image.go:176] daemon lookup for k8s.gcr.io/kube-proxy:v1.19.2: Error response from daemon: reference does not exist
I0929 14:28:34.112006 6495 image.go:176] daemon lookup for kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: reference does not exist
I0929 14:28:34.636861 6495 main.go:115] libmachine: Creating client certificate: /home/ualter/.minikube/certs/cert.pem
I0929 14:28:34.648799 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v3
I0929 14:28:34.812905 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
I0929 14:28:34.839944 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.19.2
I0929 14:28:34.840416 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/pause_3.2
I0929 14:28:34.840600 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.19.2
I0929 14:28:34.847028 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
I0929 14:28:34.870246 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.19.2
I0929 14:28:34.871577 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.19.2
I0929 14:28:35.173888 6495 cli_runner.go:110] Run: docker ps -a --format {{.Names}}
I0929 14:28:35.602818 6495 cli_runner.go:110] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0929 14:28:35.649772 6495 cache.go:129] /home/ualter/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
I0929 14:28:35.650410 6495 cache.go:81] cache image "k8s.gcr.io/pause:3.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 1.59398104s
I0929 14:28:35.650433 6495 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> /home/ualter/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
I0929 14:28:36.215582 6495 oci.go:101] Successfully created a docker volume minikube
I0929 14:28:36.216584 6495 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib
I0929 14:28:36.893987 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/kubernetesui/dashboard_v2.0.3
I0929 14:28:37.273833 6495 cache.go:81] cache image "k8s.gcr.io/kube-scheduler:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.19.2" took 3.218129806s
E0929 14:28:37.274363 6495 cache.go:63] save image to file "k8s.gcr.io/kube-scheduler:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.19.2" failed: write: error verifying sha256 checksum; got "5fcd5f2b9686bb50953f382d1cfc9affd72eb6fc4f47dad7f33b31c3407002e8", want "a84ff2cd01b7f36e94f385564d1f35b2e160c197fa58cfd20373accf17b34b5e"
I0929 14:28:38.139280 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.4
I0929 14:28:38.296904 6495 cache.go:81] cache image "gcr.io/k8s-minikube/storage-provisioner:v3" -> "/home/ualter/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v3" took 4.240347041s
E0929 14:28:38.297411 6495 cache.go:63] save image to file "gcr.io/k8s-minikube/storage-provisioner:v3" -> "/home/ualter/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v3" failed: write: unexpected EOF
I0929 14:28:38.297534 6495 cache.go:81] cache image "k8s.gcr.io/kube-controller-manager:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.19.2" took 4.241665384s
E0929 14:28:38.297547 6495 cache.go:63] save image to file "k8s.gcr.io/kube-controller-manager:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.19.2" failed: write: Get "https://storage.googleapis.com/eu.artifacts.k8s-artifacts-prod.appspot.com/containers/images/sha256:6611976957bfc0e5b65d5d47e4f32015f2991ce8ed5ed5401ae37b019881fa2c": unexpected EOF
I0929 14:28:38.297607 6495 cache.go:81] cache image "k8s.gcr.io/kube-proxy:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.19.2" took 4.242121142s
E0929 14:28:38.297617 6495 cache.go:63] save image to file "k8s.gcr.io/kube-proxy:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.19.2" failed: write: unexpected EOF
I0929 14:28:38.297653 6495 cache.go:81] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 4.241518812s
E0929 14:28:38.297661 6495 cache.go:63] save image to file "k8s.gcr.io/coredns:1.7.0" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" failed: write: unexpected EOF
I0929 14:28:38.297958 6495 cache.go:81] cache image "k8s.gcr.io/kube-apiserver:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.19.2" took 4.241954662s
E0929 14:28:38.300156 6495 cache.go:63] save image to file "k8s.gcr.io/kube-apiserver:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.19.2" failed: write: unexpected EOF
I0929 14:28:38.324046 6495 cache.go:81] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 4.267738272s
E0929 14:28:38.338377 6495 cache.go:63] save image to file "k8s.gcr.io/etcd:3.4.13-0" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" failed: write: unexpected EOF
I0929 14:28:46.348295 6495 cache.go:81] cache image "kubernetesui/metrics-scraper:v1.0.4" -> "/home/ualter/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.4" took 12.293304673s
E0929 14:28:46.351540 6495 cache.go:63] save image to file "kubernetesui/metrics-scraper:v1.0.4" -> "/home/ualter/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.4" failed: write: error verifying sha256 checksum; got "8ca3615233491836e042e243eeb62bbf93c8fe6c5876b57aadb639cfe77d3adb", want "1f8ea7f93b39dd928d9ce4eb9683058b8aac4434735003fe332c4dde92e3dbd3"
I0929 14:28:48.782260 6495 cache.go:81] cache image "kubernetesui/dashboard:v2.0.3" -> "/home/ualter/.minikube/cache/images/kubernetesui/dashboard_v2.0.3" took 14.724984758s
E0929 14:28:48.787589 6495 cache.go:63] save image to file "kubernetesui/dashboard:v2.0.3" -> "/home/ualter/.minikube/cache/images/kubernetesui/dashboard_v2.0.3" failed: write: error verifying sha256 checksum; got "3ad3d7dd634b05c78d5fa2543ae68ff6e75fd71339c3264a69b0a0788725557e", want "d5ba0740de2a1168051342cb28dadfd73e356f41134ad7656f4fb4c7995325eb"
I0929 14:28:49.491893 6495 cli_runner.go:152] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: (13.274716988s)
I0929 14:28:49.491951 6495 client.go:168] LocalClient.Create took 15.303067591s
I0929 14:28:51.498897 6495 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0929 14:28:51.498969 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:28:51.580099 6495 retry.go:30] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:28:51.871968 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:28:52.061500 6495 retry.go:30] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:28:52.604832 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:28:52.717989 6495 retry.go:30] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:28:53.386394 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0929 14:28:53.572423 6495 start.go:258] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube

W0929 14:28:53.572546 6495 start.go:240] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:28:53.572574 6495 start.go:130] duration metric: createHost completed in 19.51268252s
I0929 14:28:53.572582 6495 start.go:81] releasing machines lock for "minikube", held for 19.512753378s
W0929 14:28:53.572606 6495 start.go:377] error starting host: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b' locally
sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b: Pulling from k8s-minikube/kicbase
d51af753c3d3: Pulling fs layer
fc878cd0a91c: Pulling fs layer
6154df8ff988: Pulling fs layer
fee5db0ff82f: Pulling fs layer
5af1cb370982: Pulling fs layer
6f3edf07f47c: Pulling fs layer
fe50ecc0dda0: Pulling fs layer
bc07fdd7ade1: Pulling fs layer
9335d5e85dc9: Pulling fs layer
79a32115d2cd: Pulling fs layer
ad77e393caaf: Pulling fs layer
da3861d3792f: Pulling fs layer
3b8d4e4f5c3c: Pulling fs layer
450ef1e1251c: Pulling fs layer
20ba60eac76f: Pulling fs layer
79ddc9b35b83: Pulling fs layer
b46dc25c7350: Pulling fs layer
3d82425d9581: Pulling fs layer
282c83787e4c: Pulling fs layer
6db34ffebc70: Pulling fs layer
4e220af36774: Pulling fs layer
a34b4acb4482: Pulling fs layer
fd68ba8cf361: Pulling fs layer
2ac166461221: Pulling fs layer
668caf51a011: Pulling fs layer
2b434031e1fa: Pulling fs layer
9c5e658b1181: Pulling fs layer
dfcb7e7f8f59: Pulling fs layer
20ba60eac76f: Waiting
79ddc9b35b83: Waiting
b46dc25c7350: Waiting
3d82425d9581: Waiting
282c83787e4c: Waiting
6db34ffebc70: Waiting
4e220af36774: Waiting
a34b4acb4482: Waiting
fd68ba8cf361: Waiting
2ac166461221: Waiting
668caf51a011: Waiting
2b434031e1fa: Waiting
9c5e658b1181: Waiting
dfcb7e7f8f59: Waiting
fee5db0ff82f: Waiting
5af1cb370982: Waiting
6f3edf07f47c: Waiting
fe50ecc0dda0: Waiting
bc07fdd7ade1: Waiting
9335d5e85dc9: Waiting
ad77e393caaf: Waiting
da3861d3792f: Waiting
3b8d4e4f5c3c: Waiting
450ef1e1251c: Waiting
79a32115d2cd: Waiting
6154df8ff988: Verifying Checksum
6154df8ff988: Download complete
fc878cd0a91c: Verifying Checksum
fc878cd0a91c: Download complete
fee5db0ff82f: Verifying Checksum
fee5db0ff82f: Download complete
5af1cb370982: Verifying Checksum
5af1cb370982: Download complete
d51af753c3d3: Verifying Checksum
bc07fdd7ade1: Verifying Checksum
docker: filesystem layer verification failed for digest sha256:bc07fdd7ade1f4873668f3d205fbb3ea9fc3689cd5497ba23e35a1e4bce697bf.
See 'docker run --help'.
I0929 14:28:53.573238 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:28:53.683018 6495 delete.go:82] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
W0929 14:28:53.683217 6495 out.go:145] 🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b' locally
sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b: Pulling from k8s-minikube/kicbase
[... identical layer pull output and checksum failure as above ...]

🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b' locally
sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b: Pulling from k8s-minikube/kicbase
[... identical layer pull output and checksum failure as above ...]

I0929 14:28:53.683266 6495 start.go:392] Will try again in 5 seconds ...
I0929 14:28:58.684581 6495 start.go:314] acquiring machines lock for minikube: {Name:mka214439089e40cd899813bbf642c19b1d410f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0929 14:28:58.684805 6495 start.go:318] acquired machines lock for "minikube" in 81.948µs
I0929 14:28:58.684920 6495 start.go:94] Skipping create...Using existing machine configuration
I0929 14:28:58.685091 6495 fix.go:54] fixHost starting:
I0929 14:28:58.685377 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:28:58.795454 6495 fix.go:107] recreateIfNeeded on minikube: state= err=unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:28:58.795634 6495 fix.go:112] machineExists: false. err=machine does not exist
I0929 14:28:58.822458 6495 out.go:109] 🤷 docker "minikube" container is missing, will recreate.
🤷 docker "minikube" container is missing, will recreate.
I0929 14:28:58.822490 6495 delete.go:124] DEMOLISHING minikube ...
I0929 14:28:58.822598 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
W0929 14:28:58.900872 6495 stop.go:75] unable to get state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:28:58.902201 6495 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:28:58.902695 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:28:59.000495 6495 delete.go:82] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:28:59.000649 6495 cli_runner.go:110] Run: docker container inspect -f {{.Id}} minikube
I0929 14:28:59.150118 6495 kic.go:275] could not find the container minikube to remove it. will try anyways
I0929 14:28:59.150435 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
W0929 14:28:59.287721 6495 oci.go:82] error getting container status, will try to delete anyways: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:28:59.288604 6495 cli_runner.go:110] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
I0929 14:28:59.408129 6495 oci.go:585] error shutdown minikube: docker exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:00.409769 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:29:00.504159 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:00.504183 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited
I0929 14:29:00.504199 6495 retry.go:30] will retry after 468.857094ms: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:00.973753 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:29:01.050559 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:01.051730 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited
I0929 14:29:01.051751 6495 retry.go:30] will retry after 693.478123ms: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:01.779802 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:29:01.869558 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:01.869772 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited
I0929 14:29:01.869801 6495 retry.go:30] will retry after 1.335175957s: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:03.207098 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:29:03.282909 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:03.283005 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited
I0929 14:29:03.283023 6495 retry.go:30] will retry after 954.512469ms: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:04.245969 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:29:04.339093 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:04.339274 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited
I0929 14:29:04.339295 6495 retry.go:30] will retry after 1.661814363s: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:06.003572 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:29:06.091355 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:06.092090 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited
I0929 14:29:06.092696 6495 retry.go:30] will retry after 2.266618642s: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:08.367039 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:29:08.443372 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:08.443526 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited
I0929 14:29:08.443566 6495 retry.go:30] will retry after 4.561443331s: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:13.011490 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:29:13.087924 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:13.088035 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited
I0929 14:29:13.088055 6495 retry.go:30] will retry after 8.67292976s: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:21.763573 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0929 14:29:21.889097 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:21.889208 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited
I0929 14:29:21.889236 6495 oci.go:86] couldn't shut down minikube (might be okay): verify shutdown: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: minikube

I0929 14:29:21.889309 6495 cli_runner.go:110] Run: docker rm -f -v minikube
W0929 14:29:21.968769 6495 delete.go:139] delete failed (probably ok)
I0929 14:29:21.968874 6495 fix.go:119] Sleeping 1 second for extra luck!
I0929 14:29:22.969221 6495 start.go:127] createHost starting for "" (driver="docker")
I0929 14:29:22.982951 6495 out.go:109] 🔥 Creating docker container (CPUs=2, Memory=2400MB) ...
🔥 Creating docker container (CPUs=2, Memory=2400MB) ...
I0929 14:29:22.983044 6495 start.go:164] libmachine.API.Create for "minikube" (driver="docker")
I0929 14:29:22.983073 6495 client.go:165] LocalClient.Create starting
I0929 14:29:22.983389 6495 main.go:115] libmachine: Reading certificate data from /home/ualter/.minikube/certs/ca.pem
I0929 14:29:22.984484 6495 main.go:115] libmachine: Decoding PEM data...
I0929 14:29:22.984503 6495 main.go:115] libmachine: Parsing certificate...
I0929 14:29:22.984865 6495 main.go:115] libmachine: Reading certificate data from /home/ualter/.minikube/certs/cert.pem
I0929 14:29:22.985247 6495 main.go:115] libmachine: Decoding PEM data...
I0929 14:29:22.985264 6495 main.go:115] libmachine: Parsing certificate...
I0929 14:29:22.986410 6495 cli_runner.go:110] Run: docker ps -a --format {{.Names}}
I0929 14:29:23.057674 6495 cli_runner.go:110] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0929 14:29:23.127613 6495 oci.go:101] Successfully created a docker volume minikube
I0929 14:29:23.127688 6495 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib
I0929 14:29:41.301011 6495 cli_runner.go:152] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: (18.17294451s)
I0929 14:29:41.301198 6495 client.go:168] LocalClient.Create took 18.318112482s
I0929 14:29:43.301790 6495 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0929 14:29:43.303010 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:29:43.458228 6495 retry.go:30] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:43.788380 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:29:43.868931 6495 retry.go:30] will retry after 267.848952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:44.138748 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:29:44.290257 6495 retry.go:30] will retry after 495.369669ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:44.787366 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:29:44.887414 6495 retry.go:30] will retry after 690.236584ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:45.578535 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0929 14:29:45.658394 6495 start.go:258] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube

W0929 14:29:45.658544 6495 start.go:240] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:45.658564 6495 start.go:130] duration metric: createHost completed in 22.68923508s
I0929 14:29:45.658637 6495 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0929 14:29:45.659238 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:29:45.755825 6495 retry.go:30] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:46.002610 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:29:46.127517 6495 retry.go:30] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:46.421940 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:29:46.525414 6495 retry.go:30] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:46.972838 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0929 14:29:47.119769 6495 retry.go:30] will retry after 994.852695ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:48.115810 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0929 14:29:48.204084 6495 start.go:258] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube

W0929 14:29:48.204191 6495 start.go:240] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:

stderr:
Error: No such container: minikube
I0929 14:29:48.204217 6495 fix.go:56] fixHost completed within 49.519128251s
I0929 14:29:48.204230 6495 start.go:81] releasing machines lock for "minikube", held for 49.51932234s
W0929 14:29:48.204396 6495 out.go:145] 😿 Failed to start docker container. Running "minikube delete" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b' locally
sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b: Pulling from k8s-minikube/kicbase
[... same layer list as above ...]
6154df8ff988: Verifying Checksum
6154df8ff988: Download complete
fc878cd0a91c: Verifying Checksum
fc878cd0a91c: Download complete
fee5db0ff82f: Verifying Checksum
fee5db0ff82f: Download complete
5af1cb370982: Verifying Checksum
5af1cb370982: Download complete
d51af753c3d3: Verifying Checksum
6f3edf07f47c: Verifying Checksum
bc07fdd7ade1: Verifying Checksum
9335d5e85dc9: Verifying Checksum
ad77e393caaf: Verifying Checksum
docker: filesystem layer verification failed for digest sha256:ad77e393caaf4f63b6323ab36f96d36f446e67e7c0cfc8358c9fa2b6e1c0def2.
See 'docker run --help'.

😿 Failed to start docker container. Running "minikube delete" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b' locally
sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b: Pulling from k8s-minikube/kicbase
[... identical layer pull output and checksum failure as above ...]

I0929 14:29:48.225333 6495 out.go:109]

W0929 14:29:48.227573 6495 out.go:145] ❌ Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125
W0929 14:29:48.233977 6495 out.go:145]

W0929 14:29:48.234037 6495 out.go:145] 😿 If the above advice does not help, please let us know:
😿 If the above advice does not help, please let us know:
W0929 14:29:48.234070 6495 out.go:145] 👉 https://github.com/kubernetes/minikube/issues/new/choose
👉 https://github.com/kubernetes/minikube/issues/new/choose
I0929 14:29:48.253091 6495 out.go:109]
``
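The repeated "filesystem layer verification failed" error above means a downloaded layer did not match its expected sha256 digest; in practice this is often caused by an intercepting proxy or an unreliable connection corrupting the transfer rather than by minikube itself. As a quick check independent of minikube (assuming the Docker CLI is available), you can pull the image directly and compare the stored digest against the one minikube requested:

```
# Pull the kicbase image directly, bypassing minikube
docker pull gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3

# List stored digests; this should include the sha256:1d687ba5... digest
# that appears in the minikube output above
docker images --digests gcr.io/k8s-minikube/kicbase
```

If the direct pull fails with the same verification error, the problem sits between the Docker daemon and gcr.io, not in minikube.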

@jiawei666

same problem

@qiaocco

qiaocco commented Jan 25, 2021

same issue

@qiaocco

qiaocco commented Jan 25, 2021

If you are in China, you can use:
minikube start --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
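Before involving minikube, it may be worth confirming that the mirror is reachable from the Docker daemon at all. A minimal check along these lines (the image path and tag here are assumptions; the kicbase coordinates on the mirror vary by minikube version):

```
# Hypothetical tag; substitute the kicbase version your minikube release expects
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.17
```

If this pull also times out, the problem is network access to the mirror rather than minikube's configuration.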

@baymax55

I have the same problem

@baymax55

If you are in China, you can use:
minikube start --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'

Doesn't work

@Crachman

Works!!!

(base) ➜ ~ minikube start --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
😄 minikube v1.17.1 on Darwin 11.2.1
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🤷 docker "minikube" container is missing, will recreate.
🔥 Creating docker container (CPUs=2, Memory=3890MB) ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.2 …
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@spowelljr
Member

@baymax55 Can you expand on what's not working? Are you still getting the same error as before, or are you getting a new error now?

@tstromberg
Contributor

I'm closing this issue, as it should be fixed in recent releases of minikube (v1.17+). If the issue still exists in the most recent release of minikube, please feel free to re-open it by replying /reopen

If someone sees a similar issue to this one, please open a new issue, as replies to closed issues are unlikely to be noticed.

Thank you for opening the issue!

@vonbrand

/reopen
Same on Fedora 33, x86_64, minikube-1.18.1

@k8s-ci-robot
Contributor

@vonbrand: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen
Same on Fedora 33, x86_64, minikube-1.18.1

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@spowelljr spowelljr reopened this Mar 29, 2021
@sharifelgamal
Collaborator

@vonbrand what error are you getting exactly?

@spowelljr spowelljr added long-term-support Long-term support issues that can't be fixed in code and removed triage/long-term-support labels May 19, 2021
@chris-ryu

❯ minikube start
😄 minikube v1.20.0 on Darwin 11.4
▪ KUBECONFIG=/Users/chrisryu/.kube/config
🆕 Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
> gcr.io/k8s-minikube/kicbase...: 312.94 MiB / 312.94 MiB 100.00% 8.64 MiB
> gcr.io/k8s-minikube/kicbase...: 312.94 MiB / 312.94 MiB 100.00% 7.40 MiB
> index.docker.io/kicbase/sta...: 358.10 MiB / 358.10 MiB 100.00% 20.76 Mi
> index.docker.io/kicbase/sta...: 358.10 MiB / 358.10 MiB 100.00% 5.08 MiB
❗ minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3, but successfully downloaded kicbase/stable:v0.0.22 as a fallback image
E0606 21:58:39.655738 21709 cache.go:189] Error downloading kic artifacts: failed to download kic base image or any fallback image
🤷 docker "minikube" container is missing, will recreate.
🔥 Creating docker container (CPUs=2, Memory=1990MB) ...
🤦 StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
See 'docker run --help'.

🤷 docker "minikube" container is missing, will recreate.
🔥 Creating docker container (CPUs=2, Memory=1990MB) ...
😿 Failed to start docker container. Running "minikube delete" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
See 'docker run --help'.

❌ Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
See 'docker run --help'.

╭────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose
│ │
│ Please attach the following file to the GitHub issue: │
│ - /Users/chrisryu/.minikube/logs/lastStart.txt │
│ │
╰────────────────────────────────────────────────────────────────────╯

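The log above shows minikube downloading kicbase/stable:v0.0.22 into its cache, while the subsequent docker run still tries to pull the same image from registry-1.docker.io and times out. A workaround worth trying in that situation is to pull the fallback image into the Docker daemon manually and point minikube at it explicitly; a sketch, assuming the tag from the log above:

```
# Make the fallback image available to the Docker daemon directly
docker pull kicbase/stable:v0.0.22

# Start minikube against the locally available base image
minikube start --base-image kicbase/stable:v0.0.22
```

Once the image is present locally, the docker run step no longer needs to reach the registry.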

@bhundven

bhundven commented Jun 9, 2021

I get the same thing with podman+minikube:

minikube-podman-error.txt

@baymax55

baymax55 commented Jul 2, 2021

@baymax55 Can you expand on what's not working, are you still getting the same error as before or are you getting a new error now?

When I use the latest version of minikube, there is no such problem. Thanks!

@sharifelgamal
Collaborator

Based on the error message posted here, it looks like there's a strange caching bug for kicbase when restarting an existing cluster. Can @bhundven or @chris-ryu confirm that this is still happening on minikube 1.22.0? For fresh starts, it looks like the fallback mechanism is working as intended.
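For anyone who can reproduce the restart case, one way to rule out a stale cached base image is to clear minikube's kic cache before recreating the cluster. A rough sketch, assuming the default cache layout (adjust the path if you have set MINIKUBE_HOME):

```
minikube delete

# Downloaded kicbase tarballs live under minikube's cache directory
rm -rf ~/.minikube/cache/kic

minikube start --alsologtostderr
```

This forces a fresh download of the base image instead of reusing whatever is cached.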

@cuiko

cuiko commented Jan 4, 2022

If you are in China, you can use minikube start --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'

In my case, setting HTTPS_PROXY in the shell didn't work,
but turning on the "set as system proxy" option in Clash works for me. :(
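That outcome is consistent with how image pulls work: the pull is performed by the Docker daemon (or Docker Desktop's VM), not by your shell, so an HTTPS_PROXY exported in the shell never reaches it. On Linux, one way to hand the proxy to the daemon is a systemd drop-in; a sketch, where the address 127.0.0.1:7890 is an assumption (Clash's common default port) and the NO_PROXY entry matches the minikube IP mentioned elsewhere in this thread:

```
# /etc/systemd/system/docker.service.d/http-proxy.conf (assumed location)
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:7890"
Environment="HTTPS_PROXY=http://127.0.0.1:7890"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.49.2"
```

After writing the file, run sudo systemctl daemon-reload && sudo systemctl restart docker so the daemon picks up the new environment. Docker Desktop users would instead set the proxy in Docker Desktop's settings (Settings → Resources → Proxies), which matches what worked here.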

@aonoa

aonoa commented Jan 22, 2022

(base) minikube start --driver=podman
😄 minikube v1.25.1 on Ubuntu 20.04
✨ Using the podman driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.23.1 preload ...
> preloaded-images-k8s-v16-v1...: 281.97 MiB / 504.42 MiB 55.90% 86.20 KiB
> preloaded-images-k8s-v16-v1...: 504.42 MiB / 504.42 MiB 100.00% 585.62 K
E0122 11:53:53.241564 1042335 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=4000MB) ...
🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.29 -d /var/lib: exit status 125
stdout:

stderr:
Trying to pull gcr.io/k8s-minikube/kicbase:v0.0.29...
Error: initializing source docker://gcr.io/k8s-minikube/kicbase:v0.0.29: pinging container registry gcr.io: Get "https://gcr.io/v2/": dial tcp 64.233.189.82:443: i/o timeout

🔄 Restarting existing podman container for "minikube" ...
😿 Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube

❌ Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose
│ │
│ Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue. │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

This works for me!!!!

(base) minikube start --driver=podman --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
😄 minikube v1.25.1 on Ubuntu 20.04
✨ Using the podman driver based on user configuration
✅ Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
> registry.cn-hangzhou.aliyun...: 378.98 MiB / 378.98 MiB 100.00% 5.64 MiB
E0122 12:17:32.855630 1050848 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=4000MB) ...
🐳 Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
▪ kubelet.housekeeping-interval=5m
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@danieldonoghue

danieldonoghue commented Jan 24, 2022

I'm not in China and I get the same issue...

 % minikube start --driver=podman --container-runtime=cri-o --alsologtostderr
I0124 08:05:53.532101   20382 out.go:297] Setting OutFile to fd 1 ...
I0124 08:05:53.532209   20382 out.go:349] isatty.IsTerminal(1) = true
I0124 08:05:53.532212   20382 out.go:310] Setting ErrFile to fd 2...
I0124 08:05:53.532216   20382 out.go:349] isatty.IsTerminal(2) = true
I0124 08:05:53.532278   20382 root.go:315] Updating PATH: /Users/danield/.minikube/bin
I0124 08:05:53.532480   20382 out.go:304] Setting JSON to false
I0124 08:05:53.563745   20382 start.go:112] hostinfo: {"hostname":"****","uptime":604198,"bootTime":1642403755,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.1","kernelVersion":"21.2.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"****"}
W0124 08:05:53.563881   20382 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0124 08:05:53.585159   20382 out.go:176] 😄  minikube v1.25.1 on Darwin 12.1 (arm64)
😄  minikube v1.25.1 on Darwin 12.1 (arm64)
I0124 08:05:53.585293   20382 notify.go:174] Checking for updates...
I0124 08:05:53.585535   20382 config.go:176] Loaded profile config "minikube": Driver=podman, ContainerRuntime=docker, KubernetesVersion=v1.23.1
I0124 08:05:53.585831   20382 driver.go:344] Setting default libvirt URI to qemu:///system
I0124 08:05:53.758862   20382 podman.go:121] podman version: 3.4.4
I0124 08:05:53.779200   20382 out.go:176] ✨  Using the podman (experimental) driver based on existing profile
✨  Using the podman (experimental) driver based on existing profile
I0124 08:05:53.779218   20382 start.go:280] selected driver: podman
I0124 08:05:53.779221   20382 start.go:795] validating driver "podman" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:1953 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.*.*.*/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:}
I0124 08:05:53.779316   20382 start.go:806] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0124 08:05:53.779334   20382 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
I0124 08:05:53.779476   20382 cli_runner.go:133] Run: podman system info --format json
I0124 08:05:53.877773   20382 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:247001088 MemTotal:2048376832 OCIRuntime:{Name:crun Package:crun-1.4-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.4
commit: 3daded072ef008ef0840e8eccb0b52a7efbd165d
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.15.10-200.fc35.aarch64 Os:linux Rootless:false Uptime:46h 56m 21.91s (Approximately 1.92 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/var/home/core/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/home/core/.local/share/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/run/user/1000/containers VolumePath:/var/home/core/.local/share/containers/storage/volumes}}
I0124 08:05:53.878118   20382 cni.go:93] Creating CNI manager for ""
I0124 08:05:53.878132   20382 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0124 08:05:53.878136   20382 start_flags.go:300] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:1953 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.*.*.*/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:}
I0124 08:05:53.896822   20382 out.go:176] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0124 08:05:53.896891   20382 cache.go:120] Beginning downloading kic base image for podman with docker
I0124 08:05:53.935207   20382 out.go:176] 🚜  Pulling base image ...
🚜  Pulling base image ...
I0124 08:05:53.935255   20382 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime docker
I0124 08:05:53.935288   20382 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b to local cache
I0124 08:05:53.935391   20382 preload.go:148] Found local preload: /Users/danield/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-arm64.tar.lz4
I0124 08:05:53.935413   20382 cache.go:57] Caching tarball of preloaded images
I0124 08:05:53.935466   20382 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory
I0124 08:05:53.935491   20382 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory, skipping pull
I0124 08:05:53.935498   20382 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in cache, skipping pull
I0124 08:05:53.935502   20382 preload.go:174] Found /Users/danield/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0124 08:05:53.935505   20382 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b as a tarball
I0124 08:05:53.935508   20382 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on docker
I0124 08:05:53.935583   20382 profile.go:147] Saving config to /Users/danield/.minikube/profiles/minikube/config.json ...
E0124 08:05:53.936168   20382 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
I0124 08:05:53.936174   20382 cache.go:208] Successfully downloaded all kic artifacts
I0124 08:05:53.936189   20382 start.go:313] acquiring machines lock for minikube: {Name:mk04264be43adb0b61089022ae9ebb8e555690a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0124 08:05:53.936242   20382 start.go:317] acquired machines lock for "minikube" in 26.875µs
I0124 08:05:53.936255   20382 start.go:93] Skipping create...Using existing machine configuration
I0124 08:05:53.936260   20382 fix.go:55] fixHost starting:
I0124 08:05:53.936508   20382 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Status}}
W0124 08:05:54.021228   20382 cli_runner.go:180] podman container inspect minikube --format={{.State.Status}} returned with exit code 125
I0124 08:05:54.021282   20382 fix.go:108] recreateIfNeeded on minikube: state= err=unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container "minikube"
I0124 08:05:54.021304   20382 fix.go:113] machineExists: true. err=unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container "minikube"
W0124 08:05:54.021316   20382 fix.go:134] unexpected machine state, will restart: unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container "minikube"
I0124 08:05:54.041360   20382 out.go:176] 🔄  Restarting existing podman container for "minikube" ...
🔄  Restarting existing podman container for "minikube" ...
I0124 08:05:54.041663   20382 cli_runner.go:133] Run: podman start minikube
W0124 08:05:54.125588   20382 cli_runner.go:180] podman start minikube returned with exit code 125
I0124 08:05:54.125757   20382 cli_runner.go:133] Run: podman inspect minikube
I0124 08:05:54.209495   20382 errors.go:84] Postmortem inspect ("podman inspect minikube"): -- stdout --
[
    {
        "Name": "minikube",
        "Driver": "local",
        "Mountpoint": "/var/home/core/.local/share/containers/storage/volumes/minikube/_data",
        "CreatedAt": "2022-01-21T17:17:27.952383322Z",
        "Labels": {
            "created_by.minikube.sigs.k8s.io": "true",
            "name.minikube.sigs.k8s.io": "minikube"
        },
        "Scope": "local",
        "Options": {}
    }
]

-- /stdout --
I0124 08:05:54.209692   20382 cli_runner.go:133] Run: podman logs --timestamps minikube
W0124 08:05:54.290757   20382 cli_runner.go:180] podman logs --timestamps minikube returned with exit code 125
W0124 08:05:54.290796   20382 errors.go:89] Failed to get postmortem logs. podman logs --timestamps minikube :podman logs --timestamps minikube: exit status 125
stdout:

stderr:
Error: channel "123" found, 0-3 supported: lost synchronization with multiplexed stream
I0124 08:05:54.290913   20382 cli_runner.go:133] Run: podman system info --format json
I0124 08:05:54.384618   20382 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:246255616 MemTotal:2048376832 OCIRuntime:{Name:crun Package:crun-1.4-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.4
commit: 3daded072ef008ef0840e8eccb0b52a7efbd165d
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.15.10-200.fc35.aarch64 Os:linux Rootless:false Uptime:46h 56m 22.46s (Approximately 1.92 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/var/home/core/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/home/core/.local/share/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/run/user/1000/containers VolumePath:/var/home/core/.local/share/containers/storage/volumes}}
I0124 08:05:54.384715   20382 errors.go:106] postmortem podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:246255616 MemTotal:2048376832 OCIRuntime:{Name:crun Package:crun-1.4-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.4
commit: 3daded072ef008ef0840e8eccb0b52a7efbd165d
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.15.10-200.fc35.aarch64 Os:linux Rootless:false Uptime:46h 56m 22.46s (Approximately 1.92 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/var/home/core/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/home/core/.local/share/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/run/user/1000/containers VolumePath:/var/home/core/.local/share/containers/storage/volumes}}
I0124 08:05:54.384835   20382 network_create.go:254] running [podman network inspect minikube] to gather additional debugging logs...
I0124 08:05:54.384883   20382 cli_runner.go:133] Run: podman network inspect minikube
I0124 08:05:54.467405   20382 network_create.go:259] output of [podman network inspect minikube]: -- stdout --
[
    {
        "args": {
            "podman_labels": {
                "created_by.minikube.sigs.k8s.io": "true"
            }
        },
        "cniVersion": "0.4.0",
        "name": "minikube",
        "plugins": [
            {
                "bridge": "cni-podman1",
                "hairpinMode": true,
                "ipMasq": true,
                "ipam": {
                    "ranges": [
                        [
                            {
                                "gateway": "192.*..*.*",
                                "subnet": "192.*.*.0/24"
                            }
                        ]
                    ],
                    "routes": [
                        {
                            "dst": "0.0.0.0/0"
                        }
                    ],
                    "type": "host-local"
                },
                "isGateway": true,
                "type": "bridge"
            },
            {
                "capabilities": {
                    "portMappings": true
                },
                "type": "portmap"
            },
            {
                "backend": "",
                "type": "firewall"
            },
            {
                "type": "tuning"
            },
            {
                "capabilities": {
                    "aliases": true
                },
                "domainName": "dns.podman",
                "type": "dnsname"
            },
            {
                "capabilities": {
                    "portMappings": true
                },
                "type": "podman-machine"
            }
        ]
    }
]

-- /stdout --
I0124 08:05:54.467622   20382 cli_runner.go:133] Run: podman system info --format json
I0124 08:05:54.564668   20382 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:245407744 MemTotal:2048376832 OCIRuntime:{Name:crun Package:crun-1.4-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.4
commit: 3daded072ef008ef0840e8eccb0b52a7efbd165d
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.15.10-200.fc35.aarch64 Os:linux Rootless:false Uptime:46h 56m 22.66s (Approximately 1.92 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/var/home/core/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/home/core/.local/share/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/run/user/1000/containers VolumePath:/var/home/core/.local/share/containers/storage/volumes}}
I0124 08:05:54.565294   20382 cli_runner.go:133] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
W0124 08:05:54.647004   20382 cli_runner.go:180] podman container inspect -f {{.NetworkSettings.IPAddress}} minikube returned with exit code 125
I0124 08:05:54.647232   20382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0124 08:05:54.647290   20382 cli_runner.go:133] Run: podman version --format {{.Version}}
I0124 08:05:54.749850   20382 cli_runner.go:133] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0124 08:05:54.830999   20382 cli_runner.go:180] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0124 08:05:54.831108   20382 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container "minikube"
I0124 08:05:55.109337   20382 cli_runner.go:133] Run: podman version --format {{.Version}}
I0124 08:05:55.274690   20382 cli_runner.go:133] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0124 08:05:55.361516   20382 cli_runner.go:180] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0124 08:05:55.361624   20382 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container "minikube"
I0124 08:05:55.903872   20382 cli_runner.go:133] Run: podman version --format {{.Version}}
I0124 08:05:56.079913   20382 cli_runner.go:133] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0124 08:05:56.163869   20382 cli_runner.go:180] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
W0124 08:05:56.163991   20382 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container "minikube"

W0124 08:05:56.164019   20382 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container "minikube"
I0124 08:05:56.164032   20382 fix.go:57] fixHost completed within 2.227768375s
I0124 08:05:56.164045   20382 start.go:80] releasing machines lock for "minikube", held for 2.227794s
W0124 08:05:56.164058   20382 start.go:566] error starting host: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container "minikube"
W0124 08:05:56.164186   20382 out.go:241] 🤦  StartHost failed, but will try again: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container "minikube"

Running macOS 12.1 on Apple (M1) silicon.

@dkdndes

dkdndes commented Jan 25, 2022

On macOS 12.1 with an Intel CPU it is the same. The script gave me the hint to delete the minikube cluster after it tried a different IP. I tried, and the output was the following:

$  minikube start --driver=docker

😄  minikube v1.25.1 on Darwin 12.1
❗  Deleting existing cluster minikube with different driver podman due to --delete-on-failure flag set by the user. 

💢  Exiting due to GUEST_DRIVER_MISMATCH: The existing "minikube" cluster was created using the "podman" driver, which is incompatible with requested "docker" driver.
💡  Suggestion: Delete the existing 'minikube' cluster using: 'minikube delete', or start the existing 'minikube' cluster using: 'minikube start --driver=podman'

$ minikube delete

🔥  Deleting "minikube" in podman ...
🔥  Removing /Users/peter/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
❯ minikube start --driver=podman

😄  minikube v1.25.1 on Darwin 12.1
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0125 21:50:07.498966   84689 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=1965MB) ...
2022/01/25 21:50:58 tcpproxy: for incoming conn 127.0.0.1:49216, error dialing "192.168.127.2:37237": connect tcp 192.168.127.2:37237: connection was refused
2022/01/25 21:51:01 tcpproxy: for incoming conn 127.0.0.1:49217, error dialing "192.168.127.2:37237": connect tcp 192.168.127.2:37237: connection was refused
2022/01/25 21:51:04 tcpproxy: for incoming conn 127.0.0.1:49219, error dialing "192.168.127.2:37237": connect tcp 192.168.127.2:37237: connection was refused
...

I switched off all firewall functionality before I tried podman vs. docker vs. footloose etc.

I had issues with Footloose related to systemd v2 in Docker Desktop 4.3; not sure if that is related in any way. Of all the options I am testing, only docker works for the moment.

@hobbytp

hobbytp commented Feb 17, 2022

I tried the following command in my WSL2, and it works:

$ minikube start --network-plugin=cni --cni=calico --driver=docker --base-image "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531"

😄 minikube v1.25.1 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.23.1 preload ...
🤷 docker "minikube" container is missing, will recreate.
🔥 Creating docker container (CPUs=2, Memory=3100MB) ...
🌐 Found network options:
▪ HTTP_PROXY=www-proxy.lmera.ericsson.se:8080
❗ You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
📘 Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
▪ HTTPS_PROXY=www-proxy.lmera.ericsson.se:8080
🐳 Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
▪ env HTTP_PROXY=www-proxy.lmera.ericsson.se:8080
▪ env HTTPS_PROXY=www-proxy.lmera.ericsson.se:8080
▪ kubelet.housekeeping-interval=5m
> kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
> kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
> kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
> kubectl: 44.43 MiB / 44.43 MiB [--------------] 100.00% 7.30 MiB p/s 6.3s
> kubeadm: 43.12 MiB / 43.12 MiB [---------------] 100.00% 2.10 MiB p/s 21s
> kubelet: 118.75 MiB / 118.75 MiB [-------------] 100.00% 4.63 MiB p/s 26s
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring Calico (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

NOTE: I got the base image string from https://minikube.sigs.k8s.io/docs/commands/start/

@imoonkin

Same here, running in WSL2. It works after I set the manual proxy configuration in Docker Desktop.

@Apocaly-pse

Use docker pull kicbase/stable:v0.0.32, then minikube start --vm-driver=docker --base-image="kicbase/stable:v0.0.32" --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --kubernetes-version=v1.23.8; that fixed the issue for me.

@wucongquan

If you are in China, try running:
minikube start --image-mirror-country='cn' --image-repository='auto'

@luisgrisolia

luisgrisolia commented Aug 23, 2022

Reconfiguration may fix the issue if you are not in China:

minikube delete --all --purge
minikube start

#10343 (comment)

@haiboself

If you are in China, try running: minikube start --image-mirror-country='cn' --image-repository='auto'

I use a Mac; it doesn't work for me.

@loveyandex

Remove the ~/.minikube folder.

@kertzi

kertzi commented Nov 3, 2022

Hello,
I'm hitting this kind of issue also.
I'm not in China but in Europe. I have the latest minikube, 1.27.1.

I have tried minikube delete --all --purge
and then minikube start --driver=podman --container-runtime=containerd

My OS is Linux (Manjaro).

It looks like some of the layers failed to download:
Failed, retrying in 1s ... (1/3). Error: copying system image from manifest list: reading blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0: Get \"https://storage.googleapis.com/artifacts.k8s-minikube.appspot.com/containers/images/sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0\": read tcp 192.168.68.115:55026->216.58.210.144:443: read: connection reset by peer"

Any help is appreciated, thank you!

Here is the full log:

😄  minikube v1.27.1 on Arch 22.0.0
✨  Using the podman driver based on user configuration
📌  Using Podman driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.25.2 preload ...
    > preloaded-images-k8s-v18-v1...:  406.52 MiB / 406.52 MiB  100.00% 169.59 
E1103 09:40:26.985377  781710 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=7900MB) ...- 

🤦  StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.35 -d /var/lib: exit status 125
stdout:

stderr:
Trying to pull gcr.io/k8s-minikube/kicbase:v0.0.35...
Getting image source signatures
Copying blob sha256:b987c1044662ddf8ece45adc09ae43297caa03fbb0cd93807ec33446b5e4d699
Copying blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0
Copying blob sha256:8a5db1d791f6f64e5d26cc93b24f9334bee02a07d0c580e05235a4c095a6b228
Copying blob sha256:675920708c8bf10fbd02693dc8f43ee7dbe0a99cdfd55e06e6f1a8b43fd08e3f
Copying blob sha256:5929ee7263605a04298a055a110794e1c1c5d9ae6a1f3af3f35d1f5880757eeb
Copying blob sha256:145be87ba1e7d69f6b2a87da88bb0261eae002db08c503f8f5ebe3927e89dd48
Copying blob sha256:bf918f037955ed940fc47e8455a18c74fa0ff7bc1e5b08821b8ed878e3bb1940
Copying blob sha256:5bc974652c0f55c63f70c9e2e0e0d8f93026643f417590971e46c8ee01843f1b
Copying blob sha256:a8112bdd23a5e0a12c58ecbe50036899a4a05aa713e3543a377e30daf901ab12
Copying blob sha256:b43426408aeaa9e86b262f4d396a66f705a364608821e292bd511a61208b7134
Copying blob sha256:7ca3354f4ad095905f4c513c932c9ff49386caaafc4dfdc0142a7638aae7c8ff
Copying blob sha256:658a871584c730486c914141d2a4dceec9c5f589e2e4d130f2596bdfd4131419
Copying blob sha256:df38ef75fe038e9343adfa187ad7645d9c1e7965a338cfa71a143ee2ea091fbb
Copying blob sha256:8bf897b8cca689bab4155fbbd5f5056d52d00d1fb0db520919e4ea86e16ca38f
Copying blob sha256:5b451afb517d4ae60519ac43958a7ef4826a0bef035852e4898a083cc9c66d7d
Copying blob sha256:23b280ea6107386ed0bb1818c6496c4d0099067dc815e5d4d085ded90a0c2396
Copying blob sha256:73bcc9432e63bc8cc4da01288063f611cb8fcae657397cf5ec9b6e501892c6e2
Copying blob sha256:a63abca9d6b2ec07605bb34d69539e49b37f37d4e0301da2c192bd0addcd2e42
Copying blob sha256:e3fa3541a9c347099e7816f1dc89b67af773d14c83f62bafc47ef5b4309f7596
Copying blob sha256:221a234a2a2ce1fb364f7160b552f7c95c8eb6ca0746be4ba018b70a98839009
Copying blob sha256:eba2f175934d9241479b61f55aed01f369e66e0e6fe23a7840d426afbbd8f237
Copying blob sha256:2576370f94e3ca533df6df14138f87121e08eb7a1e3c7750c0dd4425d121ed3b
Copying blob sha256:66020bad4ace073dce0bf0459c4493e3506a401bbc6a4fce5fc1099ff733ba20
Copying blob sha256:1325375bc883a92c0ba95f661d2fea83471e41e864a496f14594bc9a90fee8fa
Copying blob sha256:ea3a5cb7dc8c93b46d89f267f0b6f1b2a903594018efee231305e14757296333
Copying blob sha256:81f0a1f7717461b3385999b0ac320e1eb2a2e87d94f104768d8598fa52912e9e
Copying blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
Copying blob sha256:56b8a253d1c8c859d18bde29451688492524aeb442b9d9879c36de6348c0330f
Copying blob sha256:34dd8e262490eb764c01e4a1a5acfd49c383a868f77e44896b0ad587f2032648
Copying blob sha256:0910ade5e45a8649ffffa04a75f49793f4820d432ade2446c09331caa9450572
Copying blob sha256:c43bb84a7664e3247ed7e6adf837a54932222efe3a46671bb42110ccf754a4d4
Copying blob sha256:cc193acfd3b47ebf9d1f565ffc4e2f6b2d348209430e9fd99d7d08a7dad32cae
Copying blob sha256:9f3348a3008ad23e4d17e0febbf02b4ebfc731a25d5d35f1f594bf0958cf1c2a
Copying blob sha256:c97444bdadc69fd8211785d4e0083a0cfb00521ce6071019c3f85eaa3f9abab6
Copying blob sha256:0ff67ec6b52b7acdad02bafa776637e6af678aeb57c52eb545654d229e3abd01
Copying blob sha256:165fe55aaa5a19372b3f21bc66eae3dd599e875f910b5ab5feb4664b19251fc5
Copying blob sha256:85ef2ba128977a055820dc12310969b1b5f1f36dc4764b13f2d7337bdc58ed28
Copying blob sha256:7a9861c67178034da847abedb707b7d2c0abb8fa093645a19649d49a923534fa
Copying blob sha256:9e6492915687a2a8228a3fe45aeb09a050f89c1f1b9fac690f03e067831e2ca4
Copying blob sha256:c40ddbb9e28a6172dd1a6c49893e74edd154de912924cd44c08271549bc0cb8b
Copying blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
Copying blob sha256:758013b640f0608cf7837128f34e3a5a8a181a57dce9ee7639a128c94ee33076
time="2022-11-03T09:42:10+02:00" level=warning msg="Failed, retrying in 1s ... (1/3). Error: copying system image from manifest list: reading blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0: Get \"https://storage.googleapis.com/artifacts.k8s-minikube.appspot.com/containers/images/sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0\": read tcp 192.168.68.115:55026->216.58.210.144:443: read: connection reset by peer"
time="2022-11-03T09:43:34+02:00" level=warning msg="Failed, retrying in 1s ... (2/3). Error: copying system image from manifest list: reading blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0: Get \"https://storage.googleapis.com/artifacts.k8s-minikube.appspot.com/containers/images/sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0\": read tcp 192.168.68.115:47602->216.58.209.176:443: read: connection reset by peer"
time="2022-11-03T09:43:36+02:00" level=warning msg="Failed, retrying in 1s ... (3/3). Error: initializing source docker://gcr.io/k8s-minikube/kicbase:v0.0.35: Get \"https://gcr.io/v2/token?scope=repository%3Ak8s-minikube%2Fkicbase%3Apull&service=gcr.io\": read tcp 192.168.68.115:52794->64.233.165.82:443: read: connection reset by peer"
Getting image source signatures
Copying blob sha256:b987c1044662ddf8ece45adc09ae43297caa03fbb0cd93807ec33446b5e4d699
Copying blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0
Copying blob sha256:5929ee7263605a04298a055a110794e1c1c5d9ae6a1f3af3f35d1f5880757eeb
Copying blob sha256:675920708c8bf10fbd02693dc8f43ee7dbe0a99cdfd55e06e6f1a8b43fd08e3f
Copying blob sha256:145be87ba1e7d69f6b2a87da88bb0261eae002db08c503f8f5ebe3927e89dd48
Copying blob sha256:8a5db1d791f6f64e5d26cc93b24f9334bee02a07d0c580e05235a4c095a6b228
Copying blob sha256:bf918f037955ed940fc47e8455a18c74fa0ff7bc1e5b08821b8ed878e3bb1940
Copying blob sha256:5bc974652c0f55c63f70c9e2e0e0d8f93026643f417590971e46c8ee01843f1b
Copying blob sha256:a8112bdd23a5e0a12c58ecbe50036899a4a05aa713e3543a377e30daf901ab12
Copying blob sha256:b43426408aeaa9e86b262f4d396a66f705a364608821e292bd511a61208b7134
Copying blob sha256:7ca3354f4ad095905f4c513c932c9ff49386caaafc4dfdc0142a7638aae7c8ff
Copying blob sha256:658a871584c730486c914141d2a4dceec9c5f589e2e4d130f2596bdfd4131419
Copying blob sha256:df38ef75fe038e9343adfa187ad7645d9c1e7965a338cfa71a143ee2ea091fbb
Copying blob sha256:8bf897b8cca689bab4155fbbd5f5056d52d00d1fb0db520919e4ea86e16ca38f
Copying blob sha256:5b451afb517d4ae60519ac43958a7ef4826a0bef035852e4898a083cc9c66d7d
[... the same blob layers are copied again on this attempt; identical "Copying blob" lines omitted ...]
Error: copying system image from manifest list: reading blob sha256:bf918f037955ed940fc47e8455a18c74fa0ff7bc1e5b08821b8ed878e3bb1940: Get "https://storage.googleapis.com/artifacts.k8s-minikube.appspot.com/containers/images/sha256:bf918f037955ed940fc47e8455a18c74fa0ff7bc1e5b08821b8ed878e3bb1940": read tcp 192.168.68.115:41178->216.58.209.208:443: read: connection reset by peer

🔄  Restarting existing podman container for "minikube" ...
😿  Failed to start podman container. Running "minikube delete" may fix it: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:

stderr:
Error: no container with name or ID "minikube" found: no such container


❌  Exiting due to GUEST_PROVISION: Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:

stderr:
Error: no container with name or ID "minikube" found: no such container
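
The log's own hint ("Running \"minikube delete\" may fix it") points at the usual recovery. A minimal sketch of that clean-slate restart, assuming the podman driver used in this log:

# sketch of the recovery suggested by the "minikube delete may fix it"
# hint above: remove the half-created profile, then start fresh
minikube delete
minikube start --driver=podman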

@jsd2150

jsd2150 commented Nov 26, 2022

Just ran into this same issue. I'm in Hawaii for the winter and was connected via a Spectrum IPv6 address.

Switched to my mobile hotspot, an AT&T IPv6 address, and everything just worked.

Some other sites have also seemed to think I am connecting from outside the US when using this Spectrum connection here in Hawaii, so perhaps there is some bad geolocation data mapping the IPv6 address to a location?

@ly896291133

I am in China. I had the same problem until I added --registry-mirror; now it works well. If you are in China, the Alibaba Cloud image mirror service (阿里云镜像服务) may help you.
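
A minimal sketch of that workaround; the accelerator hostname is a placeholder (each Alibaba Cloud account is issued its own from the Container Registry console), so treat it as an assumption, not a literal value:

# hypothetical accelerator URL: replace <your-id> with the one issued by
# your Alibaba Cloud Container Registry console
minikube start --driver=docker \
  --registry-mirror=https://<your-id>.mirror.aliyuncs.com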

@shvamabps

shvamabps commented Apr 5, 2023

Same issue.
Ubuntu 22.04, minikube 1.30.1, Docker 23.0.3.
I am in India.


@ongiant

ongiant commented May 24, 2023

Use docker pull kicbase/stable:v0.0.32, then minikube start --vm-driver=docker --base-image="kicbase/stable:v0.0.32" --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --kubernetes-version=v1.23.8; that fixed the issue.

1st update:

It looks like the first command is unnecessary, because both tags resolve to the same image ID:

➜  ~ docker images                     
REPOSITORY                    TAG       IMAGE ID       CREATED       SIZE
kicbase/stable                v0.0.39   67a4b1138d2d   7 weeks ago   1.05GB
gcr.io/k8s-minikube/kicbase   v0.0.39   67a4b1138d2d   7 weeks ago   1.05GB

The original answer:

Thanks to this answer, I eventually solved it, but a little differently. Note that I failed twice before; the cause may be that my proxy connection is not stable, because I am in China. I just set the proxy in ~/.zshrc.
First I ran the docker pull kicbase/stable:v0.0.39 command, then I ran minikube start --driver=docker directly.

  • My minikube version: v1.30.1
  • My docker version: 23.0.4

My console log print:

➜  ~ minikube start --driver=docker
😄  minikube v1.30.1 on Arch 22.1.1                                                                                                                          
✨  Using the docker driver based on user configuration                                                                                                      
📌  Using Docker driver with root privileges                                                                                                                 
❗  Local proxy ignored: not passing HTTP_PROXY=socks5://localhost:7891 to docker env.                                                                       
❗  Local proxy ignored: not passing HTTPS_PROXY=socks5://localhost:7891 to docker env.                                                                      
❗  Local proxy ignored: not passing HTTP_PROXY=socks5://localhost:7891 to docker env.                                                                       
❗  Local proxy ignored: not passing HTTPS_PROXY=socks5://localhost:7891 to docker env.                                                                      
👍  Starting control plane node minikube in cluster minikube                                                                                                 
🚜  Pulling base image ...                                                                                                                                   
💾  Downloading Kubernetes v1.26.3 preload ...                                                                                                               
    > preloaded-images-k8s-v18-v1...:  397.02 MiB / 397.02 MiB  100.00% 3.50 Mi                                                                              
    > gcr.io/k8s-minikube/kicbase...:  373.53 MiB / 373.53 MiB  100.00% 2.44 Mi                                                                              
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...                                                                                                    
❗  Local proxy ignored: not passing HTTP_PROXY=socks5://localhost:7891 to docker env.                                                                       
❗  Local proxy ignored: not passing HTTPS_PROXY=socks5://localhost:7891 to docker env.                                                                      
❗  Local proxy ignored: not passing HTTP_PROXY=socks5://localhost:7891 to docker env.                                                                       
❗  Local proxy ignored: not passing HTTPS_PROXY=socks5://localhost:7891 to docker env.                                                                      
🌐  Found network options:                                                                                                                                   
    ▪ HTTP_PROXY=socks5://localhost:7891                                                                                                                     
    ▪ HTTPS_PROXY=socks5://localhost:7891                                                                                                                    
    ▪ NO_PROXY=localhost,127.0.0.1,192.168.1.1,::1,*.local,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24                                      
    ▪ http_proxy=socks5://localhost:7891                                                                                                                     
    ▪ https_proxy=socks5://localhost:7891                                                                                                                    
    ▪ no_proxy=localhost,127.0.0.1,192.168.1.1,::1,*.local,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24                                      
❗  This container is having trouble accessing https://registry.k8s.io                                                                                       
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/                            
🐳  Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...                                                                                                        
    ▪ env NO_PROXY=localhost,127.0.0.1,192.168.1.1,::1,*.local,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24                                  
    ▪ Generating certificates and keys ...                                                                                                                   
    ▪ Booting up control plane ...                                                                                                                           
    ▪ Configuring RBAC rules ...                                                                                                                             
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

And this:

➜  ~ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   37m   v1.26.3

In the end:

I still have a question. Specifically, I was reading the official minikube documentation and came across the section about proxies. The document says that "If a HTTP proxy is required to access the internet, you may need to pass the proxy connection information to both minikube and Docker using environment variables", but I'm not sure how to go about passing the proxy connection information to Docker. Is it like this: minikube start --driver=docker --docker-env HTTPS_PROXY=socks5://localhost:7891 --docker-env HTTP_PROXY=socks5://localhost:7891?
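
For reference, one common reading of that doc is to export the variables in the shell before starting, so minikube can forward them into the node. A minimal sketch, assuming an HTTP proxy on localhost:7891; note the warnings in the log above show that socks5:// proxies are deliberately not passed to the Docker env:

# a sketch, not authoritative: export proxy settings before `minikube start`
# so minikube forwards them to the node; NO_PROXY must cover the minikube
# subnets so cluster-internal traffic bypasses the proxy
export HTTP_PROXY=http://localhost:7891
export HTTPS_PROXY=http://localhost:7891
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.49.0/24
minikube start --driver=docker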

@LiuChiennan

I am in China. I had the same problem until I added --registry-mirror; now it works well. If you are in China, the Alibaba Cloud image mirror service (阿里云镜像服务) may help you.

  1. As of 06.11, the default Aliyun mirror for China users, "registry.aliyuncs.com/google_containers", does not work.

@ZZQLSS12

I am in China. I had the same problem until I added --registry-mirror; now it works well. If you are in China, the Alibaba Cloud image mirror service may help you.

  1. As of 06.11, the default Aliyun mirror for China users, "registry.aliyuncs.com/google_containers", does not work.

Have you found a new mirror that works in China now?

@ToviHe

ToviHe commented Oct 20, 2023

I had this problem. The k8s-related images had to be downloaded through other methods.

(base) ➜ data minikube start --kubernetes-version=v1.27.2
😄  minikube v1.31.2 on Darwin 13.4.1 (arm64)
✨  Automatically selected the docker driver. Other choices: parallels, vmware, ssh
📌  Using Docker Desktop driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E1020 10:52:23.574335 78567 cache.go:190] Error downloading kic artifacts: failed to download kic base image or any fallback image
🔥  Creating docker container (CPUs=2, Memory=7803MB) ...
❗  The image 'gcr.io/k8s-minikube/storage-provisioner:v5' was not found; unable to add it to cache.
🐳  Preparing Kubernetes v1.27.2 on Docker 24.0.4 ...
❌  Unable to load cached images: loading cached images: stat /Users/tovi/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
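
A possible workaround for the missing storage-provisioner image, sketched from the mirror-based fixes earlier in this thread; it assumes the Aliyun mirror used above actually carries this tag:

# sketch: pull the provisioner through a reachable mirror, retag it to the
# name minikube expects, then load it into the cluster
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 gcr.io/k8s-minikube/storage-provisioner:v5
minikube image load gcr.io/k8s-minikube/storage-provisioner:v5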

@chaseSpace

chaseSpace commented Nov 8, 2023

Following up for anyone who lands on this thread later: the Aliyun repository has not synced Kubernetes versions after v1.23, so pointing the installation at the Aliyun repository returns a 404 (stop specifying Aliyun as the image repository):

$ minikube start --force \
> --kubernetes-version=v1.25.14 \
> --image-mirror-country=cn \
> --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
* minikube v1.31.2 on Centos 7.9.2009
  - KUBECONFIG=/etc/kubernetes/admin.conf
! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
* Automatically selected the docker driver. Other choices: none, ssh
* The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
* If you are running minikube within a VM, consider using --driver=none:
*   https://minikube.sigs.k8s.io/docs/reference/drivers/none/

X The requested memory allocation of 1963MiB does not leave room for system overhead (total system memory: 1963MiB). You may face stability issues.
* Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1963mb'

* Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
* Using Docker driver with root privileges
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
! minikube was unable to download registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.40, but successfully downloaded docker.io/kicbase/stable:v0.0.40 as a fallback image
* Creating docker container (CPUs=2, Memory=1963MB) ...
* Preparing Kubernetes v1.25.14 on Docker 24.0.4 ...

X Exiting due to K8S_INSTALL_FAILED: Failed to update cluster: updating control plane: downloading binaries: downloading kubectl: download failed: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.14/bin/linux/amd64/kubectl?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.14/bin/linux/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.14/bin/linux/amd64/kubectl?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.14/bin/linux/amd64/kubectl.sha256 Dst:/root/.minikube/cache/linux/amd64/v1.25.14/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x3f9c8a8 0x3f9c8a8 0x3f9c8a8 0x3f9c8a8 0x3f9c8a8 0x3f9c8a8 0x3f9c8a8] Decompressors:map[bz2:0xc000431238 gz:0xc000431290 tar:0xc000431240 tar.bz2:0xc000431250 tar.gz:0xc000431260 tar.xz:0xc000431270 tar.zst:0xc000431280 tbz2:0xc000431250 tgz:0xc000431260 txz:0xc000431270 tzst:0xc000431280 xz:0xc000431298 zip:0xc0004312a0 zst:0xc0004312b0] Getters:map[file:0xc00107fea0 http:0xc001098500 https:0xc001098550] Dir:false ProgressListener:0x3f579a0 Insecure:false DisableSymlinks:false Options:[0x12d0880]}: invalid checksum: Error downloading checksum file: bad response code: 404
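
Given that 404, a sketch of the alternative the comments above converge on for Kubernetes releases after v1.23: drop the Aliyun --image-repository and pin the Docker Hub kicbase fallback instead (the versions here are illustrative, not prescribed):

# sketch based on the advice in this thread, not a definitive fix
docker pull kicbase/stable:v0.0.40
minikube start \
  --kubernetes-version=v1.25.14 \
  --base-image="docker.io/kicbase/stable:v0.0.40"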

@karthick-dkk

karthick-dkk commented Jan 6, 2024

Same here: 🤔

! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.42, but successfully downloaded docker.io/kicbase/stable:v0.0.42 as a fallback image

Logs:

I0106 02:04:24.278493 82771 out.go:296] Setting OutFile to fd 1 ...
I0106 02:04:24.278730 82771 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I0106 02:04:24.278738 82771 out.go:309] Setting ErrFile to fd 2...
I0106 02:04:24.278744 82771 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I0106 02:04:24.278964 82771 root.go:338] Updating PATH: /home/admintest/.minikube/bin
I0106 02:04:24.279371 82771 out.go:303] Setting JSON to false
I0106 02:04:24.282603 82771 start.go:128] hostinfo: {"hostname":"centos7.vm","uptime":17528,"bootTime":1704469337,"procs":145,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.9.2009","kernelVersion":"3.10.0-1160.102.1.el7.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"506a82f1-0c76-455c-ab6b-7059384d7baa"}
I0106 02:04:24.282692 82771 start.go:138] virtualization:
I0106 02:04:24.284001 82771 out.go:177] * minikube v1.32.0 on Centos 7.9.2009

  • minikube v1.32.0 on Centos 7.9.2009
    I0106 02:04:24.285405 82771 notify.go:220] Checking for updates...
    I0106 02:04:24.285861 82771 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
    I0106 02:04:24.285991 82771 driver.go:378] Setting default libvirt URI to qemu:///system
    I0106 02:04:24.315601 82771 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
    I0106 02:04:24.315908 82771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
    I0106 02:04:24.405850 82771 info.go:266] docker info: {ID:ee916740-f45b-4996-bf59-9a2e80a96f1f Containers:15 ContainersRunning:6 ContainersPaused:0 ContainersStopped:9 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:212 SystemTime:2024-01-06 02:04:24.391043646 +0530 IST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:3.10.0-1160.102.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:3 MemTotal:4931014656 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:centos7.vm Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID:n0lfio3fel7qj4qhmam7jrjlo NodeAddr:192.168.197.166 LocalNodeState:active ControlAvailable:true Error: RemoteManagers:[map[Addr:192.168.197.166:2377 NodeID:n0lfio3fel7qj4qhmam7jrjlo]]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:}}
    I0106 02:04:24.405963 82771 docker.go:295] overlay module found
    I0106 02:04:24.409442 82771 out.go:177] * Using the docker driver based on existing profile
  • Using the docker driver based on existing profile
    I0106 02:04:24.410558 82771 start.go:298] selected driver: docker
    I0106 02:04:24.410582 82771 start.go:902] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/admintest:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
    I0106 02:04:24.410918 82771 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
    I0106 02:04:24.411067 82771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
    I0106 02:04:24.507170 82771 info.go:266] docker info: {ID:ee916740-f45b-4996-bf59-9a2e80a96f1f Containers:15 ContainersRunning:6 ContainersPaused:0 ContainersStopped:9 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:212 SystemTime:2024-01-06 02:04:24.487986406 +0530 IST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:3.10.0-1160.102.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:3 MemTotal:4931014656 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:centos7.vm Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID:n0lfio3fel7qj4qhmam7jrjlo NodeAddr:192.168.197.166 LocalNodeState:active ControlAvailable:true Error: RemoteManagers:[map[Addr:192.168.197.166:2377 NodeID:n0lfio3fel7qj4qhmam7jrjlo]]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:}}
    I0106 02:04:24.509033 82771 cni.go:84] Creating CNI manager for ""
    I0106 02:04:24.509287 82771 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
    I0106 02:04:24.509482 82771 start_flags.go:323] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/admintest:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
    I0106 02:04:24.511230 82771 out.go:177] * Starting control plane node minikube in cluster minikube
  • Starting control plane node minikube in cluster minikube
    I0106 02:04:24.512425 82771 cache.go:121] Beginning downloading kic base image for docker with docker
    I0106 02:04:24.513756 82771 out.go:177] * Pulling base image ...
  • Pulling base image ...
    I0106 02:04:24.514763 82771 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
    I0106 02:04:24.514806 82771 preload.go:148] Found local preload: /home/admintest/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
    I0106 02:04:24.514795 82771 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
    I0106 02:04:24.514857 82771 cache.go:56] Caching tarball of preloaded images
    I0106 02:04:24.515095 82771 preload.go:174] Found /home/admintest/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
    I0106 02:04:24.515106 82771 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on docker
    I0106 02:04:24.515336 82771 profile.go:148] Saving config to /home/admintest/.minikube/profiles/minikube/config.json ...
    I0106 02:04:24.542109 82771 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
    I0106 02:04:24.542207 82771 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
    I0106 02:04:24.542254 82771 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
    I0106 02:04:24.542399 82771 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
    I0106 02:04:24.542430 82771 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
    I0106 02:04:24.542437 82771 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
    I0106 02:04:24.542886 82771 cache.go:168] failed to download gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0, will try fallback image if available: tarball: unexpected EOF
    I0106 02:04:24.542928 82771 image.go:79] Checking for docker.io/kicbase/stable:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
    I0106 02:04:24.578667 82771 image.go:83] Found docker.io/kicbase/stable:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
    I0106 02:04:24.578736 82771 cache.go:144] docker.io/kicbase/stable:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
    W0106 02:04:24.578828 82771 out.go:239] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.42, but successfully downloaded docker.io/kicbase/stable:v0.0.42 as a fallback image
    ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.42, but successfully downloaded docker.io/kicbase/stable:v0.0.42 as a fallback image
    I0106 02:04:24.578886 82771 cache.go:194] Successfully downloaded all kic artifacts
    I0106 02:04:24.579319 82771 start.go:365] acquiring machines lock for minikube: {Name:mk60b2d01de5783c998e13cf94e2f7d65968672a Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0106 02:04:24.579485 82771 start.go:369] acquired machines lock for "minikube" in 40.858µs
    I0106 02:04:24.579509 82771 start.go:96] Skipping create...Using existing machine configuration
    I0106 02:04:24.579695 82771 fix.go:54] fixHost starting:
    I0106 02:04:24.580412 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
    I0106 02:04:24.621610 82771 fix.go:102] recreateIfNeeded on minikube: state= err=
    I0106 02:04:24.621637 82771 fix.go:107] machineExists: false. err=machine does not exist
    I0106 02:04:24.625495 82771 out.go:177] * docker "minikube" container is missing, will recreate.
  • docker "minikube" container is missing, will recreate.
    I0106 02:04:24.627400 82771 delete.go:124] DEMOLISHING minikube ...
    I0106 02:04:24.627499 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
    I0106 02:04:24.654131 82771 stop.go:79] host is in state
    I0106 02:04:24.655154 82771 main.go:141] libmachine: Stopping "minikube"...
    I0106 02:04:24.655242 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
    I0106 02:04:24.676948 82771 kic_runner.go:93] Run: systemctl --version
    I0106 02:04:24.676971 82771 kic_runner.go:114] Args: [docker exec --privileged minikube systemctl --version]
    I0106 02:04:24.703034 82771 kic_runner.go:93] Run: sudo service kubelet stop
    I0106 02:04:24.703079 82771 kic_runner.go:114] Args: [docker exec --privileged minikube sudo service kubelet stop]
    W0106 02:04:24.734943 82771 kic.go:453] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
    stdout:

stderr:
Error response from daemon: Container d7df58917287d2fe3ec6af8dc1feb4e9b79c2ae64a04b075838d6faafeca86ad is not running
I0106 02:04:24.735041 82771 kic_runner.go:93] Run: sudo service kubelet stop
I0106 02:04:24.735078 82771 kic_runner.go:114] Args: [docker exec --privileged minikube sudo service kubelet stop]
W0106 02:04:24.763676 82771 kic.go:455] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:

stderr:
Error response from daemon: Container d7df58917287d2fe3ec6af8dc1feb4e9b79c2ae64a04b075838d6faafeca86ad is not running
I0106 02:04:24.763759 82771 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.(kube-system|kubernetes-dashboard|storage-gluster|istio-operator) --format={{.ID}}
I0106 02:04:24.763770 82771 kic_runner.go:114] Args: [docker exec --privileged minikube docker ps -a --filter=name=k8s_.
(kube-system|kubernetes-dashboard|storage-gluster|istio-operator) --format={{.ID}}]
I0106 02:04:24.784993 82771 kic.go:466] unable list containers : docker: docker ps -a --filter=name=k8s_.*(kube-system|kubernetes-dashboard|storage-gluster|istio-operator) --format={{.ID}}: exit status 1
stdout:

stderr:
Error response from daemon: Container d7df58917287d2fe3ec6af8dc1feb4e9b79c2ae64a04b075838d6faafeca86ad is not running
I0106 02:04:24.785016 82771 kic.go:476] successfully stopped kubernetes!
I0106 02:04:24.785170 82771 kic_runner.go:93] Run: pgrep kube-apiserver
I0106 02:04:24.785181 82771 kic_runner.go:114] Args: [docker exec --privileged minikube pgrep kube-apiserver]
I0106 02:04:24.850422 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:27.875464 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:30.928879 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:33.982270 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:37.019185 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:40.086169 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:43.161857 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
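
Worth noting: the "tarball: unexpected EOF" earlier in this log means the cached gcr.io tarball is a corrupt partial download, which is why the gcr.io path keeps failing locally while the Docker Hub fallback works. A hedged cleanup sketch, assuming the default cache layout under ~/.minikube:

# sketch: remove the (assumed) cached kic archive so minikube re-downloads it
rm -rf ~/.minikube/cache/kic
minikube start --driver=docker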

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 5, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 5, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Jun 4, 2024