
Registry addon incompatible with multiple nodes, image pulls fail on second node #11505

Closed
code-merc opened this issue May 25, 2021 · 6 comments
Labels: area/addons, help wanted, kind/bug, lifecycle/rotten, priority/backlog

Comments


code-merc commented May 25, 2021

When starting a minikube cluster with multiple nodes and the registry addon enabled, image pulls from the addon registry (localhost:5000) fail on the second node.

Steps to reproduce the issue:

  1. minikube start --driver=docker --nodes=2 --cni=auto --addons=default-storageclass,registry,storage-provisioner --container-runtime=containerd
  2. Image pulls on the second node (minikube-m02) fail with:
Failed to pull image "localhost:5000/example-html-image:tilt-586b9fdc43eaa4d9": rpc error: code = Unknown desc = failed to pull and unpack image "localhost:5000/example-html-image:tilt-586b9fdc43eaa4d9": failed to resolve reference "localhost:5000/example-html-image:tilt-586b9fdc43eaa4d9": failed to do request: Head http://localhost:5000/v2/example-html-image/manifests/tilt-586b9fdc43eaa4d9: dial tcp 127.0.0.1:5000: connect: connection refused
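The failure mode itself is easy to see outside Kubernetes: `localhost:5000` is resolved on whichever node performs the pull, so a node with nothing listening on its own loopback port 5000 gets an immediate refusal, which containerd surfaces as the `dial tcp 127.0.0.1:5000: connect: connection refused` above. A minimal sketch of that behavior (plain Python, using a deliberately unoccupied loopback port rather than 5000):

```python
import socket

def probe(host: str, port: int) -> str:
    """Attempt a TCP connect and report how this node sees the registry."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return "reachable"
    except ConnectionRefusedError:
        # Nothing bound on this node's loopback -> same error containerd reports.
        return "connection refused"

# Grab a loopback port that nothing listens on (bind to port 0, note the
# assigned port, release it) so the demo does not depend on port 5000 being free.
s = socket.socket()
s.bind(("127.0.0.1", 0))
free_port = s.getsockname()[1]
s.close()

# This is the second node's view when no registry proxy runs locally.
print(probe("127.0.0.1", free_port))
```

In the multi-node case the fix has to make port 5000 answer on every node (or point the runtime at a cluster-reachable registry address), since loopback is per-node by definition.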

Full output of minikube logs command:

minikube.log

Full output of failed command:

minikube start --driver=docker --nodes=2 --cni=auto --addons=default-storageclass,registry,storage-provisioner --container-runtime=containerd --alsologtostderr
I0525 13:19:50.831160  158763 out.go:291] Setting OutFile to fd 1 ...
I0525 13:19:50.831396  158763 out.go:343] isatty.IsTerminal(1) = true
I0525 13:19:50.831410  158763 out.go:304] Setting ErrFile to fd 2...
I0525 13:19:50.831423  158763 out.go:343] isatty.IsTerminal(2) = true
I0525 13:19:50.831607  158763 root.go:316] Updating PATH: /home/alex/.minikube/bin
I0525 13:19:50.832017  158763 out.go:298] Setting JSON to false
I0525 13:19:50.855675  158763 start.go:108] hostinfo: {"hostname":"alex-dev-box","uptime":3309,"bootTime":1621959881,"procs":712,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-73-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"41dc7fb2-4ca7-4e38-8ece-22d5c31c763a"}
I0525 13:19:50.855822  158763 start.go:118] virtualization: kvm host
I0525 13:19:50.864791  158763 out.go:170] 😄  minikube v1.20.0 on Ubuntu 20.04
😄  minikube v1.20.0 on Ubuntu 20.04
I0525 13:19:50.864967  158763 notify.go:169] Checking for updates...
I0525 13:19:50.865089  158763 driver.go:322] Setting default libvirt URI to qemu:///system
I0525 13:19:50.913738  158763 docker.go:119] docker version: linux-19.03.8
I0525 13:19:50.913863  158763 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0525 13:19:50.990637  158763 info.go:261] docker info: {ID:W64L:QT2P:HUGE:4A6S:V3SF:2Z7J:LTJX:ED4D:K3W4:KQF5:BKN2:77FO Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:94 Driver:overlay2 DriverStatus:[[Backing Filesystem <unknown>] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:41 SystemTime:2021-05-25 13:19:50.946353006 -0400 EDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-73-generic OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:67354693632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:alex-dev-box Labels:[] ExperimentalBuild:false ServerVersion:19.03.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor 
name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0525 13:19:50.990770  158763 docker.go:225] overlay module found
I0525 13:19:50.993528  158763 out.go:170] ✨  Using the docker driver based on user configuration
✨  Using the docker driver based on user configuration
I0525 13:19:50.993548  158763 start.go:276] selected driver: docker
I0525 13:19:50.993561  158763 start.go:718] validating driver "docker" against <nil>
I0525 13:19:50.993583  158763 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0525 13:19:50.993896  158763 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0525 13:19:51.077776  158763 info.go:261] docker info: {ID:W64L:QT2P:HUGE:4A6S:V3SF:2Z7J:LTJX:ED4D:K3W4:KQF5:BKN2:77FO Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:94 Driver:overlay2 DriverStatus:[[Backing Filesystem <unknown>] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:41 SystemTime:2021-05-25 13:19:51.032793687 -0400 EDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-73-generic OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:67354693632 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:alex-dev-box Labels:[] ExperimentalBuild:false ServerVersion:19.03.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor 
name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0525 13:19:51.077924  158763 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
I0525 13:19:51.079557  158763 start_flags.go:715] Wait components to verify : map[apiserver:true system_pods:true]
I0525 13:19:51.079586  158763 cni.go:93] Creating CNI manager for "auto"
I0525 13:19:51.079602  158763 cni.go:154] 0 nodes found, recommending kindnet
I0525 13:19:51.079617  158763 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0525 13:19:51.079630  158763 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0525 13:19:51.079644  158763 start_flags.go:268] Found "CNI" CNI - setting NetworkPlugin=cni
I0525 13:19:51.079658  158763 start_flags.go:273] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8192 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:auto NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
I0525 13:19:51.082506  158763 out.go:170] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0525 13:19:51.082550  158763 cache.go:111] Beginning downloading kic base image for docker with containerd
W0525 13:19:51.082563  158763 out.go:424] no arguments passed for "🚜  Pulling base image ...\n" - returning raw string
W0525 13:19:51.082589  158763 out.go:424] no arguments passed for "🚜  Pulling base image ...\n" - returning raw string
I0525 13:19:51.087101  158763 out.go:170] 🚜  Pulling base image ...
🚜  Pulling base image ...
I0525 13:19:51.087140  158763 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
I0525 13:19:51.087172  158763 preload.go:106] Found local preload: /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
I0525 13:19:51.087182  158763 cache.go:54] Caching tarball of preloaded images
I0525 13:19:51.087200  158763 preload.go:132] Found /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0525 13:19:51.087211  158763 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on containerd
I0525 13:19:51.087235  158763 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
I0525 13:19:51.087275  158763 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
I0525 13:19:51.087288  158763 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
I0525 13:19:51.087326  158763 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon
I0525 13:19:51.087573  158763 profile.go:148] Saving config to /home/alex/.minikube/profiles/minikube/config.json ...
I0525 13:19:51.087611  158763 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/minikube/config.json: {Name:mk51d3b3c9fde5f799d316f946b611e231963075 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0525 13:19:51.153439  158763 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull
I0525 13:19:51.153472  158763 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull
I0525 13:19:51.153496  158763 cache.go:194] Successfully downloaded all kic artifacts
I0525 13:19:51.153535  158763 start.go:313] acquiring machines lock for minikube: {Name:mk839886e31162d2f23fb5ae00cc5c0d523139ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0525 13:19:51.153652  158763 start.go:317] acquired machines lock for "minikube" in 87.99µs
I0525 13:19:51.153678  158763 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8192 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:auto NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0525 13:19:51.153780  158763 start.go:126] createHost starting for "" (driver="docker")
I0525 13:19:51.156551  158763 out.go:197] 🔥  Creating docker container (CPUs=4, Memory=8192MB) ...
🔥  Creating docker container (CPUs=4, Memory=8192MB) ...
I0525 13:19:51.156794  158763 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0525 13:19:51.156823  158763 client.go:168] LocalClient.Create starting
I0525 13:19:51.156895  158763 main.go:128] libmachine: Reading certificate data from /home/alex/.minikube/certs/ca.pem
I0525 13:19:51.156931  158763 main.go:128] libmachine: Decoding PEM data...
I0525 13:19:51.156956  158763 main.go:128] libmachine: Parsing certificate...
I0525 13:19:51.157089  158763 main.go:128] libmachine: Reading certificate data from /home/alex/.minikube/certs/cert.pem
I0525 13:19:51.157116  158763 main.go:128] libmachine: Decoding PEM data...
I0525 13:19:51.157153  158763 main.go:128] libmachine: Parsing certificate...
I0525 13:19:51.157580  158763 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0525 13:19:51.200461  158763 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0525 13:19:51.200570  158763 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
I0525 13:19:51.200596  158763 cli_runner.go:115] Run: docker network inspect minikube
W0525 13:19:51.243071  158763 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0525 13:19:51.243108  158763 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0525 13:19:51.243130  158763 network_create.go:254] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I0525 13:19:51.243216  158763 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0525 13:19:51.285210  158763 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000454880] misses:0}
I0525 13:19:51.285271  158763 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0525 13:19:51.285300  158763 network_create.go:100] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0525 13:19:51.285379  158763 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0525 13:19:51.373881  158763 network_create.go:84] docker network minikube 192.168.49.0/24 created
I0525 13:19:51.373922  158763 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0525 13:19:51.374063  158763 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0525 13:19:51.418124  158763 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0525 13:19:51.457062  158763 oci.go:102] Successfully created a docker volume minikube
I0525 13:19:51.457169  158763 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib
I0525 13:19:52.214122  158763 oci.go:106] Successfully prepared a docker volume minikube
W0525 13:19:52.214181  158763 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0525 13:19:52.214195  158763 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0525 13:19:52.214193  158763 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
I0525 13:19:52.214251  158763 preload.go:106] Found local preload: /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
I0525 13:19:52.214278  158763 kic.go:179] Starting extracting preloaded images to volume ...
I0525 13:19:52.214283  158763 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0525 13:19:52.214387  158763 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir
I0525 13:19:52.308452  158763 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=4 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e
I0525 13:19:52.882508  158763 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0525 13:19:52.925905  158763 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0525 13:19:52.966973  158763 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0525 13:19:53.074459  158763 oci.go:278] the created container "minikube" has a running status.
I0525 13:19:53.074506  158763 kic.go:210] Creating ssh key for kic: /home/alex/.minikube/machines/minikube/id_rsa...
I0525 13:19:53.352122  158763 kic_runner.go:188] docker (temp): /home/alex/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0525 13:19:53.952825  158763 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0525 13:19:53.994992  158763 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0525 13:19:53.995019  158763 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0525 13:19:56.683972  158763 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (4.469530674s)
I0525 13:19:56.684010  158763 kic.go:188] duration metric: took 4.469729 seconds to extract preloaded images to volume
I0525 13:19:56.684139  158763 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0525 13:19:56.725540  158763 machine.go:88] provisioning docker machine ...
I0525 13:19:56.725588  158763 ubuntu.go:169] provisioning hostname "minikube"
I0525 13:19:56.725678  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:19:56.762312  158763 main.go:128] libmachine: Using SSH client type: native
I0525 13:19:56.762551  158763 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 32802 <nil> <nil>}
I0525 13:19:56.762575  158763 main.go:128] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0525 13:19:56.904696  158763 main.go:128] libmachine: SSH cmd err, output: <nil>: minikube

I0525 13:19:56.904804  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:19:56.946676  158763 main.go:128] libmachine: Using SSH client type: native
I0525 13:19:56.946872  158763 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 32802 <nil> <nil>}
I0525 13:19:56.946902  158763 main.go:128] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0525 13:19:57.072425  158763 main.go:128] libmachine: SSH cmd err, output: <nil>: 
I0525 13:19:57.072465  158763 ubuntu.go:175] set auth options {CertDir:/home/alex/.minikube CaCertPath:/home/alex/.minikube/certs/ca.pem CaPrivateKeyPath:/home/alex/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/alex/.minikube/machines/server.pem ServerKeyPath:/home/alex/.minikube/machines/server-key.pem ClientKeyPath:/home/alex/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/alex/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/alex/.minikube}
I0525 13:19:57.072496  158763 ubuntu.go:177] setting up certificates
I0525 13:19:57.072511  158763 provision.go:83] configureAuth start
I0525 13:19:57.072595  158763 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0525 13:19:57.117714  158763 provision.go:137] copyHostCerts
I0525 13:19:57.117773  158763 exec_runner.go:145] found /home/alex/.minikube/ca.pem, removing ...
I0525 13:19:57.117784  158763 exec_runner.go:190] rm: /home/alex/.minikube/ca.pem
I0525 13:19:57.117853  158763 exec_runner.go:152] cp: /home/alex/.minikube/certs/ca.pem --> /home/alex/.minikube/ca.pem (1029 bytes)
I0525 13:19:57.117924  158763 exec_runner.go:145] found /home/alex/.minikube/cert.pem, removing ...
I0525 13:19:57.117933  158763 exec_runner.go:190] rm: /home/alex/.minikube/cert.pem
I0525 13:19:57.117960  158763 exec_runner.go:152] cp: /home/alex/.minikube/certs/cert.pem --> /home/alex/.minikube/cert.pem (1070 bytes)
I0525 13:19:57.118009  158763 exec_runner.go:145] found /home/alex/.minikube/key.pem, removing ...
I0525 13:19:57.118017  158763 exec_runner.go:190] rm: /home/alex/.minikube/key.pem
I0525 13:19:57.118045  158763 exec_runner.go:152] cp: /home/alex/.minikube/certs/key.pem --> /home/alex/.minikube/key.pem (1679 bytes)
I0525 13:19:57.118084  158763 provision.go:111] generating server cert: /home/alex/.minikube/machines/server.pem ca-key=/home/alex/.minikube/certs/ca.pem private-key=/home/alex/.minikube/certs/ca-key.pem org=alex.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0525 13:19:57.362003  158763 provision.go:165] copyRemoteCerts
I0525 13:19:57.362077  158763 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0525 13:19:57.362133  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:19:57.406387  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/alex/.minikube/machines/minikube/id_rsa Username:docker}
I0525 13:19:57.496764  158763 ssh_runner.go:316] scp /home/alex/.minikube/machines/server.pem --> /etc/docker/server.pem (1147 bytes)
I0525 13:19:57.516737  158763 ssh_runner.go:316] scp /home/alex/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0525 13:19:57.535414  158763 ssh_runner.go:316] scp /home/alex/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1029 bytes)
I0525 13:19:57.556772  158763 provision.go:86] duration metric: configureAuth took 484.242883ms
I0525 13:19:57.556806  158763 ubuntu.go:193] setting minikube options for container-runtime
I0525 13:19:57.557018  158763 machine.go:91] provisioned docker machine in 831.45039ms
I0525 13:19:57.557036  158763 client.go:171] LocalClient.Create took 6.400201422s
I0525 13:19:57.557066  158763 start.go:168] duration metric: libmachine.API.Create for "minikube" took 6.400263833s
I0525 13:19:57.557083  158763 start.go:267] post-start starting for "minikube" (driver="docker")
I0525 13:19:57.557094  158763 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0525 13:19:57.557172  158763 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0525 13:19:57.557241  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:19:57.601944  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/alex/.minikube/machines/minikube/id_rsa Username:docker}
I0525 13:19:57.693384  158763 ssh_runner.go:149] Run: cat /etc/os-release
I0525 13:19:57.696757  158763 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0525 13:19:57.696790  158763 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0525 13:19:57.696811  158763 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0525 13:19:57.696823  158763 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0525 13:19:57.696838  158763 filesync.go:118] Scanning /home/alex/.minikube/addons for local assets ...
I0525 13:19:57.696893  158763 filesync.go:118] Scanning /home/alex/.minikube/files for local assets ...
I0525 13:19:57.696922  158763 start.go:270] post-start completed in 139.828153ms
I0525 13:19:57.697274  158763 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0525 13:19:57.739822  158763 profile.go:148] Saving config to /home/alex/.minikube/profiles/minikube/config.json ...
I0525 13:19:57.740144  158763 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0525 13:19:57.740210  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:19:57.785941  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/alex/.minikube/machines/minikube/id_rsa Username:docker}
I0525 13:19:57.872568  158763 start.go:129] duration metric: createHost completed in 6.71876653s
I0525 13:19:57.872609  158763 start.go:80] releasing machines lock for "minikube", held for 6.71894005s
I0525 13:19:57.872726  158763 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0525 13:19:57.912478  158763 ssh_runner.go:149] Run: systemctl --version
I0525 13:19:57.912509  158763 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0525 13:19:57.912568  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:19:57.912588  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:19:57.952073  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/alex/.minikube/machines/minikube/id_rsa Username:docker}
I0525 13:19:57.954672  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/alex/.minikube/machines/minikube/id_rsa Username:docker}
I0525 13:19:58.137228  158763 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0525 13:19:58.149955  158763 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0525 13:19:58.161051  158763 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
I0525 13:19:58.181533  158763 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
I0525 13:19:58.192760  158763 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
I0525 13:19:58.281957  158763 ssh_runner.go:149] Run: sudo systemctl mask docker.service
I0525 13:19:58.351882  158763 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0525 13:19:58.361707  158763 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0525 13:19:58.375023  158763 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5tayIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLmxpbnV4XQog
ICAgc2hpbSA9ICJjb250YWluZXJkLXNoaW0iCiAgICBydW50aW1lID0gInJ1bmMiCiAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgbm9fc2hpbSA9IGZhbHNlCiAgICBzaGltX2RlYnVnID0gZmFsc2UKICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
I0525 13:19:58.390171  158763 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0525 13:19:58.397875  158763 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0525 13:19:58.404658  158763 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0525 13:19:58.474814  158763 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0525 13:19:58.552381  158763 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock
I0525 13:19:58.552475  158763 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0525 13:19:58.555757  158763 start.go:393] Will wait 60s for crictl version
I0525 13:19:58.555871  158763 ssh_runner.go:149] Run: sudo crictl version
I0525 13:19:58.580498  158763 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:

stderr:
time="2021-05-25T17:19:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0525 13:20:09.627237  158763 ssh_runner.go:149] Run: sudo crictl version
I0525 13:20:09.653785  158763 start.go:402] Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  1.4.4
RuntimeApiVersion:  v1alpha2
I0525 13:20:09.653872  158763 ssh_runner.go:149] Run: containerd --version
I0525 13:20:09.678492  158763 out.go:170] 📦  Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...

I0525 13:20:09.678640  158763 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0525 13:20:09.720487  158763 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
I0525 13:20:09.724422  158763 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0525 13:20:09.735630  158763 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
I0525 13:20:09.735665  158763 preload.go:106] Found local preload: /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
I0525 13:20:09.735730  158763 ssh_runner.go:149] Run: sudo crictl images --output json
I0525 13:20:09.762359  158763 containerd.go:571] all images are preloaded for containerd runtime.
I0525 13:20:09.762388  158763 containerd.go:481] Images already preloaded, skipping extraction
I0525 13:20:09.762461  158763 ssh_runner.go:149] Run: sudo crictl images --output json
I0525 13:20:09.792147  158763 containerd.go:571] all images are preloaded for containerd runtime.
I0525 13:20:09.792176  158763 cache_images.go:74] Images are preloaded, skipping loading
I0525 13:20:09.792253  158763 ssh_runner.go:149] Run: sudo crictl info
I0525 13:20:09.820537  158763 cni.go:93] Creating CNI manager for "auto"
I0525 13:20:09.820569  158763 cni.go:154] 1 nodes found, recommending kindnet
I0525 13:20:09.820594  158763 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0525 13:20:09.820621  158763 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0525 13:20:09.820776  158763 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249

I0525 13:20:09.820994  158763 kubeadm.go:901] kubelet [Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=minikube --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:auto NodeIP: NodePort:8443 NodeName:}
I0525 13:20:09.821097  158763 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0525 13:20:09.829451  158763 binaries.go:44] Found k8s binaries, skipping transfer
I0525 13:20:09.829558  158763 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0525 13:20:09.835395  158763 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (553 bytes)
I0525 13:20:09.850032  158763 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0525 13:20:09.861226  158763 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1847 bytes)
I0525 13:20:09.875349  158763 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
I0525 13:20:09.878399  158763 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0525 13:20:09.888688  158763 certs.go:52] Setting up /home/alex/.minikube/profiles/minikube for IP: 192.168.49.2
I0525 13:20:09.888743  158763 certs.go:171] skipping minikubeCA CA generation: /home/alex/.minikube/ca.key
I0525 13:20:09.888763  158763 certs.go:171] skipping proxyClientCA CA generation: /home/alex/.minikube/proxy-client-ca.key
I0525 13:20:09.888820  158763 certs.go:286] generating minikube-user signed cert: /home/alex/.minikube/profiles/minikube/client.key
I0525 13:20:09.888831  158763 crypto.go:69] Generating cert /home/alex/.minikube/profiles/minikube/client.crt with IP's: []
I0525 13:20:10.202915  158763 crypto.go:157] Writing cert to /home/alex/.minikube/profiles/minikube/client.crt ...
I0525 13:20:10.202946  158763 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/minikube/client.crt: {Name:mk4c120f00878df4c97f5ef09a859c259311ae61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0525 13:20:10.203104  158763 crypto.go:165] Writing key to /home/alex/.minikube/profiles/minikube/client.key ...
I0525 13:20:10.203114  158763 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/minikube/client.key: {Name:mkb8cd6190eb724bb6710baa1308dc9b635c440a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0525 13:20:10.203177  158763 certs.go:286] generating minikube signed cert: /home/alex/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0525 13:20:10.203185  158763 crypto.go:69] Generating cert /home/alex/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0525 13:20:10.687857  158763 crypto.go:157] Writing cert to /home/alex/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0525 13:20:10.687890  158763 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkcbf85be742577617feb4f3f5902c5ee58ac9cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0525 13:20:10.688038  158763 crypto.go:165] Writing key to /home/alex/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0525 13:20:10.688049  158763 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkde08b4303ed831bbfa4e0f466bc2fb59d747b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0525 13:20:10.688110  158763 certs.go:297] copying /home/alex/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/alex/.minikube/profiles/minikube/apiserver.crt
I0525 13:20:10.688177  158763 certs.go:301] copying /home/alex/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/alex/.minikube/profiles/minikube/apiserver.key
I0525 13:20:10.688217  158763 certs.go:286] generating aggregator signed cert: /home/alex/.minikube/profiles/minikube/proxy-client.key
I0525 13:20:10.688225  158763 crypto.go:69] Generating cert /home/alex/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0525 13:20:10.763938  158763 crypto.go:157] Writing cert to /home/alex/.minikube/profiles/minikube/proxy-client.crt ...
I0525 13:20:10.763965  158763 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/minikube/proxy-client.crt: {Name:mkfdf9c813ba302715ec41bc17c003b56a15078b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0525 13:20:10.764102  158763 crypto.go:165] Writing key to /home/alex/.minikube/profiles/minikube/proxy-client.key ...
I0525 13:20:10.764119  158763 lock.go:36] WriteFile acquiring /home/alex/.minikube/profiles/minikube/proxy-client.key: {Name:mkd7678162460e9424cbc2bd5205760361dfb45c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0525 13:20:10.764247  158763 certs.go:361] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/ca-key.pem (1679 bytes)
I0525 13:20:10.764277  158763 certs.go:361] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/ca.pem (1029 bytes)
I0525 13:20:10.764299  158763 certs.go:361] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/cert.pem (1070 bytes)
I0525 13:20:10.764333  158763 certs.go:361] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/key.pem (1679 bytes)
I0525 13:20:10.765127  158763 ssh_runner.go:316] scp /home/alex/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0525 13:20:10.786266  158763 ssh_runner.go:316] scp /home/alex/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0525 13:20:10.812965  158763 ssh_runner.go:316] scp /home/alex/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0525 13:20:10.833236  158763 ssh_runner.go:316] scp /home/alex/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0525 13:20:10.853045  158763 ssh_runner.go:316] scp /home/alex/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0525 13:20:10.874384  158763 ssh_runner.go:316] scp /home/alex/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0525 13:20:10.895841  158763 ssh_runner.go:316] scp /home/alex/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0525 13:20:10.919501  158763 ssh_runner.go:316] scp /home/alex/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0525 13:20:10.943213  158763 ssh_runner.go:316] scp /home/alex/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0525 13:20:10.967182  158763 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0525 13:20:10.979868  158763 ssh_runner.go:149] Run: openssl version
I0525 13:20:10.985853  158763 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0525 13:20:10.995769  158763 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0525 13:20:10.998515  158763 certs.go:402] hashing: -rw-r--r-- 1 root root 1066 May 11  2020 /usr/share/ca-certificates/minikubeCA.pem
I0525 13:20:10.998609  158763 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0525 13:20:11.005208  158763 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0525 13:20:11.015344  158763 kubeadm.go:381] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8192 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:auto NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
I0525 13:20:11.015437  158763 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0525 13:20:11.015514  158763 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0525 13:20:11.043186  158763 cri.go:76] found id: ""
I0525 13:20:11.043295  158763 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0525 13:20:11.050034  158763 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0525 13:20:11.057718  158763 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0525 13:20:11.057798  158763 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0525 13:20:11.064531  158763 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0525 13:20:11.064576  158763 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
W0525 13:20:11.392739  158763 out.go:424] no arguments passed for "    ▪ Generating certificates and keys ..." - returning raw string
W0525 13:20:11.392773  158763 out.go:424] no arguments passed for "    ▪ Generating certificates and keys ..." - returning raw string
I0525 13:20:11.395660  158763 out.go:197]     ▪ Generating certificates and keys ...
W0525 13:20:13.758568  158763 out.go:424] no arguments passed for "    ▪ Booting up control plane ..." - returning raw string
W0525 13:20:13.758623  158763 out.go:424] no arguments passed for "    ▪ Booting up control plane ..." - returning raw string
I0525 13:20:13.763173  158763 out.go:197]     ▪ Booting up control plane ...

W0525 13:20:32.320071  158763 out.go:424] no arguments passed for "    ▪ Configuring RBAC rules ..." - returning raw string
W0525 13:20:32.320130  158763 out.go:424] no arguments passed for "    ▪ Configuring RBAC rules ..." - returning raw string
I0525 13:20:32.324180  158763 out.go:197]     ▪ Configuring RBAC rules ...

I0525 13:20:32.755513  158763 cni.go:93] Creating CNI manager for "auto"
I0525 13:20:32.755542  158763 cni.go:154] 1 nodes found, recommending kindnet
I0525 13:20:32.758183  158763 out.go:170] 🔗  Configuring CNI (Container Networking Interface) ...

I0525 13:20:32.758256  158763 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
I0525 13:20:32.761846  158763 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
I0525 13:20:32.761864  158763 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0525 13:20:32.775703  158763 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0525 13:20:33.244305  158763 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0525 13:20:33.244384  158763 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_05_25T13_20_33_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0525 13:20:33.244388  158763 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0525 13:20:33.324978  158763 ops.go:34] apiserver oom_adj: -16
I0525 13:20:33.325061  158763 kubeadm.go:977] duration metric: took 80.761164ms to wait for elevateKubeSystemPrivileges.
I0525 13:20:33.325098  158763 kubeadm.go:383] StartCluster complete in 22.309763106s
I0525 13:20:33.325125  158763 settings.go:142] acquiring lock: {Name:mk627fa28a1976656e27a48af7f606caf0283542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0525 13:20:33.325226  158763 settings.go:150] Updating kubeconfig:  /home/alex/.kube/config
I0525 13:20:33.327487  158763 lock.go:36] WriteFile acquiring /home/alex/.kube/config: {Name:mka8437642e3e79f288f89b7a0971396de857b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0525 13:20:33.847663  158763 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0525 13:20:33.847717  158763 start.go:201] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
W0525 13:20:33.847759  158763 out.go:424] no arguments passed for "🔎  Verifying Kubernetes components...\n" - returning raw string
W0525 13:20:33.847782  158763 out.go:424] no arguments passed for "🔎  Verifying Kubernetes components...\n" - returning raw string
I0525 13:20:33.853616  158763 out.go:170] 🔎  Verifying Kubernetes components...
I0525 13:20:33.847803  158763 addons.go:328] enableAddons start: toEnable=map[], additional=[default-storageclass registry storage-provisioner]
I0525 13:20:33.853687  158763 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0525 13:20:33.853705  158763 addons.go:131] Setting addon storage-provisioner=true in "minikube"
I0525 13:20:33.853709  158763 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0525 13:20:33.853719  158763 addons.go:55] Setting registry=true in profile "minikube"
I0525 13:20:33.853737  158763 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0525 13:20:33.853743  158763 addons.go:131] Setting addon registry=true in "minikube"
I0525 13:20:33.853750  158763 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0525 13:20:33.853762  158763 host.go:66] Checking if "minikube" exists ...
W0525 13:20:33.853717  158763 addons.go:140] addon storage-provisioner should already be in state true
I0525 13:20:33.853802  158763 host.go:66] Checking if "minikube" exists ...
I0525 13:20:33.854179  158763 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0525 13:20:33.854341  158763 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0525 13:20:33.854350  158763 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0525 13:20:33.871349  158763 api_server.go:50] waiting for apiserver process to appear ...
I0525 13:20:33.871423  158763 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0525 13:20:33.889061  158763 api_server.go:70] duration metric: took 41.293149ms to wait for apiserver process to appear ...
I0525 13:20:33.889085  158763 api_server.go:86] waiting for apiserver healthz status ...
I0525 13:20:33.889099  158763 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0525 13:20:33.898723  158763 out.go:170]     ▪ Using image registry:2.7.1
I0525 13:20:33.901167  158763 out.go:170]     ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0525 13:20:33.903605  158763 out.go:170]     ▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
I0525 13:20:33.901248  158763 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0525 13:20:33.903682  158763 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0525 13:20:33.903716  158763 addons.go:261] installing /etc/kubernetes/addons/registry-rc.yaml
I0525 13:20:33.903732  158763 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
I0525 13:20:33.903797  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:20:33.903735  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:20:33.904294  158763 api_server.go:249] https://192.168.49.2:8443/healthz returned 200:
ok
I0525 13:20:33.904924  158763 api_server.go:139] control plane version: v1.20.2
I0525 13:20:33.904953  158763 api_server.go:129] duration metric: took 15.861975ms to wait for apiserver health ...
I0525 13:20:33.904975  158763 system_pods.go:43] waiting for kube-system pods to appear ...
I0525 13:20:33.905614  158763 addons.go:131] Setting addon default-storageclass=true in "minikube"
W0525 13:20:33.905635  158763 addons.go:140] addon default-storageclass should already be in state true
I0525 13:20:33.905659  158763 host.go:66] Checking if "minikube" exists ...
I0525 13:20:33.906259  158763 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0525 13:20:33.913841  158763 system_pods.go:59] 0 kube-system pods found
I0525 13:20:33.913881  158763 retry.go:31] will retry after 305.063636ms: only 0 pod(s) have shown up
I0525 13:20:33.945622  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/alex/.minikube/machines/minikube/id_rsa Username:docker}
I0525 13:20:33.953711  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/alex/.minikube/machines/minikube/id_rsa Username:docker}
I0525 13:20:33.956873  158763 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0525 13:20:33.956907  158763 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0525 13:20:33.957019  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:20:33.995104  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/alex/.minikube/machines/minikube/id_rsa Username:docker}
I0525 13:20:34.037156  158763 addons.go:261] installing /etc/kubernetes/addons/registry-svc.yaml
I0525 13:20:34.037184  158763 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0525 13:20:34.047374  158763 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0525 13:20:34.052354  158763 addons.go:261] installing /etc/kubernetes/addons/registry-proxy.yaml
I0525 13:20:34.052381  158763 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
I0525 13:20:34.067138  158763 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0525 13:20:34.091841  158763 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0525 13:20:34.221298  158763 system_pods.go:59] 0 kube-system pods found
I0525 13:20:34.221329  158763 retry.go:31] will retry after 338.212508ms: only 0 pod(s) have shown up
I0525 13:20:34.305342  158763 addons.go:299] Verifying addon registry=true in "minikube"
I0525 13:20:34.308076  158763 out.go:170] 🔎  Verifying registry addon...
🔎  Verifying registry addon...
I0525 13:20:34.312833  158763 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0525 13:20:34.314677  158763 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
I0525 13:20:34.563905  158763 system_pods.go:59] 1 kube-system pods found
I0525 13:20:34.563955  158763 system_pods.go:61] "storage-provisioner" [ff85781b-d95d-4623-aa8c-795806b09b2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0525 13:20:34.563975  158763 retry.go:31] will retry after 378.459802ms: only 1 pod(s) have shown up
I0525 13:20:34.945551  158763 system_pods.go:59] 1 kube-system pods found
I0525 13:20:34.945587  158763 system_pods.go:61] "storage-provisioner" [ff85781b-d95d-4623-aa8c-795806b09b2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0525 13:20:34.945602  158763 retry.go:31] will retry after 469.882201ms: only 1 pod(s) have shown up
I0525 13:20:35.418711  158763 system_pods.go:59] 1 kube-system pods found
I0525 13:20:35.418749  158763 system_pods.go:61] "storage-provisioner" [ff85781b-d95d-4623-aa8c-795806b09b2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0525 13:20:35.418769  158763 retry.go:31] will retry after 667.365439ms: only 1 pod(s) have shown up
I0525 13:20:36.089599  158763 system_pods.go:59] 1 kube-system pods found
I0525 13:20:36.089637  158763 system_pods.go:61] "storage-provisioner" [ff85781b-d95d-4623-aa8c-795806b09b2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0525 13:20:36.089653  158763 retry.go:31] will retry after 597.243124ms: only 1 pod(s) have shown up
I0525 13:20:36.690437  158763 system_pods.go:59] 1 kube-system pods found
I0525 13:20:36.690479  158763 system_pods.go:61] "storage-provisioner" [ff85781b-d95d-4623-aa8c-795806b09b2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0525 13:20:36.690498  158763 retry.go:31] will retry after 789.889932ms: only 1 pod(s) have shown up
I0525 13:20:37.483982  158763 system_pods.go:59] 1 kube-system pods found
I0525 13:20:37.484024  158763 system_pods.go:61] "storage-provisioner" [ff85781b-d95d-4623-aa8c-795806b09b2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0525 13:20:37.484041  158763 retry.go:31] will retry after 951.868007ms: only 1 pod(s) have shown up
I0525 13:20:38.439612  158763 system_pods.go:59] 1 kube-system pods found
I0525 13:20:38.439658  158763 system_pods.go:61] "storage-provisioner" [ff85781b-d95d-4623-aa8c-795806b09b2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0525 13:20:38.439678  158763 retry.go:31] will retry after 1.341783893s: only 1 pod(s) have shown up
I0525 13:20:39.784738  158763 system_pods.go:59] 1 kube-system pods found
I0525 13:20:39.784781  158763 system_pods.go:61] "storage-provisioner" [ff85781b-d95d-4623-aa8c-795806b09b2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0525 13:20:39.784802  158763 retry.go:31] will retry after 1.876813009s: only 1 pod(s) have shown up
I0525 13:20:41.665529  158763 system_pods.go:59] 1 kube-system pods found
I0525 13:20:41.665577  158763 system_pods.go:61] "storage-provisioner" [ff85781b-d95d-4623-aa8c-795806b09b2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0525 13:20:41.665597  158763 retry.go:31] will retry after 2.6934314s: only 1 pod(s) have shown up
I0525 13:20:44.363376  158763 system_pods.go:59] 5 kube-system pods found
I0525 13:20:44.363413  158763 system_pods.go:61] "etcd-minikube" [61cb50fa-adf9-4a66-a519-9ed2239ebd4d] Pending
I0525 13:20:44.363426  158763 system_pods.go:61] "kube-apiserver-minikube" [64a4e4f2-d2a6-4327-9f38-5289e6f8c0f9] Pending
I0525 13:20:44.363437  158763 system_pods.go:61] "kube-controller-manager-minikube" [46548b18-3e93-496e-968e-a99afe47cbe6] Pending
I0525 13:20:44.363448  158763 system_pods.go:61] "kube-scheduler-minikube" [051a7ac9-f11b-40d4-a40c-6b2ab156f6f4] Pending
I0525 13:20:44.363463  158763 system_pods.go:61] "storage-provisioner" [ff85781b-d95d-4623-aa8c-795806b09b2b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0525 13:20:44.363477  158763 system_pods.go:74] duration metric: took 10.458487944s to wait for pod list to return data ...
I0525 13:20:44.363492  158763 kubeadm.go:538] duration metric: took 10.515735519s to wait for : map[apiserver:true system_pods:true] ...
I0525 13:20:44.363512  158763 node_conditions.go:102] verifying NodePressure condition ...
I0525 13:20:44.367500  158763 node_conditions.go:122] node storage ephemeral capacity is 238798492Ki
I0525 13:20:44.367526  158763 node_conditions.go:123] node cpu capacity is 16
I0525 13:20:44.367551  158763 node_conditions.go:105] duration metric: took 4.027902ms to run NodePressure ...
I0525 13:20:44.367564  158763 start.go:206] waiting for startup goroutines ...
I0525 13:20:48.318578  158763 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
I0525 13:20:48.318606  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:48.818655  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:49.319231  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:49.818582  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:50.319180  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:50.818370  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:51.318892  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:51.819670  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:52.319553  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:52.819288  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:53.318679  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:53.818946  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:54.321934  158763 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0525 13:20:54.321964  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:54.819725  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:55.319306  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:55.819313  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:56.319430  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:56.818731  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:57.319744  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:57.819696  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:58.319730  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:58.819509  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:59.319331  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:20:59.821086  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:21:00.318970  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:21:00.819083  158763 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0525 13:21:01.319000  158763 kapi.go:108] duration metric: took 27.00616237s to wait for kubernetes.io/minikube-addons=registry ...
I0525 13:21:01.321738  158763 out.go:170] 🌟  Enabled addons: storage-provisioner, default-storageclass, registry
🌟  Enabled addons: storage-provisioner, default-storageclass, registry
I0525 13:21:01.321783  158763 addons.go:330] enableAddons completed in 27.474004346s
I0525 13:21:01.326411  158763 out.go:170] 

I0525 13:21:01.326821  158763 profile.go:148] Saving config to /home/alex/.minikube/profiles/minikube/config.json ...
I0525 13:21:01.329640  158763 out.go:170] 👍  Starting node minikube-m02 in cluster minikube
👍  Starting node minikube-m02 in cluster minikube
I0525 13:21:01.329679  158763 cache.go:111] Beginning downloading kic base image for docker with containerd
W0525 13:21:01.329694  158763 out.go:424] no arguments passed for "🚜  Pulling base image ...\n" - returning raw string
W0525 13:21:01.329716  158763 out.go:424] no arguments passed for "🚜  Pulling base image ...\n" - returning raw string
I0525 13:21:01.335592  158763 out.go:170] 🚜  Pulling base image ...
🚜  Pulling base image ...
I0525 13:21:01.335644  158763 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
I0525 13:21:01.335696  158763 preload.go:106] Found local preload: /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
I0525 13:21:01.335706  158763 cache.go:54] Caching tarball of preloaded images
I0525 13:21:01.335728  158763 preload.go:132] Found /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0525 13:21:01.335729  158763 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
I0525 13:21:01.335740  158763 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on containerd
I0525 13:21:01.335781  158763 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
I0525 13:21:01.335795  158763 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
I0525 13:21:01.335837  158763 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon
I0525 13:21:01.335902  158763 profile.go:148] Saving config to /home/alex/.minikube/profiles/minikube/config.json ...
I0525 13:21:01.401187  158763 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull
I0525 13:21:01.401217  158763 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull
I0525 13:21:01.401232  158763 cache.go:194] Successfully downloaded all kic artifacts
I0525 13:21:01.401273  158763 start.go:313] acquiring machines lock for minikube-m02: {Name:mk211c6eaaecc693e253d70f9aa7ed66134c8127 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0525 13:21:01.401376  158763 start.go:317] acquired machines lock for "minikube-m02" in 79.771µs
I0525 13:21:01.401399  158763 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8192 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:auto NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true registry:true storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true} &{Name:m02 IP: Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}
I0525 13:21:01.401489  158763 start.go:126] createHost starting for "m02" (driver="docker")
I0525 13:21:01.404315  158763 out.go:197] 🔥  Creating docker container (CPUs=4, Memory=8192MB) ...
🔥  Creating docker container (CPUs=4, Memory=8192MB) ...
I0525 13:21:01.404448  158763 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0525 13:21:01.404477  158763 client.go:168] LocalClient.Create starting
I0525 13:21:01.404549  158763 main.go:128] libmachine: Reading certificate data from /home/alex/.minikube/certs/ca.pem
I0525 13:21:01.404595  158763 main.go:128] libmachine: Decoding PEM data...
I0525 13:21:01.404629  158763 main.go:128] libmachine: Parsing certificate...
I0525 13:21:01.404809  158763 main.go:128] libmachine: Reading certificate data from /home/alex/.minikube/certs/cert.pem
I0525 13:21:01.404850  158763 main.go:128] libmachine: Decoding PEM data...
I0525 13:21:01.404879  158763 main.go:128] libmachine: Parsing certificate...
I0525 13:21:01.405262  158763 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0525 13:21:01.439674  158763 network_create.go:61] Found existing network {name:minikube subnet:0xc001120060 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I0525 13:21:01.439723  158763 kic.go:106] calculated static IP "192.168.49.3" for the "minikube-m02" container
I0525 13:21:01.439831  158763 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0525 13:21:01.476993  158763 cli_runner.go:115] Run: docker volume create minikube-m02 --label name.minikube.sigs.k8s.io=minikube-m02 --label created_by.minikube.sigs.k8s.io=true
I0525 13:21:01.518061  158763 oci.go:102] Successfully created a docker volume minikube-m02
I0525 13:21:01.518155  158763 cli_runner.go:115] Run: docker run --rm --name minikube-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube-m02 --entrypoint /usr/bin/test -v minikube-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib
I0525 13:21:02.299337  158763 oci.go:106] Successfully prepared a docker volume minikube-m02
W0525 13:21:02.299394  158763 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0525 13:21:02.299409  158763 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0525 13:21:02.299406  158763 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
I0525 13:21:02.299492  158763 preload.go:106] Found local preload: /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
I0525 13:21:02.299502  158763 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0525 13:21:02.299506  158763 kic.go:179] Starting extracting preloaded images to volume ...
I0525 13:21:02.299611  158763 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir
I0525 13:21:02.391071  158763 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube-m02 --name minikube-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube-m02 --network minikube --ip 192.168.49.3 --volume minikube-m02:/var --security-opt apparmor=unconfined --cpus=4 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e
I0525 13:21:02.922216  158763 cli_runner.go:115] Run: docker container inspect minikube-m02 --format={{.State.Running}}
I0525 13:21:02.965054  158763 cli_runner.go:115] Run: docker container inspect minikube-m02 --format={{.State.Status}}
I0525 13:21:03.003703  158763 cli_runner.go:115] Run: docker exec minikube-m02 stat /var/lib/dpkg/alternatives/iptables
I0525 13:21:03.106049  158763 oci.go:278] the created container "minikube-m02" has a running status.
I0525 13:21:03.106082  158763 kic.go:210] Creating ssh key for kic: /home/alex/.minikube/machines/minikube-m02/id_rsa...
I0525 13:21:03.287231  158763 kic_runner.go:188] docker (temp): /home/alex/.minikube/machines/minikube-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0525 13:21:03.411205  158763 cli_runner.go:115] Run: docker container inspect minikube-m02 --format={{.State.Status}}
I0525 13:21:03.460428  158763 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0525 13:21:03.460455  158763 kic_runner.go:115] Args: [docker exec --privileged minikube-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
I0525 13:21:06.307167  158763 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/alex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (4.007503271s)
I0525 13:21:06.307204  158763 kic.go:188] duration metric: took 4.007695 seconds to extract preloaded images to volume
I0525 13:21:06.307336  158763 cli_runner.go:115] Run: docker container inspect minikube-m02 --format={{.State.Status}}
I0525 13:21:06.345905  158763 machine.go:88] provisioning docker machine ...
I0525 13:21:06.345952  158763 ubuntu.go:169] provisioning hostname "minikube-m02"
I0525 13:21:06.346043  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0525 13:21:06.381936  158763 main.go:128] libmachine: Using SSH client type: native
I0525 13:21:06.382162  158763 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
I0525 13:21:06.382189  158763 main.go:128] libmachine: About to run SSH command:
sudo hostname minikube-m02 && echo "minikube-m02" | sudo tee /etc/hostname
I0525 13:21:06.519878  158763 main.go:128] libmachine: SSH cmd err, output: <nil>: minikube-m02

I0525 13:21:06.519992  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0525 13:21:06.561587  158763 main.go:128] libmachine: Using SSH client type: native
I0525 13:21:06.561779  158763 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
I0525 13:21:06.561808  158763 main.go:128] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0525 13:21:06.684595  158763 main.go:128] libmachine: SSH cmd err, output: <nil>: 
I0525 13:21:06.684631  158763 ubuntu.go:175] set auth options {CertDir:/home/alex/.minikube CaCertPath:/home/alex/.minikube/certs/ca.pem CaPrivateKeyPath:/home/alex/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/alex/.minikube/machines/server.pem ServerKeyPath:/home/alex/.minikube/machines/server-key.pem ClientKeyPath:/home/alex/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/alex/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/alex/.minikube}
I0525 13:21:06.684663  158763 ubuntu.go:177] setting up certificates
I0525 13:21:06.684677  158763 provision.go:83] configureAuth start
I0525 13:21:06.684766  158763 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m02
I0525 13:21:06.724437  158763 provision.go:137] copyHostCerts
I0525 13:21:06.724504  158763 exec_runner.go:145] found /home/alex/.minikube/ca.pem, removing ...
I0525 13:21:06.724519  158763 exec_runner.go:190] rm: /home/alex/.minikube/ca.pem
I0525 13:21:06.724602  158763 exec_runner.go:152] cp: /home/alex/.minikube/certs/ca.pem --> /home/alex/.minikube/ca.pem (1029 bytes)
I0525 13:21:06.724710  158763 exec_runner.go:145] found /home/alex/.minikube/cert.pem, removing ...
I0525 13:21:06.724723  158763 exec_runner.go:190] rm: /home/alex/.minikube/cert.pem
I0525 13:21:06.724767  158763 exec_runner.go:152] cp: /home/alex/.minikube/certs/cert.pem --> /home/alex/.minikube/cert.pem (1070 bytes)
I0525 13:21:06.724839  158763 exec_runner.go:145] found /home/alex/.minikube/key.pem, removing ...
I0525 13:21:06.724851  158763 exec_runner.go:190] rm: /home/alex/.minikube/key.pem
I0525 13:21:06.724891  158763 exec_runner.go:152] cp: /home/alex/.minikube/certs/key.pem --> /home/alex/.minikube/key.pem (1679 bytes)
I0525 13:21:06.724951  158763 provision.go:111] generating server cert: /home/alex/.minikube/machines/server.pem ca-key=/home/alex/.minikube/certs/ca.pem private-key=/home/alex/.minikube/certs/ca-key.pem org=alex.minikube-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube minikube-m02]
I0525 13:21:06.888149  158763 provision.go:165] copyRemoteCerts
I0525 13:21:06.888220  158763 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0525 13:21:06.888266  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0525 13:21:06.932441  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/alex/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0525 13:21:07.025563  158763 ssh_runner.go:316] scp /home/alex/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1029 bytes)
I0525 13:21:07.047321  158763 ssh_runner.go:316] scp /home/alex/.minikube/machines/server.pem --> /etc/docker/server.pem (1159 bytes)
I0525 13:21:07.067914  158763 ssh_runner.go:316] scp /home/alex/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0525 13:21:07.088666  158763 provision.go:86] duration metric: configureAuth took 403.9699ms
I0525 13:21:07.088697  158763 ubuntu.go:193] setting minikube options for container-runtime
I0525 13:21:07.088903  158763 machine.go:91] provisioned docker machine in 742.971827ms
I0525 13:21:07.088920  158763 client.go:171] LocalClient.Create took 5.684431272s
I0525 13:21:07.088939  158763 start.go:168] duration metric: libmachine.API.Create for "minikube" took 5.684492232s
I0525 13:21:07.088954  158763 start.go:267] post-start starting for "minikube-m02" (driver="docker")
I0525 13:21:07.088968  158763 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0525 13:21:07.089046  158763 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0525 13:21:07.089110  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0525 13:21:07.129169  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/alex/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0525 13:21:07.225839  158763 ssh_runner.go:149] Run: cat /etc/os-release
I0525 13:21:07.229086  158763 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0525 13:21:07.229121  158763 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0525 13:21:07.229141  158763 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0525 13:21:07.229157  158763 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0525 13:21:07.229171  158763 filesync.go:118] Scanning /home/alex/.minikube/addons for local assets ...
I0525 13:21:07.229299  158763 filesync.go:118] Scanning /home/alex/.minikube/files for local assets ...
I0525 13:21:07.229347  158763 start.go:270] post-start completed in 140.376337ms
I0525 13:21:07.229780  158763 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m02
I0525 13:21:07.276381  158763 profile.go:148] Saving config to /home/alex/.minikube/profiles/minikube/config.json ...
I0525 13:21:07.276726  158763 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0525 13:21:07.276802  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0525 13:21:07.318885  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/alex/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0525 13:21:07.408850  158763 start.go:129] duration metric: createHost completed in 6.007345551s
I0525 13:21:07.408882  158763 start.go:80] releasing machines lock for "minikube-m02", held for 6.007488731s
I0525 13:21:07.408990  158763 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m02
W0525 13:21:07.447897  158763 out.go:424] no arguments passed for "🌐  Found network options:\n" - returning raw string
I0525 13:21:07.450571  158763 out.go:170] 🌐  Found network options:

🌐  Found network options:
I0525 13:21:07.453025  158763 out.go:170]     ▪ NO_PROXY=192.168.49.2
    ▪ NO_PROXY=192.168.49.2
W0525 13:21:07.453090  158763 proxy.go:118] fail to check proxy env: Error ip not in block
W0525 13:21:07.453122  158763 proxy.go:118] fail to check proxy env: Error ip not in block
I0525 13:21:07.453220  158763 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0525 13:21:07.453267  158763 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0525 13:21:07.453283  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0525 13:21:07.453361  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0525 13:21:07.501635  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/alex/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0525 13:21:07.501928  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/alex/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0525 13:21:07.662624  158763 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0525 13:21:07.674169  158763 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
I0525 13:21:07.696031  158763 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
I0525 13:21:07.708040  158763 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
I0525 13:21:07.802154  158763 ssh_runner.go:149] Run: sudo systemctl mask docker.service
I0525 13:21:07.871834  158763 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0525 13:21:07.881637  158763 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0525 13:21:07.896937  158763 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5tayIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLmxpbnV4XQog
ICAgc2hpbSA9ICJjb250YWluZXJkLXNoaW0iCiAgICBydW50aW1lID0gInJ1bmMiCiAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgbm9fc2hpbSA9IGZhbHNlCiAgICBzaGltX2RlYnVnID0gZmFsc2UKICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
I0525 13:21:07.912869  158763 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0525 13:21:07.920052  158763 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0525 13:21:07.927340  158763 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0525 13:21:08.000281  158763 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0525 13:21:08.082984  158763 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock
I0525 13:21:08.083055  158763 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0525 13:21:08.087107  158763 start.go:393] Will wait 60s for crictl version
I0525 13:21:08.087164  158763 ssh_runner.go:149] Run: sudo crictl version
I0525 13:21:08.111177  158763 retry.go:31] will retry after 7.142638726s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:

stderr:
time="2021-05-25T17:21:08Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0525 13:21:15.254906  158763 ssh_runner.go:149] Run: sudo crictl version
I0525 13:21:15.281961  158763 start.go:402] Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  1.4.4
RuntimeApiVersion:  v1alpha2
I0525 13:21:15.282051  158763 ssh_runner.go:149] Run: containerd --version
I0525 13:21:15.310201  158763 out.go:170] 📦  Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
📦  Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
I0525 13:21:15.314897  158763 out.go:170]     ▪ env NO_PROXY=192.168.49.2
    ▪ env NO_PROXY=192.168.49.2
I0525 13:21:15.314983  158763 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0525 13:21:15.352592  158763 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
I0525 13:21:15.356021  158763 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0525 13:21:15.366042  158763 certs.go:52] Setting up /home/alex/.minikube/profiles/minikube for IP: 192.168.49.3
I0525 13:21:15.366100  158763 certs.go:171] skipping minikubeCA CA generation: /home/alex/.minikube/ca.key
I0525 13:21:15.366123  158763 certs.go:171] skipping proxyClientCA CA generation: /home/alex/.minikube/proxy-client-ca.key
I0525 13:21:15.366215  158763 certs.go:361] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/ca-key.pem (1679 bytes)
I0525 13:21:15.366260  158763 certs.go:361] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/ca.pem (1029 bytes)
I0525 13:21:15.366293  158763 certs.go:361] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/cert.pem (1070 bytes)
I0525 13:21:15.366324  158763 certs.go:361] found cert: /home/alex/.minikube/certs/home/alex/.minikube/certs/key.pem (1679 bytes)
I0525 13:21:15.366798  158763 ssh_runner.go:316] scp /home/alex/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0525 13:21:15.384317  158763 ssh_runner.go:316] scp /home/alex/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0525 13:21:15.403531  158763 ssh_runner.go:316] scp /home/alex/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0525 13:21:15.423715  158763 ssh_runner.go:316] scp /home/alex/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0525 13:21:15.443983  158763 ssh_runner.go:316] scp /home/alex/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0525 13:21:15.463732  158763 ssh_runner.go:149] Run: openssl version
I0525 13:21:15.468999  158763 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0525 13:21:15.477965  158763 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0525 13:21:15.481556  158763 certs.go:402] hashing: -rw-r--r-- 1 root root 1066 May 11  2020 /usr/share/ca-certificates/minikubeCA.pem
I0525 13:21:15.481617  158763 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0525 13:21:15.486872  158763 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0525 13:21:15.494909  158763 ssh_runner.go:149] Run: sudo crictl info
I0525 13:21:15.518849  158763 cni.go:93] Creating CNI manager for "auto"
I0525 13:21:15.518874  158763 cni.go:154] 2 nodes found, recommending kindnet
I0525 13:21:15.518897  158763 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0525 13:21:15.518927  158763 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube-m02 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0525 13:21:15.519094  158763 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "minikube-m02"
  kubeletExtraArgs:
    node-ip: 192.168.49.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249

I0525 13:21:15.519338  158763 kubeadm.go:901] kubelet [Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=minikube-m02 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:auto NodeIP: NodePort:8443 NodeName:}
I0525 13:21:15.519454  158763 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0525 13:21:15.527501  158763 binaries.go:44] Found k8s binaries, skipping transfer
I0525 13:21:15.527604  158763 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0525 13:21:15.535734  158763 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (557 bytes)
I0525 13:21:15.550494  158763 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0525 13:21:15.565898  158763 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
I0525 13:21:15.569024  158763 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0525 13:21:15.579663  158763 host.go:66] Checking if "minikube" exists ...
I0525 13:21:15.579957  158763 start.go:216] JoinCluster: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8192 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:auto NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true registry:true storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
I0525 13:21:15.580068  158763 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm token create --print-join-command --ttl=0"
I0525 13:21:15.580165  158763 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0525 13:21:15.622931  158763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/alex/.minikube/machines/minikube/id_rsa Username:docker}
I0525 13:21:15.780883  158763 start.go:237] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}
I0525 13:21:15.785172  158763 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm join control-plane.minikube.internal:8443 --token zlhspr.2b3cz5ezxgpkaqyx     --discovery-token-ca-cert-hash sha256:8d8b5731f40dfed689486a98fadb14bfa7322ebf9a7efb5703183f2638a41118 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=minikube-m02"
I0525 13:21:28.112881  158763 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm join control-plane.minikube.internal:8443 --token zlhspr.2b3cz5ezxgpkaqyx     --discovery-token-ca-cert-hash sha256:8d8b5731f40dfed689486a98fadb14bfa7322ebf9a7efb5703183f2638a41118 --ignore-preflight-errors=all --cri-socket /run/containerd/containerd.sock --node-name=minikube-m02": (12.327666945s)
I0525 13:21:28.112915  158763 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0525 13:21:28.292529  158763 start.go:218] JoinCluster complete in 12.712567339s
I0525 13:21:28.292559  158763 cni.go:93] Creating CNI manager for "auto"
I0525 13:21:28.292572  158763 cni.go:154] 2 nodes found, recommending kindnet
I0525 13:21:28.292652  158763 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
I0525 13:21:28.295835  158763 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
I0525 13:21:28.295851  158763 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0525 13:21:28.310722  158763 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0525 13:21:28.507750  158763 start.go:201] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}
W0525 13:21:28.507813  158763 out.go:424] no arguments passed for "🔎  Verifying Kubernetes components...\n" - returning raw string
W0525 13:21:28.507846  158763 out.go:424] no arguments passed for "🔎  Verifying Kubernetes components...\n" - returning raw string
I0525 13:21:28.510562  158763 out.go:170] 🔎  Verifying Kubernetes components...
🔎  Verifying Kubernetes components...
I0525 13:21:28.510669  158763 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0525 13:21:28.525844  158763 kubeadm.go:538] duration metric: took 18.034903ms to wait for : map[apiserver:true system_pods:true] ...
I0525 13:21:28.525875  158763 node_conditions.go:102] verifying NodePressure condition ...
I0525 13:21:28.529442  158763 node_conditions.go:122] node storage ephemeral capacity is 238798492Ki
I0525 13:21:28.529464  158763 node_conditions.go:123] node cpu capacity is 16
I0525 13:21:28.529478  158763 node_conditions.go:122] node storage ephemeral capacity is 238798492Ki
I0525 13:21:28.529491  158763 node_conditions.go:123] node cpu capacity is 16
I0525 13:21:28.529499  158763 node_conditions.go:105] duration metric: took 3.617211ms to run NodePressure ...
I0525 13:21:28.529509  158763 start.go:206] waiting for startup goroutines ...
I0525 13:21:28.578822  158763 start.go:460] kubectl: 1.21.0, cluster: 1.20.2 (minor skew: 1)
I0525 13:21:28.581773  158763 out.go:170] 🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@ilya-zuyev ilya-zuyev added the kind/support Categorizes issue or PR as a support question. label May 25, 2021
@andriyDev andriyDev added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Jun 2, 2021
@andriyDev commented Jun 2, 2021

This is a known problem. Help wanted!

@spowelljr spowelljr added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. and removed kind/support Categorizes issue or PR as a support question. labels Jul 14, 2021
@afbjorklund commented Aug 9, 2021

Hmm, the registry proxy is supposed to be a DaemonSet, so it should be running on every node?

description here: cluster/addons/registry#expose-the-registry-on-each-node

But it is also very ancient (2016), so there could be something else wrong with it (nginx).

https://github.com/kubernetes/kubernetes/tree/21b5afa8104568ad3e1b195327c7754e92c63812/cluster/addons/registry


EDIT: Worked for me, though.

docker@minikube:~$ curl http://localhost:5000/v2/
{}docker@minikube:~$ exit
docker@minikube-m02:~$ curl http://localhost:5000/v2/
{}docker@minikube-m02:~$ logout
$ minikube kubectl -- get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-2wtm4           1/1     Running   0          103s
kube-system   etcd-minikube                      1/1     Running   0          104s
kube-system   kindnet-95m5x                      1/1     Running   0          103s
kube-system   kindnet-md6b9                      1/1     Running   0          52s
kube-system   kube-apiserver-minikube            1/1     Running   0          119s
kube-system   kube-controller-manager-minikube   1/1     Running   0          104s
kube-system   kube-proxy-bk7kf                   1/1     Running   0          103s
kube-system   kube-proxy-xp6q7                   1/1     Running   0          52s
kube-system   kube-scheduler-minikube            1/1     Running   0          104s
kube-system   registry-6fv4q                     1/1     Running   0          103s
kube-system   registry-proxy-rvlq7               1/1     Running   0          94s
kube-system   registry-proxy-sh9lc               1/1     Running   0          42s
kube-system   storage-provisioner                1/1     Running   0          112s
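For context on the mechanism discussed above: the addon's `registry-proxy` DaemonSet makes `localhost:5000` resolve to the in-cluster registry on every node by binding a hostPort and forwarding to the registry Service. A rough sketch of that shape (the image name, env vars, and labels here are illustrative approximations, not the exact addon manifest):

```yaml
# Illustrative sketch of a registry-proxy DaemonSet
# (image and names are approximations, not copied from the minikube addon)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      kubernetes.io/minikube-addons: registry
  template:
    metadata:
      labels:
        kubernetes.io/minikube-addons: registry
    spec:
      containers:
        - name: registry-proxy
          image: gcr.io/google_containers/kube-registry-proxy:0.4  # illustrative
          env:
            - name: REGISTRY_HOST
              value: registry.kube-system.svc.cluster.local  # in-cluster registry Service
            - name: REGISTRY_PORT
              value: "80"
          ports:
            - name: registry
              containerPort: 80
              hostPort: 5000  # this is what makes localhost:5000 work on each node
```

If the DaemonSet pod is running on the second node (as in the `registry-proxy-sh9lc` pod above) but pulls still fail there, the problem is likely between the container runtime and the hostPort binding rather than in pod scheduling.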

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 7, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 7, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot

@k8s-triage-robot: Closing this issue.

In response to the /close command in the k8s-triage-robot comment above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
