
Failed minikube start on Windows 10: error execution phase certs/ca: failure loading ca certificate: failed to load certificate: the certificate is not valid yet #10474

Closed
liakaz opened this issue Feb 14, 2021 · 3 comments
Labels
kind/support: Categorizes issue or PR as a support question.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments


liakaz commented Feb 14, 2021

Steps to reproduce the issue:

  1. Failure happens with the latest minikube 1.17.1; after downgrading to 1.14.0 the failure persists
  2. Tried the latest Kubernetes version as well as v1.19.0 (the exact commands are shown below)
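
Roughly, the reproduction from an elevated prompt boils down to the following (a minikube delete beforehand to clear any previous cluster state is assumed):

C:\Windows\System32> minikube delete
C:\Windows\System32> minikube start --kubernetes-version v1.19.0 --alsologtostderr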

Full output of the failed minikube start command:

C:\Windows\System32> minikube start --kubernetes-version v1.19.0 --alsologtostderr
I0213 23:28:59.986461    1368 out.go:191] Setting JSON to false
I0213 23:28:59.995464    1368 start.go:103] hostinfo: {"hostname":"snorlaxium2","uptime":170658,"bootTime":1613117081,"procs":275,"os":"windows","platform":"Microsoft Windows 10 Enterprise","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"0e918d69-fe6d-4c07-b5f0-26725e3a9e40"}
W0213 23:28:59.995464    1368 start.go:111] gopshost.Virtualization returned error: not implemented yet
I0213 23:29:00.044462    1368 out.go:109] * minikube v1.14.0 on Microsoft Windows 10 Enterprise 10.0.19042 Build 19042
* minikube v1.14.0 on Microsoft Windows 10 Enterprise 10.0.19042 Build 19042
I0213 23:29:00.047466    1368 driver.go:288] Setting default libvirt URI to qemu:///system
I0213 23:29:00.047466    1368 global.go:102] Querying for installed drivers using PATH=C:\Program Files\PowerShell\7;C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin;C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\Scripts\;C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\;C:\Program Files\Common Files\Microsoft Shared\Microsoft Online Services;C:\Program Files (x86)\Common Files\Microsoft Shared\Microsoft Online Services;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\dotnet\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\;C:\Program Files\erl10.3\bin;C:\Program Files\Git\cmd;C:\Program Files\PowerShell\7\;C:\Program Files\Docker\Docker\resources\bin;C:\ProgramData\DockerDesktop\version-bin;C:\Program Files (x86)\dotnet\;C:\ProgramData\chocolatey\bin;C:\Program Files\Kubernetes\Minikube;C:\Program Files\Docker\Docker\Resources\bin;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\windows\System32\OpenSSH\;C:\Program Files\PowerShell\6\;C:\Users\liakaz\AppData\Local\Microsoft\WindowsApps;C:\Users\liakaz\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\liakaz\AppData\Local\Programs\Fiddler;C:\Users\liakaz\AppData\Roaming\Python\Python37\Scripts;C:\Users\liakaz\.dotnet\tools;C:\Users\liakaz\.dotnet\tools;C:\Users\liakaz\AppData\Local\Microsoft\WindowsApps
I0213 23:29:00.071460    1368 global.go:110] vmware priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in %PATH% Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0213 23:29:00.419280    1368 docker.go:117] docker version: linux-19.03.8
I0213 23:29:00.433252    1368 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0213 23:29:05.148011    1368 cli_runner.go:154] Completed: docker system info --format "{{json .}}": (4.7147765s)
I0213 23:29:05.148011    1368 info.go:253] docker info: {ID:AFBG:XCX3:Z7EI:SHGQ:BASF:A6IF:LYUV:MRYC:3XNT:FV25:Z27U:Y7P2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem <unknown>] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-02-13 04:56:11.971530695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:2087813120 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0213 23:29:05.149981    1368 global.go:110] docker priority: 8, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0213 23:29:08.454112    1368 global.go:110] hyperv priority: 7, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0213 23:29:08.478110    1368 global.go:110] podman priority: 2, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in %PATH% Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0213 23:29:09.063445    1368 global.go:110] virtualbox priority: 5, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0213 23:29:09.063445    1368 driver.go:270] Picked: docker
I0213 23:29:09.065405    1368 driver.go:271] Alternatives: [hyperv virtualbox]
I0213 23:29:09.068404    1368 driver.go:272] Rejects: [vmware podman]
I0213 23:29:09.079445    1368 out.go:109] * Automatically selected the docker driver. Other choices: hyperv, virtualbox
* Automatically selected the docker driver. Other choices: hyperv, virtualbox
I0213 23:29:09.079445    1368 start.go:272] selected driver: docker
I0213 23:29:09.080401    1368 start.go:680] validating driver "docker" against <nil>
I0213 23:29:09.083410    1368 start.go:691] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0213 23:29:09.106434    1368 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0213 23:29:10.587325    1368 cli_runner.go:154] Completed: docker system info --format "{{json .}}": (1.4808964s)
I0213 23:29:10.587325    1368 info.go:253] docker info: {ID:AFBG:XCX3:Z7EI:SHGQ:BASF:A6IF:LYUV:MRYC:3XNT:FV25:Z27U:Y7P2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem <unknown>] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2021-02-13 04:56:20.738318395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:2087813120 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0213 23:29:10.589295    1368 start_flags.go:228] no existing cluster config was found, will generate one from the flags
I0213 23:29:11.692666    1368 start_flags.go:246] Using suggested 1991MB memory alloc based on sys=16301MB, container=1991MB
I0213 23:29:11.693668    1368 start_flags.go:626] Wait components to verify : map[apiserver:true system_pods:true]
I0213 23:29:11.694670    1368 cni.go:74] Creating CNI manager for ""
I0213 23:29:11.694670    1368 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0213 23:29:11.695679    1368 start_flags.go:353] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:1991 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I0213 23:29:11.744696    1368 out.go:109] * Starting control plane node minikube in cluster minikube
* Starting control plane node minikube in cluster minikube
I0213 23:29:12.095205    1368 cache.go:119] Beginning downloading kic base image for docker with docker
I0213 23:29:12.153165    1368 out.go:109] * Pulling base image ...
* Pulling base image ...
I0213 23:29:12.155165    1368 preload.go:97] Checking if preload exists for k8s version v1.19.0 and runtime docker
I0213 23:29:12.155165    1368 localpath.go:128] windows sanitize: C:\Users\liakaz\.minikube\cache\images\gcr.io\k8s-minikube\kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -> C:\Users\liakaz\.minikube\cache\images\gcr.io\k8s-minikube\kicbase_v0.0.13@sha256_4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f
I0213 23:29:12.156187    1368 preload.go:105] Found local preload: C:\Users\liakaz\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v6-v1.19.0-docker-overlay2-amd64.tar.lz4
I0213 23:29:12.157169    1368 cache.go:53] Caching tarball of preloaded images
I0213 23:29:12.156187    1368 cache.go:142] Downloading gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f to local daemon
I0213 23:29:12.158168    1368 preload.go:131] Found C:\Users\liakaz\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v6-v1.19.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0213 23:29:12.158168    1368 image.go:140] Writing gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f to local daemon
I0213 23:29:12.159166    1368 cache.go:56] Finished verifying existence of preloaded tar for  v1.19.0 on docker
I0213 23:29:12.160164    1368 profile.go:150] Saving config to C:\Users\liakaz\.minikube\profiles\minikube\config.json ...
I0213 23:29:12.160164    1368 lock.go:35] WriteFile acquiring C:\Users\liakaz\.minikube\profiles\minikube\config.json: {Name:mk091a90a53610db4a36f99cd17106add260febb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 23:30:28.944267    1368 cache.go:145] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f
I0213 23:30:28.945263    1368 cache.go:182] Successfully downloaded all kic artifacts
I0213 23:30:28.949263    1368 start.go:314] acquiring machines lock for minikube: {Name:mkf4e98171a5682457361302c86f7f57b491388b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0213 23:30:28.951266    1368 start.go:318] acquired machines lock for "minikube" in 999.2µs
I0213 23:30:28.952268    1368 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:1991 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.19.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]} &{Name: IP: Port:8443 KubernetesVersion:v1.19.0 ControlPlane:true Worker:true}
I0213 23:30:28.952268    1368 start.go:127] createHost starting for "" (driver="docker")
I0213 23:30:29.003278    1368 out.go:109] * Creating docker container (CPUs=2, Memory=1991MB) ...
* Creating docker container (CPUs=2, Memory=1991MB) ...
I0213 23:30:29.006265    1368 start.go:164] libmachine.API.Create for "minikube" (driver="docker")
I0213 23:30:29.006265    1368 client.go:165] LocalClient.Create starting
I0213 23:30:29.007265    1368 main.go:118] libmachine: Reading certificate data from C:\Users\liakaz\.minikube\certs\ca.pem
I0213 23:30:29.019273    1368 main.go:118] libmachine: Decoding PEM data...
I0213 23:30:29.020301    1368 main.go:118] libmachine: Parsing certificate...
I0213 23:30:29.021263    1368 main.go:118] libmachine: Reading certificate data from C:\Users\liakaz\.minikube\certs\cert.pem
I0213 23:30:29.032270    1368 main.go:118] libmachine: Decoding PEM data...
I0213 23:30:29.032270    1368 main.go:118] libmachine: Parsing certificate...
I0213 23:30:29.066268    1368 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}}"
W0213 23:30:29.473263    1368 cli_runner.go:148] docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}}" returned with exit code 1
I0213 23:30:29.488265    1368 network_create.go:131] running [docker network inspect minikube] to gather additional debugging logs...
I0213 23:30:29.488265    1368 cli_runner.go:110] Run: docker network inspect minikube
W0213 23:30:29.830901    1368 cli_runner.go:148] docker network inspect minikube returned with exit code 1
I0213 23:30:29.830901    1368 network_create.go:134] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0213 23:30:29.832895    1368 network_create.go:136] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I0213 23:30:29.832895    1368 network_create.go:85] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1...
I0213 23:30:29.843899    1368 cli_runner.go:110] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube
I0213 23:30:30.358892    1368 kic.go:93] calculated static IP "192.168.49.2" for the "minikube" container
I0213 23:30:30.383892    1368 cli_runner.go:110] Run: docker ps -a --format {{.Names}}
I0213 23:30:30.731772    1368 cli_runner.go:110] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0213 23:30:31.089485    1368 oci.go:101] Successfully created a docker volume minikube
I0213 23:30:31.104449    1368 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -d /var/lib
I0213 23:30:33.613585    1368 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -d /var/lib: (2.5091452s)
I0213 23:30:33.613585    1368 oci.go:105] Successfully prepared a docker volume minikube
I0213 23:30:33.615559    1368 preload.go:97] Checking if preload exists for k8s version v1.19.0 and runtime docker
I0213 23:30:33.615559    1368 preload.go:105] Found local preload: C:\Users\liakaz\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v6-v1.19.0-docker-overlay2-amd64.tar.lz4
I0213 23:30:33.616558    1368 kic.go:148] Starting extracting preloaded images to volume ...
I0213 23:30:33.626585    1368 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0213 23:30:33.627556    1368 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\liakaz\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v6-v1.19.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -I lz4 -xvf /preloaded.tar -C /extractDir
I0213 23:30:35.057356    1368 cli_runner.go:154] Completed: docker system info --format "{{json .}}": (1.4307769s)
I0213 23:30:35.057356    1368 info.go:253] docker info: {ID:AFBG:XCX3:Z7EI:SHGQ:BASF:A6IF:LYUV:MRYC:3XNT:FV25:Z27U:Y7P2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:47 Driver:overlay2 DriverStatus:[[Backing Filesystem <unknown>] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2021-02-13 04:57:45.239562495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:2087813120 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:19.03.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0213 23:30:35.076353    1368 cli_runner.go:110] Run: docker info --format "'{{json .SecurityOptions}}'"
I0213 23:30:36.462492    1368 cli_runner.go:154] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.3861433s)
I0213 23:30:36.474460    1368 cli_runner.go:110] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=1991mb --memory-swap=1991mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f
I0213 23:30:38.282716    1368 cli_runner.go:154] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=1991mb --memory-swap=1991mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f: (1.808263s)
I0213 23:30:38.299718    1368 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Running}}
I0213 23:30:38.702675    1368 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0213 23:30:39.208139    1368 cli_runner.go:110] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0213 23:30:39.749143    1368 oci.go:244] the created container "minikube" has a running status.
I0213 23:30:39.749143    1368 kic.go:179] Creating ssh key for kic: C:\Users\liakaz\.minikube\machines\minikube\id_rsa...
I0213 23:30:39.971141    1368 kic_runner.go:179] docker (temp): C:\Users\liakaz\.minikube\machines\minikube\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0213 23:30:40.536593    1368 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0213 23:30:41.009232    1368 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0213 23:30:41.009669    1368 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0213 23:31:29.675028    1368 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\liakaz\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v6-v1.19.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -I lz4 -xvf /preloaded.tar -C /extractDir: (56.0476553s)
I0213 23:31:29.675028    1368 kic.go:157] duration metric: took 56.058654 seconds to extract preloaded images to volume
I0213 23:31:29.706029    1368 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0213 23:31:30.192025    1368 machine.go:88] provisioning docker machine ...
I0213 23:31:30.192025    1368 ubuntu.go:166] provisioning hostname "minikube"
I0213 23:31:30.208024    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0213 23:31:30.646093    1368 main.go:118] libmachine: Using SSH client type: native
I0213 23:31:30.647096    1368 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
I0213 23:31:30.648097    1368 main.go:118] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0213 23:31:30.843256    1368 main.go:118] libmachine: SSH cmd err, output: <nil>: minikube

I0213 23:31:30.856254    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0213 23:31:31.248254    1368 main.go:118] libmachine: Using SSH client type: native
I0213 23:31:31.249254    1368 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
I0213 23:31:31.251254    1368 main.go:118] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
                        fi
                fi
I0213 23:31:31.459306    1368 main.go:118] libmachine: SSH cmd err, output: <nil>:
I0213 23:31:31.459306    1368 ubuntu.go:172] set auth options {CertDir:C:\Users\liakaz\.minikube CaCertPath:C:\Users\liakaz\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\liakaz\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\liakaz\.minikube\machines\server.pem ServerKeyPath:C:\Users\liakaz\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\liakaz\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\liakaz\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\liakaz\.minikube}
I0213 23:31:31.461305    1368 ubuntu.go:174] setting up certificates
I0213 23:31:31.461305    1368 provision.go:82] configureAuth start
I0213 23:31:31.477306    1368 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0213 23:31:31.837335    1368 provision.go:131] copyHostCerts
I0213 23:31:31.837335    1368 exec_runner.go:91] found C:\Users\liakaz\.minikube/ca.pem, removing ...
I0213 23:31:31.839304    1368 exec_runner.go:98] cp: C:\Users\liakaz\.minikube\certs\ca.pem --> C:\Users\liakaz\.minikube/ca.pem (1078 bytes)
I0213 23:31:31.852334    1368 exec_runner.go:91] found C:\Users\liakaz\.minikube/cert.pem, removing ...
I0213 23:31:31.852334    1368 exec_runner.go:98] cp: C:\Users\liakaz\.minikube\certs\cert.pem --> C:\Users\liakaz\.minikube/cert.pem (1123 bytes)
I0213 23:31:31.866310    1368 exec_runner.go:91] found C:\Users\liakaz\.minikube/key.pem, removing ...
I0213 23:31:31.867310    1368 exec_runner.go:98] cp: C:\Users\liakaz\.minikube\certs\key.pem --> C:\Users\liakaz\.minikube/key.pem (1679 bytes)
I0213 23:31:31.869310    1368 provision.go:105] generating server cert: C:\Users\liakaz\.minikube\machines\server.pem ca-key=C:\Users\liakaz\.minikube\certs\ca.pem private-key=C:\Users\liakaz\.minikube\certs\ca-key.pem org=liakaz.minikube san=[192.168.49.2 localhost 127.0.0.1 minikube minikube]
I0213 23:31:32.055453    1368 provision.go:159] copyRemoteCerts
I0213 23:31:32.087494    1368 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0213 23:31:32.097959    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0213 23:31:32.433958    1368 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\liakaz\.minikube\machines\minikube\id_rsa Username:docker}
I0213 23:31:32.542931    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0213 23:31:32.583925    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\machines\server.pem --> /etc/docker/server.pem (1192 bytes)
I0213 23:31:32.613926    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0213 23:31:32.641017    1368 provision.go:85] duration metric: configureAuth took 1.1787154s
I0213 23:31:32.641017    1368 ubuntu.go:190] setting minikube options for container-runtime
I0213 23:31:32.661014    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0213 23:31:33.027066    1368 main.go:118] libmachine: Using SSH client type: native
I0213 23:31:33.028100    1368 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
I0213 23:31:33.029067    1368 main.go:118] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0213 23:31:33.207754    1368 main.go:118] libmachine: SSH cmd err, output: <nil>: overlay

I0213 23:31:33.208755    1368 ubuntu.go:71] root file system type: overlay
I0213 23:31:33.210753    1368 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0213 23:31:33.224753    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0213 23:31:33.565784    1368 main.go:118] libmachine: Using SSH client type: native
I0213 23:31:33.565784    1368 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
I0213 23:31:33.566786    1368 main.go:118] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0213 23:31:33.741757    1368 main.go:118] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0213 23:31:33.755781    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0213 23:31:34.113486    1368 main.go:118] libmachine: Using SSH client type: native
I0213 23:31:34.114487    1368 main.go:118] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b7420] 0x7b73f0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
I0213 23:31:34.115487    1368 main.go:118] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0213 23:31:35.598620    1368 main.go:118] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service       2020-03-10 19:42:48.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2021-02-13 04:58:44.964632995 +0000
@@ -8,24 +8,22 @@

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0213 23:31:35.600615    1368 machine.go:91] provisioned docker machine in 5.4086074s
I0213 23:31:35.606616    1368 client.go:168] LocalClient.Create took 1m6.5995713s
I0213 23:31:35.607616    1368 start.go:172] duration metric: libmachine.API.Create for "minikube" took 1m6.6015704s
I0213 23:31:35.608626    1368 start.go:268] post-start starting for "minikube" (driver="docker")
I0213 23:31:35.609086    1368 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0213 23:31:35.635618    1368 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0213 23:31:35.645617    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0213 23:31:36.001010    1368 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\liakaz\.minikube\machines\minikube\id_rsa Username:docker}
I0213 23:31:36.126981    1368 ssh_runner.go:148] Run: cat /etc/os-release
I0213 23:31:36.134982    1368 main.go:118] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0213 23:31:36.135978    1368 main.go:118] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0213 23:31:36.136978    1368 main.go:118] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0213 23:31:36.136978    1368 info.go:97] Remote host: Ubuntu 20.04 LTS
I0213 23:31:36.137979    1368 filesync.go:118] Scanning C:\Users\liakaz\.minikube\addons for local assets ...
I0213 23:31:36.138980    1368 filesync.go:118] Scanning C:\Users\liakaz\.minikube\files for local assets ...
I0213 23:31:36.139983    1368 start.go:271] post-start completed in 530.8988ms
I0213 23:31:36.192982    1368 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0213 23:31:36.564980    1368 profile.go:150] Saving config to C:\Users\liakaz\.minikube\profiles\minikube\config.json ...
I0213 23:31:36.607003    1368 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0213 23:31:36.623977    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0213 23:31:36.967121    1368 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\liakaz\.minikube\machines\minikube\id_rsa Username:docker}
I0213 23:31:37.080010    1368 start.go:130] duration metric: createHost completed in 1m8.1269611s
I0213 23:31:37.080010    1368 start.go:81] releasing machines lock for "minikube", held for 1m8.128968s
I0213 23:31:37.092011    1368 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0213 23:31:37.469560    1368 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0213 23:31:37.487562    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0213 23:31:37.507566    1368 ssh_runner.go:148] Run: systemctl --version
I0213 23:31:37.521556    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0213 23:31:37.843590    1368 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\liakaz\.minikube\machines\minikube\id_rsa Username:docker}
I0213 23:31:37.883590    1368 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\liakaz\.minikube\machines\minikube\id_rsa Username:docker}
I0213 23:31:38.164565    1368 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0213 23:31:38.201556    1368 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0213 23:31:38.219555    1368 cruntime.go:193] skipping containerd shutdown because we are bound to it
I0213 23:31:38.250558    1368 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0213 23:31:38.294557    1368 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0213 23:31:38.333145    1368 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0213 23:31:38.451061    1368 ssh_runner.go:148] Run: sudo systemctl start docker
I0213 23:31:38.481060    1368 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I0213 23:31:38.814492    1368 out.go:109] * Preparing Kubernetes v1.19.0 on Docker 19.03.8 ...
* Preparing Kubernetes v1.19.0 on Docker 19.03.8 ...
I0213 23:31:38.826455    1368 cli_runner.go:110] Run: docker exec -t minikube dig +short host.docker.internal
I0213 23:31:39.373451    1368 network.go:67] got host ip for mount in container by digging dns: 192.168.65.2
I0213 23:31:39.399451    1368 ssh_runner.go:148] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0213 23:31:39.407451    1368 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2        host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0213 23:31:39.454454    1368 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0213 23:31:39.795481    1368 preload.go:97] Checking if preload exists for k8s version v1.19.0 and runtime docker
I0213 23:31:39.796450    1368 preload.go:105] Found local preload: C:\Users\liakaz\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v6-v1.19.0-docker-overlay2-amd64.tar.lz4
I0213 23:31:39.818449    1368 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0213 23:31:39.888496    1368 docker.go:381] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/kube-proxy:v1.19.0
k8s.gcr.io/kube-apiserver:v1.19.0
k8s.gcr.io/kube-controller-manager:v1.19.0
k8s.gcr.io/kube-scheduler:v1.19.0
k8s.gcr.io/etcd:3.4.9-1
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0213 23:31:39.889496    1368 docker.go:319] Images already preloaded, skipping extraction
I0213 23:31:39.906493    1368 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0213 23:31:39.968496    1368 docker.go:381] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/kube-proxy:v1.19.0
k8s.gcr.io/kube-apiserver:v1.19.0
k8s.gcr.io/kube-controller-manager:v1.19.0
k8s.gcr.io/kube-scheduler:v1.19.0
k8s.gcr.io/etcd:3.4.9-1
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0213 23:31:39.968496    1368 cache_images.go:74] Images are preloaded, skipping loading
I0213 23:31:39.986525    1368 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0213 23:31:40.058467    1368 cni.go:74] Creating CNI manager for ""
I0213 23:31:40.058467    1368 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0213 23:31:40.061511    1368 kubeadm.go:84] Using pod CIDR:
I0213 23:31:40.068460    1368 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.19.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0213 23:31:40.069461    1368 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 192.168.49.2:10249

I0213 23:31:40.071467    1368 kubeadm.go:805] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.19.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.19.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0213 23:31:40.097485    1368 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.0
I0213 23:31:40.111464    1368 binaries.go:43] Found k8s binaries, skipping transfer
I0213 23:31:40.135493    1368 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0213 23:31:40.147467    1368 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0213 23:31:40.169461    1368 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0213 23:31:40.190494    1368 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1787 bytes)
I0213 23:31:40.243465    1368 ssh_runner.go:148] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0213 23:31:40.251470    1368 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2       control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0213 23:31:40.275462    1368 certs.go:52] Setting up C:\Users\liakaz\.minikube\profiles\minikube for IP: 192.168.49.2
I0213 23:31:40.295461    1368 certs.go:169] skipping minikubeCA CA generation: C:\Users\liakaz\.minikube\ca.key
I0213 23:31:40.318465    1368 certs.go:169] skipping proxyClientCA CA generation: C:\Users\liakaz\.minikube\proxy-client-ca.key
I0213 23:31:40.319470    1368 certs.go:273] generating minikube-user signed cert: C:\Users\liakaz\.minikube\profiles\minikube\client.key
I0213 23:31:40.320462    1368 crypto.go:69] Generating cert C:\Users\liakaz\.minikube\profiles\minikube\client.crt with IP's: []
I0213 23:31:40.645162    1368 crypto.go:157] Writing cert to C:\Users\liakaz\.minikube\profiles\minikube\client.crt ...
I0213 23:31:40.645162    1368 lock.go:35] WriteFile acquiring C:\Users\liakaz\.minikube\profiles\minikube\client.crt: {Name:mk6a35f8003dda735ec6e2cc09b1b5ddb6ff0551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 23:31:40.657116    1368 crypto.go:165] Writing key to C:\Users\liakaz\.minikube\profiles\minikube\client.key ...
I0213 23:31:40.657116    1368 lock.go:35] WriteFile acquiring C:\Users\liakaz\.minikube\profiles\minikube\client.key: {Name:mk5ff8121e44623c6b6f2785048704a0ae598a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 23:31:40.668116    1368 certs.go:273] generating minikube signed cert: C:\Users\liakaz\.minikube\profiles\minikube\apiserver.key.dd3b5fb2
I0213 23:31:40.668116    1368 crypto.go:69] Generating cert C:\Users\liakaz\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0213 23:31:40.787113    1368 crypto.go:157] Writing cert to C:\Users\liakaz\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 ...
I0213 23:31:40.787113    1368 lock.go:35] WriteFile acquiring C:\Users\liakaz\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2: {Name:mka0d6618400069965614997d53e4272d8164a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 23:31:40.799117    1368 crypto.go:165] Writing key to C:\Users\liakaz\.minikube\profiles\minikube\apiserver.key.dd3b5fb2 ...
I0213 23:31:40.799117    1368 lock.go:35] WriteFile acquiring C:\Users\liakaz\.minikube\profiles\minikube\apiserver.key.dd3b5fb2: {Name:mk22e40737fc7cb65b356a4b70e421bdae83dc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 23:31:40.819116    1368 certs.go:284] copying C:\Users\liakaz\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 -> C:\Users\liakaz\.minikube\profiles\minikube\apiserver.crt
I0213 23:31:40.822119    1368 certs.go:288] copying C:\Users\liakaz\.minikube\profiles\minikube\apiserver.key.dd3b5fb2 -> C:\Users\liakaz\.minikube\profiles\minikube\apiserver.key
I0213 23:31:40.824115    1368 certs.go:273] generating aggregator signed cert: C:\Users\liakaz\.minikube\profiles\minikube\proxy-client.key
I0213 23:31:40.825115    1368 crypto.go:69] Generating cert C:\Users\liakaz\.minikube\profiles\minikube\proxy-client.crt with IP's: []
I0213 23:31:40.940149    1368 crypto.go:157] Writing cert to C:\Users\liakaz\.minikube\profiles\minikube\proxy-client.crt ...
I0213 23:31:40.940149    1368 lock.go:35] WriteFile acquiring C:\Users\liakaz\.minikube\profiles\minikube\proxy-client.crt: {Name:mk4ba596a05be892cbbbebbeee987f73782c9f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 23:31:40.952114    1368 crypto.go:165] Writing key to C:\Users\liakaz\.minikube\profiles\minikube\proxy-client.key ...
I0213 23:31:40.952114    1368 lock.go:35] WriteFile acquiring C:\Users\liakaz\.minikube\profiles\minikube\proxy-client.key: {Name:mk660fe463aa74977ba4635514bb3a8a27113f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 23:31:40.964119    1368 certs.go:348] found cert: C:\Users\liakaz\.minikube\certs\C:\Users\liakaz\.minikube\certs\ca-key.pem (1679 bytes)
I0213 23:31:40.965117    1368 certs.go:348] found cert: C:\Users\liakaz\.minikube\certs\C:\Users\liakaz\.minikube\certs\ca.pem (1078 bytes)
I0213 23:31:40.966132    1368 certs.go:348] found cert: C:\Users\liakaz\.minikube\certs\C:\Users\liakaz\.minikube\certs\cert.pem (1123 bytes)
I0213 23:31:40.968116    1368 certs.go:348] found cert: C:\Users\liakaz\.minikube\certs\C:\Users\liakaz\.minikube\certs\key.pem (1679 bytes)
I0213 23:31:40.971121    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0213 23:31:41.003124    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0213 23:31:41.031119    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0213 23:31:41.057115    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0213 23:31:41.085146    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0213 23:31:41.114253    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0213 23:31:41.143283    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0213 23:31:41.174251    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0213 23:31:41.208254    1368 ssh_runner.go:215] scp C:\Users\liakaz\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0213 23:31:41.250251    1368 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0213 23:31:41.318250    1368 ssh_runner.go:148] Run: openssl version
I0213 23:31:41.364251    1368 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0213 23:31:41.401248    1368 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0213 23:31:41.409253    1368 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Feb 14  2021 /usr/share/ca-certificates/minikubeCA.pem
I0213 23:31:41.439249    1368 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0213 23:31:41.472249    1368 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0213 23:31:41.486250    1368 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:1991 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I0213 23:31:41.497252    1368 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0213 23:31:41.577253    1368 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0213 23:31:41.615106    1368 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0213 23:31:41.626103    1368 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0213 23:31:41.650107    1368 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0213 23:31:41.663106    1368 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0213 23:31:41.663106    1368 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0213 23:31:43.517816    1368 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (1.8496826s)
W0213 23:31:43.518812    1368 out.go:145] ! initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"

stderr:
W0213 04:58:53.244092     713 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase certs/ca: failure loading ca certificate: failed to load certificate: the certificate is not valid yet
To see the stack trace of this error execute with --v=5 or higher

! initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"

stderr:
W0213 04:58:53.244092     713 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase certs/ca: failure loading ca certificate: failed to load certificate: the certificate is not valid yet
To see the stack trace of this error execute with --v=5 or higher

I0213 23:31:43.521815    1368 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm reset --cri-socket npipe:////./pipe/docker_engine --force"
I0213 23:31:43.634853    1368 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0213 23:31:43.662844    1368 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0213 23:31:43.717451    1368 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0213 23:31:43.745487    1368 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0213 23:31:43.757452    1368 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0213 23:31:43.758452    1368 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0213 23:31:44.823984    1368 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (1.0655352s)
I0213 23:31:44.823984    1368 kubeadm.go:326] StartCluster complete in 3.3382777s
I0213 23:31:44.843985    1368 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 23:31:44.898981    1368 logs.go:206] 0 containers: []
W0213 23:31:44.898981    1368 logs.go:208] No container was found matching "kube-apiserver"
I0213 23:31:44.917322    1368 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 23:31:44.975327    1368 logs.go:206] 0 containers: []
W0213 23:31:44.975327    1368 logs.go:208] No container was found matching "etcd"
I0213 23:31:44.988319    1368 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 23:31:45.042228    1368 logs.go:206] 0 containers: []
W0213 23:31:45.042228    1368 logs.go:208] No container was found matching "coredns"
I0213 23:31:45.055230    1368 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 23:31:45.121229    1368 logs.go:206] 0 containers: []
W0213 23:31:45.121229    1368 logs.go:208] No container was found matching "kube-scheduler"
I0213 23:31:45.137227    1368 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 23:31:45.211234    1368 logs.go:206] 0 containers: []
W0213 23:31:45.211234    1368 logs.go:208] No container was found matching "kube-proxy"
I0213 23:31:45.227228    1368 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 23:31:45.292233    1368 logs.go:206] 0 containers: []
W0213 23:31:45.292233    1368 logs.go:208] No container was found matching "kubernetes-dashboard"
I0213 23:31:45.306226    1368 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 23:31:45.367232    1368 logs.go:206] 0 containers: []
W0213 23:31:45.367232    1368 logs.go:208] No container was found matching "storage-provisioner"
I0213 23:31:45.382229    1368 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 23:31:45.444227    1368 logs.go:206] 0 containers: []
W0213 23:31:45.444227    1368 logs.go:208] No container was found matching "kube-controller-manager"
I0213 23:31:45.446230    1368 logs.go:120] Gathering logs for kubelet ...
I0213 23:31:45.451232    1368 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 23:31:45.479755    1368 logs.go:120] Gathering logs for dmesg ...
I0213 23:31:45.480759    1368 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 23:31:45.509753    1368 logs.go:120] Gathering logs for describe nodes ...
I0213 23:31:45.509753    1368 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0213 23:31:45.793754    1368 logs.go:127] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0213 23:31:45.793754    1368 logs.go:120] Gathering logs for Docker ...
I0213 23:31:45.803756    1368 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0213 23:31:45.826755    1368 logs.go:120] Gathering logs for container status ...
I0213 23:31:45.826755    1368 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 23:31:47.912174    1368 ssh_runner.go:188] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0844259s)
W0213 23:31:47.913172    1368 out.go:257] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"

stderr:
W0213 04:58:55.124200     871 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase certs/ca: failure loading ca certificate: failed to load certificate: the certificate is not valid yet
To see the stack trace of this error execute with --v=5 or higher
W0213 23:31:47.915170    1368 out.go:145] *
*
W0213 23:31:47.917170    1368 out.go:145] X Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"

stderr:
W0213 04:58:55.124200     871 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase certs/ca: failure loading ca certificate: failed to load certificate: the certificate is not valid yet
To see the stack trace of this error execute with --v=5 or higher

X Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"

stderr:
W0213 04:58:55.124200     871 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase certs/ca: failure loading ca certificate: failed to load certificate: the certificate is not valid yet
To see the stack trace of this error execute with --v=5 or higher

W0213 23:31:47.919170    1368 out.go:145] *
*
W0213 23:31:47.920169    1368 out.go:145] * minikube is exiting due to an error. If the above message is not useful, open an issue:
* minikube is exiting due to an error. If the above message is not useful, open an issue:
W0213 23:31:47.921169    1368 out.go:145]   - https://github.com/kubernetes/minikube/issues/new/choose
  - https://github.com/kubernetes/minikube/issues/new/choose
I0213 23:31:47.968168    1368 out.go:109]

W0213 23:31:47.970168    1368 out.go:145] X Exiting due to GUEST_START: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"

stderr:
W0213 04:58:55.124200     871 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase certs/ca: failure loading ca certificate: failed to load certificate: the certificate is not valid yet
To see the stack trace of this error execute with --v=5 or higher

X Exiting due to GUEST_START: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"

stderr:
W0213 04:58:55.124200     871 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase certs/ca: failure loading ca certificate: failed to load certificate: the certificate is not valid yet
To see the stack trace of this error execute with --v=5 or higher

W0213 23:31:47.972170    1368 out.go:145] *
*
W0213 23:31:47.973174    1368 out.go:145] * If the above advice does not help, please let us know:
* If the above advice does not help, please let us know:
W0213 23:31:47.975173    1368 out.go:145]   - https://github.com/kubernetes/minikube/issues/new/choose
  - https://github.com/kubernetes/minikube/issues/new/choose
I0213 23:31:48.029174    1368 out.go:109]

minikube status
E0213 23:27:23.407172 9188 status.go:364] kubeconfig endpoint: extract IP: "minikube" does not appear in C:\Users\liakaz/.kube/config
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured

WARNING: Your kubectl is pointing to stale minikube-vm.
*

Optional: Full output of minikube logs command:

@priyawadhwa added the kind/support label on Feb 25, 2021
@priyawadhwa

Hey @liakaz, thanks for opening this issue. It looks like your kubeconfig has somehow become misconfigured:

$ minikube status
E0213 23:27:23.407172 9188 status.go:364] kubeconfig endpoint: extract IP: "minikube" does not appear in C:\Users\liakaz/.kube/config
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured

WARNING: Your kubectl is pointing to stale minikube-vm.

Could you try running:

minikube update-context

If that doesn't fix it, I'd recommend deleting the cluster and recreating it:

minikube delete
minikube start

Please comment here with the results!
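
Since kubeadm is failing with "the certificate is not valid yet", it may also be worth checking for clock skew between your Windows host and the minikube node, which can produce exactly that message. As a sketch (assuming the docker driver and the certificate path shown in your log), you could compare the node's clock with the CA certificate's validity window:

minikube ssh -- date -u
minikube ssh -- sudo openssl x509 -in /var/lib/minikube/certs/ca.crt -noout -dates

If the certificate's notBefore timestamp is ahead of the node's clock, re-syncing the Windows clock and restarting Docker Desktop before running minikube delete and minikube start again may clear it.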

@priyawadhwa added the triage/needs-information label on Feb 25, 2021
@sharifelgamal
Collaborator

Hey @liakaz, did the above suggestion help at all?

@spowelljr
Member

Hi @liakaz, we haven't heard back from you. Do you still have this issue?
There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate now.

I will close this issue for now, but feel free to reopen it when you're ready to provide more information.
