$ minikube start --force-systemd --driver docker --v=5 --alsologtostderr
I1109 16:12:06.247575 54698 out.go:192] Setting JSON to false
I1109 16:12:06.249247 54698 start.go:103] hostinfo: {"hostname":"pulsedev-VirtualBox","uptime":902,"bootTime":1604955424,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-52-generic","virtualizationSystem":"vbox","virtualizationRole":"guest","hostid":"626e746c-5c8d-47a5-8e18-79688898e9d3"}
I1109 16:12:06.249681 54698 start.go:113] virtualization: vbox guest
I1109 16:12:06.259759 54698 out.go:110] 😄  minikube v1.14.1 on Ubuntu 20.04 (vbox/amd64)
😄  minikube v1.14.1 on Ubuntu 20.04 (vbox/amd64)
I1109 16:12:06.260231 54698 driver.go:288] Setting default libvirt URI to qemu:///system
I1109 16:12:06.340901 54698 docker.go:117] docker version: linux-19.03.13
I1109 16:12:06.342378 54698 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I1109 16:12:06.495677 54698 info.go:253] docker info: {ID:5T6C:OLAT:433S:5RIO:DSAL:N4HE:6BCN:OPJO:3H7L:RJZP:JTBX:5QYG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2020-11-09 16:12:06.396842095 -0500 EST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.4.0-52-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8348520448 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:pulsedev-VirtualBox Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I1109 16:12:06.495803 54698 docker.go:147] overlay module found
I1109 16:12:06.502754 54698 out.go:110] ✨  Using the docker driver based on user configuration
✨  Using the docker driver based on user configuration
I1109 16:12:06.502873 54698 start.go:272] selected driver: docker
I1109 16:12:06.502878 54698 start.go:680] validating driver "docker" against
I1109 16:12:06.502892 54698 start.go:691] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:}
I1109 16:12:06.502952 54698 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I1109 16:12:06.641105 54698 info.go:253] docker info: {ID:5T6C:OLAT:433S:5RIO:DSAL:N4HE:6BCN:OPJO:3H7L:RJZP:JTBX:5QYG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2020-11-09 16:12:06.55960117 -0500 EST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.4.0-52-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8348520448 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:pulsedev-VirtualBox Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I1109 16:12:06.641269 54698 start_flags.go:228] no existing cluster config was found, will generate one from the flags
I1109 16:12:06.641502 54698 start_flags.go:246] Using suggested 2200MB memory alloc based on sys=7961MB, container=7961MB
I1109 16:12:06.641664 54698 start_flags.go:631] Wait components to verify : map[apiserver:true system_pods:true]
I1109 16:12:06.641683 54698 cni.go:74] Creating CNI manager for ""
I1109 16:12:06.641688 54698 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I1109 16:12:06.641693 54698 start_flags.go:358] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I1109 16:12:06.645925 54698 out.go:110] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I1109 16:12:06.698151 54698 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f in local docker daemon, skipping pull
I1109 16:12:06.698234 54698 cache.go:115] gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f exists in daemon, skipping pull
I1109 16:12:06.698247 54698 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1109 16:12:06.698274 54698 preload.go:105] Found local preload: /home/rboal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1109 16:12:06.698278 54698 cache.go:53] Caching tarball of preloaded images
I1109 16:12:06.698288 54698 preload.go:131] Found /home/rboal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1109 16:12:06.698293 54698 cache.go:56] Finished verifying existence of preloaded tar for v1.19.2 on docker
I1109 16:12:06.698449 54698 profile.go:150] Saving config to /home/rboal/.minikube/profiles/minikube/config.json ...
I1109 16:12:06.698612 54698 lock.go:36] WriteFile acquiring /home/rboal/.minikube/profiles/minikube/config.json: {Name:mk44262389a26996800ef2c8a51c0c7040735663 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1109 16:12:06.698937 54698 cache.go:182] Successfully downloaded all kic artifacts
I1109 16:12:06.699112 54698 start.go:314] acquiring machines lock for minikube: {Name:mk1d26977de1a4dab477e4fd21877588e7f39290 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1109 16:12:06.699967 54698 start.go:318] acquired machines lock for "minikube" in 783.674µs
I1109 16:12:06.700100 54698 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]} &{Name: IP: Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}
I1109 16:12:06.700204 54698 start.go:127] createHost starting for "" (driver="docker")
I1109 16:12:06.703977 54698 out.go:110] 🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
I1109 16:12:06.704563 54698 start.go:164] libmachine.API.Create for "minikube" (driver="docker")
I1109 16:12:06.704708 54698 client.go:165] LocalClient.Create starting
I1109 16:12:06.704865 54698 main.go:119] libmachine: Reading certificate data from /home/rboal/.minikube/certs/ca.pem
I1109 16:12:06.704951 54698 main.go:119] libmachine: Decoding PEM data...
I1109 16:12:06.704965 54698 main.go:119] libmachine: Parsing certificate...
I1109 16:12:06.705065 54698 main.go:119] libmachine: Reading certificate data from /home/rboal/.minikube/certs/cert.pem
I1109 16:12:06.705173 54698 main.go:119] libmachine: Decoding PEM data...
I1109 16:12:06.705184 54698 main.go:119] libmachine: Parsing certificate...
I1109 16:12:06.705451 54698 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
W1109 16:12:06.772895 54698 cli_runner.go:148] docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" returned with exit code 1
I1109 16:12:06.773320 54698 network_create.go:178] running [docker network inspect minikube] to gather additional debugging logs...
I1109 16:12:06.773339 54698 cli_runner.go:110] Run: docker network inspect minikube
W1109 16:12:06.837139 54698 cli_runner.go:148] docker network inspect minikube returned with exit code 1
I1109 16:12:06.837216 54698 network_create.go:181] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I1109 16:12:06.837233 54698 network_create.go:183] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I1109 16:12:06.837284 54698 cli_runner.go:110] Run: docker network inspect bridge --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
I1109 16:12:06.902305 54698 network_create.go:96] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ...
I1109 16:12:06.902486 54698 cli_runner.go:110] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube -o com.docker.network.driver.mtu=1500
I1109 16:12:07.079230 54698 kic.go:93] calculated static IP "192.168.49.2" for the "minikube" container
I1109 16:12:07.079484 54698 cli_runner.go:110] Run: docker ps -a --format {{.Names}}
I1109 16:12:07.145068 54698 cli_runner.go:110] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I1109 16:12:07.209515 54698 oci.go:102] Successfully created a docker volume minikube
I1109 16:12:07.209746 54698 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -d /var/lib
I1109 16:12:08.710686 54698 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -d /var/lib: (1.500818465s)
I1109 16:12:08.710823 54698 oci.go:106] Successfully prepared a docker volume minikube
W1109 16:12:08.710855 54698 oci.go:153] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1109 16:12:08.710896 54698 cli_runner.go:110] Run: docker info --format "'{{json .SecurityOptions}}'"
I1109 16:12:08.710870 54698 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1109 16:12:08.711123 54698 preload.go:105] Found local preload: /home/rboal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1109 16:12:08.711128 54698 kic.go:148] Starting extracting preloaded images to volume ...
I1109 16:12:08.711163 54698 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/rboal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -I lz4 -xvf /preloaded.tar -C /extractDir
I1109 16:12:08.918818 54698 cli_runner.go:110] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f
I1109 16:12:09.920121 54698 cli_runner.go:154] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f: (1.001180196s)
I1109 16:12:09.920360 54698 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Running}}
I1109 16:12:10.016802 54698 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1109 16:12:10.078573 54698 cli_runner.go:110] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I1109 16:12:10.350282 54698 oci.go:245] the created container "minikube" has a running status.
I1109 16:12:10.350522 54698 kic.go:179] Creating ssh key for kic: /home/rboal/.minikube/machines/minikube/id_rsa...
I1109 16:12:10.584239 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1109 16:12:10.584408 54698 kic_runner.go:179] docker (temp): /home/rboal/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1109 16:12:10.833341 54698 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1109 16:12:10.901850 54698 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1109 16:12:10.901864 54698 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I1109 16:12:19.458673 54698 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/rboal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -I lz4 -xvf /preloaded.tar -C /extractDir: (10.747483389s)
I1109 16:12:19.458745 54698 kic.go:157] duration metric: took 10.747614 seconds to extract preloaded images to volume
I1109 16:12:19.458795 54698 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1109 16:12:19.521925 54698 machine.go:88] provisioning docker machine ...
I1109 16:12:19.521954 54698 ubuntu.go:166] provisioning hostname "minikube"
I1109 16:12:19.522001 54698 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1109 16:12:19.589842 54698 main.go:119] libmachine: Using SSH client type: native
I1109 16:12:19.590138 54698 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 [] 0s} 127.0.0.1 32779 }
I1109 16:12:19.590204 54698 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1109 16:12:19.789509 54698 main.go:119] libmachine: SSH cmd err, output: : minikube
I1109 16:12:19.789645 54698 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1109 16:12:19.846394 54698 main.go:119] libmachine: Using SSH client type: native
I1109 16:12:19.846571 54698 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 [] 0s} 127.0.0.1 32779 }
I1109 16:12:19.846585 54698 main.go:119] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I1109 16:12:20.026322 54698 main.go:119] libmachine: SSH cmd err, output: :
I1109 16:12:20.026452 54698 ubuntu.go:172] set auth options {CertDir:/home/rboal/.minikube CaCertPath:/home/rboal/.minikube/certs/ca.pem CaPrivateKeyPath:/home/rboal/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/rboal/.minikube/machines/server.pem ServerKeyPath:/home/rboal/.minikube/machines/server-key.pem ClientKeyPath:/home/rboal/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/rboal/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/rboal/.minikube}
I1109 16:12:20.026557 54698 ubuntu.go:174] setting up certificates
I1109 16:12:20.026577 54698 provision.go:82] configureAuth start
I1109 16:12:20.026630 54698 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1109 16:12:20.106403 54698 provision.go:131] copyHostCerts
I1109 16:12:20.106432 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/certs/ca.pem -> /home/rboal/.minikube/ca.pem
I1109 16:12:20.106486 54698 exec_runner.go:91] found /home/rboal/.minikube/ca.pem, removing ...
I1109 16:12:20.111658 54698 exec_runner.go:98] cp: /home/rboal/.minikube/certs/ca.pem --> /home/rboal/.minikube/ca.pem (1074 bytes)
I1109 16:12:20.111738 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/certs/cert.pem -> /home/rboal/.minikube/cert.pem
I1109 16:12:20.111758 54698 exec_runner.go:91] found /home/rboal/.minikube/cert.pem, removing ...
I1109 16:12:20.111783 54698 exec_runner.go:98] cp: /home/rboal/.minikube/certs/cert.pem --> /home/rboal/.minikube/cert.pem (1119 bytes)
I1109 16:12:20.111820 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/certs/key.pem -> /home/rboal/.minikube/key.pem
I1109 16:12:20.111838 54698 exec_runner.go:91] found /home/rboal/.minikube/key.pem, removing ...
I1109 16:12:20.111863 54698 exec_runner.go:98] cp: /home/rboal/.minikube/certs/key.pem --> /home/rboal/.minikube/key.pem (1679 bytes)
I1109 16:12:20.111900 54698 provision.go:105] generating server cert: /home/rboal/.minikube/machines/server.pem ca-key=/home/rboal/.minikube/certs/ca.pem private-key=/home/rboal/.minikube/certs/ca-key.pem org=rboal.minikube san=[192.168.49.2 localhost 127.0.0.1 minikube minikube]
I1109 16:12:20.367324 54698 provision.go:159] copyRemoteCerts
I1109 16:12:20.367767 54698 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1109 16:12:20.368246 54698 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1109 16:12:20.437885 54698 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/rboal/.minikube/machines/minikube/id_rsa Username:docker}
I1109 16:12:20.549238 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/machines/server.pem -> /etc/docker/server.pem
I1109 16:12:20.549284 54698 ssh_runner.go:215] scp /home/rboal/.minikube/machines/server.pem --> /etc/docker/server.pem (1188 bytes)
I1109 16:12:20.572456 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1109 16:12:20.572502 54698 ssh_runner.go:215] scp /home/rboal/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1109 16:12:20.603866 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1109 16:12:20.604032 54698 ssh_runner.go:215] scp /home/rboal/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I1109 16:12:20.638191 54698 provision.go:85] duration metric: configureAuth took 611.581533ms
I1109 16:12:20.638713 54698 ubuntu.go:190] setting minikube options for container-runtime
I1109 16:12:20.638909 54698 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1109 16:12:20.696037 54698 main.go:119] libmachine: Using SSH client type: native
I1109 16:12:20.696244 54698 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 [] 0s} 127.0.0.1 32779 }
I1109 16:12:20.696323 54698 main.go:119] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1109 16:12:20.862842 54698 main.go:119] libmachine: SSH cmd err, output: : overlay
I1109 16:12:20.862913 54698 ubuntu.go:71] root file system type: overlay
I1109 16:12:20.863343 54698 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I1109 16:12:20.863493 54698 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1109 16:12:20.933102 54698 main.go:119] libmachine: Using SSH client type: native
I1109 16:12:20.933270 54698 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 [] 0s} 127.0.0.1 32779 }
I1109 16:12:20.933383 54698 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1109 16:12:21.117450 54698 main.go:119] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1109 16:12:21.117761 54698 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1109 16:12:21.181253 54698 main.go:119] libmachine: Using SSH client type: native
I1109 16:12:21.181483 54698 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 [] 0s} 127.0.0.1 32779 }
I1109 16:12:21.181570 54698 main.go:119] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1109 16:12:22.105566 54698 main.go:119] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2020-03-10 19:42:48.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2020-11-09 21:12:21.110081483 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
I1109 16:12:22.105648 54698 machine.go:91] provisioned docker machine in 2.583706251s
I1109 16:12:22.105658 54698 client.go:168] LocalClient.Create took 15.400835657s
I1109 16:12:22.105667 54698 start.go:172] duration metric: libmachine.API.Create for "minikube" took 15.401105223s
I1109 16:12:22.105673 54698 start.go:268] post-start starting for "minikube" (driver="docker")
I1109 16:12:22.105678 54698 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1109 16:12:22.105778 54698 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1109 16:12:22.105803 54698 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1109 16:12:22.170384 54698 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/rboal/.minikube/machines/minikube/id_rsa Username:docker}
I1109 16:12:22.277897 54698 ssh_runner.go:148] Run: cat /etc/os-release
I1109 16:12:22.282466 54698 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1109 16:12:22.282496 54698 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1109 16:12:22.282508 54698 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1109 16:12:22.282516 54698 info.go:97] Remote host: Ubuntu 20.04 LTS
I1109 16:12:22.282527 54698 filesync.go:118] Scanning /home/rboal/.minikube/addons for local assets ...
I1109 16:12:22.286105 54698 filesync.go:118] Scanning /home/rboal/.minikube/files for local assets ...
I1109 16:12:22.287229 54698 start.go:271] post-start completed in 181.547707ms
I1109 16:12:22.287612 54698 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1109 16:12:22.358748 54698 profile.go:150] Saving config to /home/rboal/.minikube/profiles/minikube/config.json ...
I1109 16:12:22.361320 54698 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1109 16:12:22.361408 54698 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1109 16:12:22.424670 54698 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/rboal/.minikube/machines/minikube/id_rsa Username:docker}
I1109 16:12:22.526056 54698 start.go:130] duration metric: createHost completed in 15.825840551s
I1109 16:12:22.526192 54698 start.go:81] releasing machines lock for "minikube", held for 15.82609864s
I1109 16:12:22.526270 54698 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1109 16:12:22.585659 54698 ssh_runner.go:148] Run: systemctl --version
I1109 16:12:22.585813 54698 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I1109 16:12:22.586245 54698 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1109 16:12:22.586203 54698 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1109 16:12:22.663903 54698 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/rboal/.minikube/machines/minikube/id_rsa Username:docker}
I1109 16:12:22.674392 54698 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/rboal/.minikube/machines/minikube/id_rsa Username:docker}
I1109 16:12:22.782402 54698 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I1109 16:12:23.321824 54698 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I1109 16:12:23.337488 54698 cruntime.go:193] skipping containerd shutdown because we are bound to it
I1109 16:12:23.337532 54698 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I1109 16:12:23.354205 54698 docker.go:288] Forcing docker to use systemd as cgroup manager...
I1109 16:12:23.354349 54698 ssh_runner.go:215] scp memory --> /etc/docker/daemon.json (143 bytes)
I1109 16:12:23.378104 54698 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I1109 16:12:23.475052 54698 ssh_runner.go:148] Run: sudo systemctl restart docker
I1109 16:12:24.065296 54698 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I1109 16:12:24.152132 54698 out.go:110] 🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
I1109 16:12:24.152241 54698 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
I1109 16:12:24.213647 54698 ssh_runner.go:148] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1109 16:12:24.220948 54698 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I1109 16:12:24.233812 54698 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1109 16:12:24.234380 54698 preload.go:105] Found local preload: /home/rboal/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1109 16:12:24.234618 54698 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1109 16:12:24.296352 54698 docker.go:381] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I1109 16:12:24.296892 54698 docker.go:319] Images already preloaded, skipping extraction
I1109 16:12:24.296931 54698 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1109 16:12:24.360852 54698 docker.go:381] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I1109 16:12:24.360946 54698 cache_images.go:74] Images are preloaded, skipping loading
I1109 16:12:24.360990 54698 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I1109 16:12:24.447760 54698 cni.go:74] Creating CNI manager for ""
I1109 16:12:24.450772 54698 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I1109 16:12:24.450840 54698 kubeadm.go:84] Using pod CIDR: 
I1109 16:12:24.450860 54698 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.19.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I1109 16:12:24.451029 54698 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.19.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 192.168.49.2:10249
I1109 16:12:24.451459 54698 kubeadm.go:822] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.19.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1109 16:12:24.451524 54698 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.2
I1109 16:12:24.461507 54698 binaries.go:44] Found k8s binaries, skipping transfer
I1109 16:12:24.461605 54698 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1109 16:12:24.471964 54698 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I1109 16:12:24.500247 54698 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I1109 16:12:24.520163 54698 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1786 bytes)
I1109 16:12:24.548235 54698 ssh_runner.go:148] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1109 16:12:24.555305 54698 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I1109 16:12:24.576531 54698 certs.go:52] Setting up /home/rboal/.minikube/profiles/minikube for IP: 192.168.49.2
I1109 16:12:24.576635 54698 certs.go:169] skipping minikubeCA CA generation: /home/rboal/.minikube/ca.key
I1109 16:12:24.576649 54698 certs.go:169] skipping proxyClientCA CA generation: /home/rboal/.minikube/proxy-client-ca.key
I1109 16:12:24.576685 54698 certs.go:273] generating minikube-user signed cert: /home/rboal/.minikube/profiles/minikube/client.key
I1109 16:12:24.576744 54698 crypto.go:69] Generating cert /home/rboal/.minikube/profiles/minikube/client.crt with IP's: []
I1109 16:12:24.842844 54698 crypto.go:157] Writing cert to /home/rboal/.minikube/profiles/minikube/client.crt ...
I1109 16:12:24.843310 54698 lock.go:36] WriteFile acquiring /home/rboal/.minikube/profiles/minikube/client.crt: {Name:mkd5880447b6571aaff02f703856768e7f159193 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1109 16:12:24.843936 54698 crypto.go:165] Writing key to /home/rboal/.minikube/profiles/minikube/client.key ...
I1109 16:12:24.844020 54698 lock.go:36] WriteFile acquiring /home/rboal/.minikube/profiles/minikube/client.key: {Name:mkb5c518485a0491d56c08a3e626b4960167a5e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1109 16:12:24.844425 54698 certs.go:273] generating minikube signed cert: /home/rboal/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I1109 16:12:24.844496 54698 crypto.go:69] Generating cert /home/rboal/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1109 16:12:25.066048 54698 crypto.go:157] Writing cert to /home/rboal/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I1109 16:12:25.066468 54698 lock.go:36] WriteFile acquiring /home/rboal/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkfe0ba1b778bdd2602b61c0f4de6df7e466f364 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1109 16:12:25.068499 54698 crypto.go:165] Writing key to /home/rboal/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I1109 16:12:25.068570 54698 lock.go:36] WriteFile acquiring /home/rboal/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkd60715eeaccdc16abb4c4971efe0b24a356eef Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1109 16:12:25.068642 54698 certs.go:284] copying /home/rboal/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/rboal/.minikube/profiles/minikube/apiserver.crt
I1109 16:12:25.068692 54698 certs.go:288] copying /home/rboal/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/rboal/.minikube/profiles/minikube/apiserver.key
I1109 16:12:25.068732 54698 certs.go:273] generating aggregator signed cert: /home/rboal/.minikube/profiles/minikube/proxy-client.key
I1109 16:12:25.068797 54698 crypto.go:69] Generating cert /home/rboal/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I1109 16:12:25.247904 54698 crypto.go:157] Writing cert to /home/rboal/.minikube/profiles/minikube/proxy-client.crt ...
I1109 16:12:25.247996 54698 lock.go:36] WriteFile acquiring /home/rboal/.minikube/profiles/minikube/proxy-client.crt: {Name:mk7a74539f967147e294d8d41f8fa100f82eebf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1109 16:12:25.248371 54698 crypto.go:165] Writing key to /home/rboal/.minikube/profiles/minikube/proxy-client.key ...
I1109 16:12:25.248443 54698 lock.go:36] WriteFile acquiring /home/rboal/.minikube/profiles/minikube/proxy-client.key: {Name:mk1ab7b6b433b773ed18768d8b5ee8d9c9c479e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1109 16:12:25.248742 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1109 16:12:25.248763 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1109 16:12:25.248776 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1109 16:12:25.248788 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1109 16:12:25.248799 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1109 16:12:25.248811 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1109 16:12:25.248824 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1109 16:12:25.248837 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1109 16:12:25.251294 54698 certs.go:348] found cert: /home/rboal/.minikube/certs/home/rboal/.minikube/certs/ca-key.pem (1675 bytes)
I1109 16:12:25.251422 54698 certs.go:348] found cert: /home/rboal/.minikube/certs/home/rboal/.minikube/certs/ca.pem (1074 bytes)
I1109 16:12:25.251451 54698 certs.go:348] found cert: /home/rboal/.minikube/certs/home/rboal/.minikube/certs/cert.pem (1119 bytes)
I1109 16:12:25.251474 54698 certs.go:348] found cert: /home/rboal/.minikube/certs/home/rboal/.minikube/certs/key.pem (1679 bytes)
I1109 16:12:25.251501 54698 vm_assets.go:96] NewFileAsset: /home/rboal/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1109 16:12:25.252267 54698 ssh_runner.go:215] scp /home/rboal/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1109 16:12:25.277967 54698 ssh_runner.go:215] scp /home/rboal/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1109 16:12:25.309102 54698 ssh_runner.go:215] scp /home/rboal/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1109 16:12:25.346822 54698 ssh_runner.go:215] scp /home/rboal/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1109 16:12:25.381368 54698 ssh_runner.go:215] scp /home/rboal/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1109 16:12:25.411658 54698 ssh_runner.go:215] scp /home/rboal/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1109 16:12:25.441388 54698 ssh_runner.go:215] scp /home/rboal/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1109 16:12:25.472164 54698 ssh_runner.go:215] scp /home/rboal/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1109 16:12:25.506365 54698 ssh_runner.go:215] scp /home/rboal/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1109 16:12:25.541051 54698 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I1109 16:12:25.557660 54698 ssh_runner.go:148] Run: openssl version
I1109 16:12:25.565872 54698 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1109 16:12:25.583874 54698 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1109 16:12:25.589975 54698 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Oct 26 21:44 /usr/share/ca-certificates/minikubeCA.pem
I1109 16:12:25.590061 54698 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1109 16:12:25.596083 54698 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1109 16:12:25.605830 54698 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I1109 16:12:25.605963 54698 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1109 16:12:25.672049 54698 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1109 16:12:25.680995 54698 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1109 16:12:25.690729 54698 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I1109 16:12:25.690874 54698 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1109 16:12:25.704066 54698 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1109 16:12:25.704318 54698 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1109 16:14:24.434300 54698 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (1m58.729961391s)
W1109 16:14:24.434696 54698 out.go:146] 💢  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-52-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'

stderr:
W1109 21:12:25.910140     856 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-52-generic\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
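Because the docker driver runs the node as a container, kubeadm's suggested checks have to be executed inside it rather than on the VirtualBox host. A sketch of the same probes over `minikube ssh` (the first command opens a shell inside the node; the rest run there, and output will vary):

$ minikube ssh
$ sudo systemctl status kubelet
$ sudo journalctl -xeu kubelet --no-pager | tail -n 50
$ curl -sSL http://localhost:10248/healthz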
I1109 16:14:24.434948   54698 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I1109 16:14:26.024283   54698 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.584737934s)
I1109 16:14:26.024397   54698 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I1109 16:14:26.040984   54698 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1109 16:14:26.114083   54698 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I1109 16:14:26.114128   54698 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1109 16:14:26.126727   54698 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1109 16:14:26.126760   54698 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1109 16:16:23.039901   54698 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (1m56.913125427s)
I1109 16:16:23.040008   54698 kubeadm.go:326] StartCluster complete in 3m57.434171825s
I1109 16:16:23.040105   54698 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1109 16:16:23.111959   54698 logs.go:206] 0 containers: []
W1109 16:16:23.112072   54698 logs.go:208] No container was found matching "kube-apiserver"
I1109 16:16:23.112109   54698 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1109 16:16:23.176536   54698 logs.go:206] 0 containers: []
W1109 16:16:23.176603   54698 logs.go:208] No container was found matching "etcd"
I1109 16:16:23.176638   54698 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1109 16:16:23.237223   54698 logs.go:206] 0 containers: []
W1109 16:16:23.237425   54698 logs.go:208] No container was found matching "coredns"
I1109 16:16:23.237462   54698 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1109 16:16:23.304644   54698 logs.go:206] 0 containers: []
W1109 16:16:23.304723   54698 logs.go:208] No container was found matching "kube-scheduler"
I1109 16:16:23.304758   54698 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1109 16:16:23.373382   54698 logs.go:206] 0 containers: []
W1109 16:16:23.373446   54698 logs.go:208] No container was found matching "kube-proxy"
I1109 16:16:23.373484   54698 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1109 16:16:23.441163   54698 logs.go:206] 0 containers: []
W1109 16:16:23.441240   54698 logs.go:208] No container was found matching "kubernetes-dashboard"
I1109 16:16:23.441280   54698 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1109 16:16:23.528440   54698 logs.go:206] 0 containers: []
W1109 16:16:23.529959   54698 logs.go:208] No container was found matching "storage-provisioner"
I1109 16:16:23.530007   54698 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1109 16:16:23.600963   54698 logs.go:206] 0 containers: []
W1109 16:16:23.601042   54698 logs.go:208] No container was found matching "kube-controller-manager"
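Every control-plane probe above came back empty, which is consistent with the kubelet never answering its health check: no static pod containers were ever started. The same survey can be repeated by hand, following the grep example kubeadm printed earlier (a sketch; empty output means nothing was created):

$ minikube ssh -- docker ps -a | grep kube | grep -v pause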
I1109 16:16:23.601054   54698 logs.go:120] Gathering logs for describe nodes ...
I1109 16:16:23.601063   54698 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1109 16:16:23.748008   54698 logs.go:127] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1109 16:16:23.748079   54698 logs.go:120] Gathering logs for Docker ...
I1109 16:16:23.748090   54698 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1109 16:16:23.768971   54698 logs.go:120] Gathering logs for container status ...
I1109 16:16:23.769088   54698 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1109 16:16:24.028508   54698 logs.go:120] Gathering logs for kubelet ...
I1109 16:16:24.028626   54698 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1109 16:16:24.070442   54698 logs.go:120] Gathering logs for dmesg ...
I1109 16:16:24.070528   54698 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
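The journalctl/dmesg collection above is the same material the `minikube logs` command gathers, so the diagnostics can be reproduced after the run without re-reading this trace (a sketch; the file name is illustrative):

$ minikube logs --file=minikube.log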
W1109 16:16:24.094807   54698 out.go:258] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
[preflight] The system verification failed.
Printing the output from the verification:
KERNEL_VERSION: 5.4.0-52-generic
DOCKER_VERSION: 19.03.8
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'

stderr:
W1109 21:14:26.329195    4308 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-52-generic\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
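Independent of the root cause, two of the stderr warnings are directly actionable: swap is still enabled on the VirtualBox host (kubeadm does not support it) and the kubelet unit is not enabled inside the node. Hedged one-liners for both; the modprobe/SystemVerification warning is generally benign on Ubuntu's generic kernel:

$ sudo swapoff -a
$ minikube ssh -- sudo systemctl enable kubelet.service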
W1109 16:16:24.095521   54698 out.go:146]
W1109 16:16:24.096299   54698 out.go:146] 💣 Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1109 16:16:24.097513   54698 out.go:146]
W1109 16:16:24.097908   54698 out.go:146] 😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
W1109 16:16:24.097932   54698 out.go:146] 👉 https://github.com/kubernetes/minikube/issues/new/choose
I1109 16:16:24.124060   54698 out.go:110]
W1109 16:16:24.124311   54698 out.go:146] ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1109 16:16:24.124571   54698 out.go:146] 💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1109 16:16:24.124609   54698 out.go:146] 🐿 Related issue: https://github.com/kubernetes/minikube/issues/4172
I1109 16:16:24.135107   54698 out.go:110]
rboal@pulsedev-VirtualBox:~/Desktop$
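Acting on the 💡 suggestion above would look something like this on a clean profile (a sketch; the trace does not confirm that the kubelet cgroup driver is actually the culprit, so treat it as the first thing to try):

$ minikube delete
$ minikube start --driver=docker --force-systemd --extra-config=kubelet.cgroup-driver=systemd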