*
* ==> Audit <==
* |---------|-------------------------------------------------|----------|------|---------|-------------------------------|-------------------------------|
| Command |                      Args                       | Profile  | User | Version |          Start Time           |           End Time            |
|---------|-------------------------------------------------|----------|------|---------|-------------------------------|-------------------------------|
| start   |                                                 | minikube | ben  | v1.25.2 | Fri, 25 Mar 2022 16:33:41 CST | Fri, 25 Mar 2022 16:36:59 CST |
| stop    |                                                 | minikube | ben  | v1.25.2 | Fri, 25 Mar 2022 19:19:58 CST | Fri, 25 Mar 2022 19:20:14 CST |
| start   |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 12:03:41 CST | Sun, 03 Apr 2022 12:04:19 CST |
| stop    |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 12:10:16 CST | Sun, 03 Apr 2022 12:10:30 CST |
| start   |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 12:11:55 CST | Sun, 03 Apr 2022 12:12:24 CST |
| start   | --apiserver-ips=192.168.10.227                  | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 12:21:40 CST | Sun, 03 Apr 2022 12:21:50 CST |
| ip      |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 12:28:16 CST | Sun, 03 Apr 2022 12:28:17 CST |
| stop    |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 12:28:32 CST | Sun, 03 Apr 2022 12:28:46 CST |
| ip      |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 14:06:01 CST | Sun, 03 Apr 2022 14:06:02 CST |
| stop    |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 14:06:21 CST | Sun, 03 Apr 2022 14:06:36 CST |
| stop    |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 14:28:12 CST | Sun, 03 Apr 2022 14:28:26 CST |
| start   | --extra-config=apiserver.enable-swagger-ui=true | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 14:30:40 CST | Sun, 03 Apr 2022 14:31:03 CST |
|         | --apiserver-ips=192.168.10.227                  |          |      |         |                               |                               |
| ip      |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 14:31:30 CST | Sun, 03 Apr 2022 14:31:30 CST |
| ip      |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 14:32:37 CST | Sun, 03 Apr 2022 14:32:37 CST |
| stop    |                                                 | minikube | ben  | v1.25.2 | Sun, 03 Apr 2022 23:58:51 CST | Sun, 03 Apr 2022 23:59:08 CST |
| start   | --extra-config=apiserver.enable-swagger-ui=true | minikube | ben  | v1.25.2 | Sat, 16 Apr 2022 10:12:30 CST | Sat, 16 Apr 2022 10:12:57 CST |
|         | --apiserver-ips=192.168.10.227                  |          |      |         |                               |                               |
|---------|-------------------------------------------------|----------|------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2022/04/16 10:12:30
Running on machine: bens-MBPR16
Binary: Built with gc go1.17.7 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0416 10:12:30.349885 13614 out.go:297] Setting OutFile to fd 1 ...
I0416 10:12:30.350303 13614 out.go:349] isatty.IsTerminal(1) = true
I0416 10:12:30.350306 13614 out.go:310] Setting ErrFile to fd 2...
I0416 10:12:30.350311 13614 out.go:349] isatty.IsTerminal(2) = true
I0416 10:12:30.350898 13614 root.go:315] Updating PATH: /Users/ben/.minikube/bin
W0416 10:12:30.351002 13614 root.go:293] Error reading config file at /Users/ben/.minikube/config/config.json: open /Users/ben/.minikube/config/config.json: no such file or directory
I0416 10:12:30.352289 13614 out.go:304] Setting JSON to false
I0416 10:12:30.390115 13614 start.go:112] hostinfo: {"hostname":"bens-MBPR16.local","uptime":68121,"bootTime":1650007029,"procs":693,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.3.1","kernelVersion":"21.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"eb44b0fc-a129-5288-bcdf-52db369fcf37"}
W0416 10:12:30.390261 13614 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0416 10:12:30.411626 13614 out.go:176] 😄  minikube v1.25.2 on Darwin 12.3.1
I0416 10:12:30.412442 13614 notify.go:193] Checking for updates...
I0416 10:12:30.413462 13614 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
I0416 10:12:30.414657 13614 driver.go:344] Setting default libvirt URI to qemu:///system
I0416 10:12:30.760325 13614 docker.go:132] docker version: linux-20.10.13
I0416 10:12:30.761119 13614 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0416 10:12:31.446076 13614 info.go:263] docker info: {ID:V5KF:WVE2:RXYD:3P5X:2ULN:ZHNG:BWOG:EZ75:7Z4B:6F2X:2YR7:VG26 Containers:9 ContainersRunning:0 ContainersPaused:0 ContainersStopped:9 Images:24 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-04-16 02:12:30.926982023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:8346030080 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0416 10:12:31.485173 13614 out.go:176] ✨  Using the docker driver based on existing profile
I0416 10:12:31.485371 13614 start.go:281] selected driver: docker
I0416 10:12:31.485389 13614 start.go:798] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:7911 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[192.168.10.227] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-swagger-ui Value:true} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0416 10:12:31.485616 13614 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0416 10:12:31.486127 13614 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0416 10:12:31.759033 13614 info.go:263] docker info:
{ID:V5KF:WVE2:RXYD:3P5X:2ULN:ZHNG:BWOG:EZ75:7Z4B:6F2X:2YR7:VG26 Containers:9 ContainersRunning:0 ContainersPaused:0 ContainersStopped:9 Images:24 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-04-16 02:12:31.661165069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:8346030080 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.17.0]] Warnings:}}
I0416 10:12:31.763318 13614 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
I0416 10:12:31.763340 13614 cni.go:93] Creating CNI manager for ""
I0416 10:12:31.763347 13614 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0416 10:12:31.763353 13614 start_flags.go:302] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:7911 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[192.168.10.227] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-swagger-ui Value:true} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0416 10:12:31.803684 13614 out.go:176] 👍  Starting control plane node minikube in cluster minikube
I0416 10:12:31.804200 13614 cache.go:120] Beginning downloading kic base image for docker with docker
I0416 10:12:31.823642 13614 out.go:176] 🚜  Pulling base image ...
I0416 10:12:31.823708 13614 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
I0416 10:12:31.823835 13614 preload.go:148] Found local preload: /Users/ben/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
I0416 10:12:31.823854 13614 cache.go:57] Caching tarball of preloaded images
I0416 10:12:31.824227 13614 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
I0416 10:12:31.842955 13614 preload.go:174] Found /Users/ben/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0416 10:12:31.842983 13614 cache.go:60] Finished verifying existence of preloaded tar for v1.23.3 on docker
I0416 10:12:31.844001 13614 profile.go:148] Saving config to /Users/ben/.minikube/profiles/minikube/config.json ...
I0416 10:12:32.016010 13614 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
I0416 10:12:32.016029 13614 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
I0416 10:12:32.016038 13614 cache.go:208] Successfully downloaded all kic artifacts
I0416 10:12:32.016083 13614 start.go:313] acquiring machines lock for minikube: {Name:mk60321c73fb6e0f1c05dd22831a085a4897d910 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0416 10:12:32.016252 13614 start.go:317] acquired machines lock for "minikube" in 150.876µs
I0416 10:12:32.016277 13614 start.go:93] Skipping create...Using existing machine configuration
I0416 10:12:32.016288 13614 fix.go:55] fixHost starting:
I0416 10:12:32.016492 13614 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0416 10:12:32.203974 13614 fix.go:108] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0416 10:12:32.204035 13614 fix.go:134] unexpected machine state, will restart: <nil>
I0416 10:12:32.223773 13614 out.go:176] 🔄  Restarting existing docker container for "minikube" ...
I0416 10:12:32.223974 13614 cli_runner.go:133] Run: docker start minikube
I0416 10:12:32.966136 13614 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0416 10:12:33.149700 13614 kic.go:420] container "minikube" state is running.
I0416 10:12:33.151040 13614 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0416 10:12:33.335490 13614 profile.go:148] Saving config to /Users/ben/.minikube/profiles/minikube/config.json ...
I0416 10:12:33.335937 13614 machine.go:88] provisioning docker machine ...
I0416 10:12:33.335960 13614 ubuntu.go:169] provisioning hostname "minikube"
I0416 10:12:33.336038 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:33.545711 13614 main.go:130] libmachine: Using SSH client type: native
I0416 10:12:33.546308 13614 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13989a0] 0x139ba80 <nil>  [] 0s} 127.0.0.1 55666 <nil> <nil>}
I0416 10:12:33.546322 13614 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0416 10:12:33.722392 13614 main.go:130] libmachine: SSH cmd err, output: <nil>: minikube
I0416 10:12:33.722452 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:33.905322 13614 main.go:130] libmachine: Using SSH client type: native
I0416 10:12:33.905508 13614 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13989a0] 0x139ba80 <nil>  [] 0s} 127.0.0.1 55666 <nil> <nil>}
I0416 10:12:33.905519 13614 main.go:130] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0416 10:12:34.036143 13614 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0416 10:12:34.036171 13614 ubuntu.go:175] set auth options {CertDir:/Users/ben/.minikube CaCertPath:/Users/ben/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/ben/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/ben/.minikube/machines/server.pem ServerKeyPath:/Users/ben/.minikube/machines/server-key.pem ClientKeyPath:/Users/ben/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/ben/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/ben/.minikube}
I0416 10:12:34.036204 13614 ubuntu.go:177] setting up certificates
I0416 10:12:34.036232 13614 provision.go:83] configureAuth start
I0416 10:12:34.036336 13614 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0416 10:12:34.238865 13614 provision.go:138] copyHostCerts
I0416 10:12:34.239062 13614 exec_runner.go:144] found /Users/ben/.minikube/ca.pem, removing ...
I0416 10:12:34.239071 13614 exec_runner.go:207] rm: /Users/ben/.minikube/ca.pem
I0416 10:12:34.239168 13614 exec_runner.go:151] cp: /Users/ben/.minikube/certs/ca.pem --> /Users/ben/.minikube/ca.pem (1070 bytes)
I0416 10:12:34.239713 13614 exec_runner.go:144] found /Users/ben/.minikube/cert.pem, removing ...
I0416 10:12:34.239717 13614 exec_runner.go:207] rm: /Users/ben/.minikube/cert.pem
I0416 10:12:34.239797 13614 exec_runner.go:151] cp: /Users/ben/.minikube/certs/cert.pem --> /Users/ben/.minikube/cert.pem (1111 bytes)
I0416 10:12:34.240315 13614 exec_runner.go:144] found /Users/ben/.minikube/key.pem, removing ...
I0416 10:12:34.240321 13614 exec_runner.go:207] rm: /Users/ben/.minikube/key.pem
I0416 10:12:34.240415 13614 exec_runner.go:151] cp: /Users/ben/.minikube/certs/key.pem --> /Users/ben/.minikube/key.pem (1675 bytes)
I0416 10:12:34.240657 13614 provision.go:112] generating server cert: /Users/ben/.minikube/machines/server.pem ca-key=/Users/ben/.minikube/certs/ca.pem private-key=/Users/ben/.minikube/certs/ca-key.pem org=ben.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0416 10:12:34.464447 13614 provision.go:172] copyRemoteCerts
I0416 10:12:34.465377 13614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0416 10:12:34.465454 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:34.654568 13614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55666 SSHKeyPath:/Users/ben/.minikube/machines/minikube/id_rsa Username:docker}
I0416 10:12:34.749025 13614 ssh_runner.go:362] scp /Users/ben/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I0416 10:12:34.775974 13614 ssh_runner.go:362] scp /Users/ben/.minikube/machines/server.pem --> /etc/docker/server.pem (1192 bytes)
I0416 10:12:34.797801 13614 ssh_runner.go:362] scp /Users/ben/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0416 10:12:34.856046 13614 provision.go:86] duration metric: configureAuth took 819.19803ms
I0416 10:12:34.856057 13614 ubuntu.go:193] setting minikube options for container-runtime
I0416 10:12:34.856241 13614 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
I0416 10:12:34.856281 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:35.040321 13614 main.go:130] libmachine: Using SSH client type: native
I0416 10:12:35.040532 13614 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13989a0] 0x139ba80 <nil>  [] 0s} 127.0.0.1 55666 <nil> <nil>}
I0416 10:12:35.040563 13614 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0416 10:12:35.182405 13614 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
I0416 10:12:35.182431 13614 ubuntu.go:71] root file system type: overlay
I0416 10:12:35.182647 13614 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0416 10:12:35.182729 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:35.370657 13614 main.go:130] libmachine: Using SSH client type: native
I0416 10:12:35.370836 13614 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13989a0] 0x139ba80 <nil>  [] 0s} 127.0.0.1 55666 <nil> <nil>}
I0416 10:12:35.370895 13614 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0416 10:12:35.534352 13614 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0416 10:12:35.534422 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:35.720266 13614 main.go:130] libmachine: Using SSH client type: native
I0416 10:12:35.720473 13614 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13989a0] 0x139ba80 <nil>  [] 0s} 127.0.0.1 55666 <nil> <nil>}
I0416 10:12:35.720483 13614 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0416 10:12:35.859803 13614 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0416 10:12:35.859814 13614 machine.go:91] provisioned docker machine in 2.523830583s
I0416 10:12:35.859842 13614 start.go:267] post-start starting for "minikube" (driver="docker")
I0416 10:12:35.859847 13614 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0416 10:12:35.859906 13614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0416 10:12:35.859939 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:36.040316 13614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55666 SSHKeyPath:/Users/ben/.minikube/machines/minikube/id_rsa Username:docker}
I0416 10:12:36.137285 13614 ssh_runner.go:195] Run: cat /etc/os-release
I0416 10:12:36.142870 13614 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0416 10:12:36.142885 13614 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0416 10:12:36.142891 13614 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0416 10:12:36.142896 13614 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0416 10:12:36.142903 13614 filesync.go:126] Scanning /Users/ben/.minikube/addons for local assets ...
I0416 10:12:36.143072 13614 filesync.go:126] Scanning /Users/ben/.minikube/files for local assets ...
I0416 10:12:36.143119 13614 start.go:270] post-start completed in 283.266898ms
I0416 10:12:36.143523 13614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0416 10:12:36.143560 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:36.325363 13614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55666 SSHKeyPath:/Users/ben/.minikube/machines/minikube/id_rsa Username:docker}
I0416 10:12:36.423743 13614 fix.go:57] fixHost completed within 4.407386151s
I0416 10:12:36.423767 13614 start.go:80] releasing machines lock for "minikube", held for 4.407428448s
I0416 10:12:36.424184 13614 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0416 10:12:36.605604 13614 ssh_runner.go:195] Run: systemctl --version
I0416 10:12:36.605653 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:36.606516 13614 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0416 10:12:36.608173 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:36.792391 13614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55666 SSHKeyPath:/Users/ben/.minikube/machines/minikube/id_rsa Username:docker}
I0416 10:12:36.792396 13614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55666 SSHKeyPath:/Users/ben/.minikube/machines/minikube/id_rsa Username:docker}
I0416 10:12:37.022996 13614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0416 10:12:37.036595 13614 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0416 10:12:37.050484 13614 cruntime.go:272] skipping containerd shutdown because we are bound to it
I0416 10:12:37.050824 13614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0416 10:12:37.065031 13614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0416 10:12:37.083108 13614 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0416 10:12:37.158669 13614 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0416 10:12:37.229700 13614 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0416 10:12:37.242825 13614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0416 10:12:37.313090 13614 ssh_runner.go:195] Run: sudo systemctl start docker
I0416 10:12:37.325521 13614 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0416 10:12:37.519456 13614 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0416 10:12:37.595692 13614 out.go:203] 🐳  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
I0416 10:12:37.596935 13614 cli_runner.go:133] Run: docker exec -t minikube dig +short host.docker.internal
I0416 10:12:37.929377 13614 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0416 10:12:37.929846 13614 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0416 10:12:37.936230 13614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0416 10:12:37.948064 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0416 10:12:38.142585 13614 out.go:176]     ▪ apiserver.enable-swagger-ui=true
I0416 10:12:38.163044 13614 out.go:176]     ▪ kubelet.housekeeping-interval=5m
I0416 10:12:38.164117 13614 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
I0416 10:12:38.164190 13614 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0416 10:12:38.206581 13614 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0416 10:12:38.206972 13614 docker.go:537] Images already preloaded, skipping extraction
I0416 10:12:38.207412 13614 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0416 10:12:38.248233 13614 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0416 10:12:38.249081 13614 cache_images.go:84] Images are preloaded, skipping loading
I0416 10:12:38.249397 13614 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0416 10:12:38.622744 13614 cni.go:93] Creating CNI manager for ""
I0416 10:12:38.622774 13614 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0416 10:12:38.623436 13614 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0416 10:12:38.623463 13614 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota enable-swagger-ui:true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0416 10:12:38.623864 13614 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
    enable-swagger-ui: "true"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0416 10:12:38.625087 13614 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[192.168.10.227] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-swagger-ui Value:true} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0416 10:12:38.625160 13614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
I0416 10:12:38.637800 13614 binaries.go:44] Found k8s binaries, skipping transfer
I0416 10:12:38.637891 13614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0416 10:12:38.646742 13614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
I0416 10:12:38.665255 13614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0416 10:12:38.684540 13614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2059 bytes)
I0416 10:12:38.701662 13614 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0416 10:12:38.706396 13614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0416 10:12:38.719200 13614 certs.go:54] Setting up /Users/ben/.minikube/profiles/minikube for IP: 192.168.49.2
I0416 10:12:38.720295 13614 certs.go:182] skipping minikubeCA CA generation: /Users/ben/.minikube/ca.key
I0416 10:12:38.720618 13614 certs.go:182] skipping proxyClientCA CA generation: /Users/ben/.minikube/proxy-client-ca.key
I0416 10:12:38.721059 13614 certs.go:298] skipping minikube-user signed cert generation: /Users/ben/.minikube/profiles/minikube/client.key
I0416 10:12:38.722085 13614 certs.go:298] skipping minikube signed cert generation: /Users/ben/.minikube/profiles/minikube/apiserver.key.4e20e1f2
I0416 10:12:38.722305 13614 certs.go:298] skipping aggregator signed cert generation: /Users/ben/.minikube/profiles/minikube/proxy-client.key
I0416 10:12:38.722702 13614 certs.go:388] found cert: /Users/ben/.minikube/certs/Users/ben/.minikube/certs/ca-key.pem (1675 bytes)
I0416 10:12:38.722746 13614 certs.go:388] found cert: /Users/ben/.minikube/certs/Users/ben/.minikube/certs/ca.pem (1070 bytes)
I0416 10:12:38.722818 13614 certs.go:388] found cert: /Users/ben/.minikube/certs/Users/ben/.minikube/certs/cert.pem (1111 bytes)
I0416 10:12:38.722847 13614 certs.go:388] found cert: /Users/ben/.minikube/certs/Users/ben/.minikube/certs/key.pem (1675 bytes)
I0416 10:12:38.727201 13614 ssh_runner.go:362] scp /Users/ben/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1407 bytes)
I0416 10:12:38.751423 13614 ssh_runner.go:362] scp /Users/ben/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0416 10:12:38.775008 13614 ssh_runner.go:362] scp /Users/ben/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0416 10:12:38.798940 13614 ssh_runner.go:362] scp /Users/ben/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0416 10:12:38.822938 13614 ssh_runner.go:362] scp /Users/ben/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0416 10:12:38.846386 13614 ssh_runner.go:362] scp /Users/ben/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0416 10:12:38.872266 13614 ssh_runner.go:362] scp /Users/ben/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0416 10:12:38.897929 13614 ssh_runner.go:362] scp /Users/ben/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0416 10:12:38.921749 13614 ssh_runner.go:362] scp /Users/ben/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0416 10:12:38.944745 13614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0416 10:12:38.964334 13614 ssh_runner.go:195] Run: openssl version
I0416 10:12:38.975276 13614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0416 10:12:38.988007 13614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0416 10:12:38.993424 13614 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 08:36 /usr/share/ca-certificates/minikubeCA.pem
I0416 10:12:38.993487 13614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0416 10:12:39.000454 13614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0416 10:12:39.009552 13614 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:7911 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[192.168.10.227] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-swagger-ui Value:true} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0416 10:12:39.009697 13614 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0416 10:12:39.049832 13614 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0416 10:12:39.060849 13614 kubeadm.go:402] found existing configuration files, will attempt cluster restart
I0416 10:12:39.060863 13614 kubeadm.go:601] restartCluster start
I0416 10:12:39.060946 13614 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0416 10:12:39.071155 13614 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:

I0416 10:12:39.071200 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0416 10:12:39.253733 13614 kubeconfig.go:116] verify returned: extract IP: "minikube" does not appear in /Users/ben/.kube/config
I0416 10:12:39.253832 13614 kubeconfig.go:127] "minikube" context is missing from /Users/ben/.kube/config - will repair!
I0416 10:12:39.255523 13614 lock.go:35] WriteFile acquiring /Users/ben/.kube/config: {Name:mk48e14323867c6ae4299291702c5877dfc2958f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0416 10:12:39.269437 13614 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0416 10:12:39.280144 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:39.280209 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:39.297725 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:39.498311 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:39.498399 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:39.520419 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:39.698291 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:39.698434 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:39.719091 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:39.898428 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:39.898571 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:39.921218 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:40.098865 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:40.099021 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:40.120426 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:40.298937 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:40.299223 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:40.321239 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:40.497924 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:40.498004 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:40.518617 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:40.698063 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:40.698185 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:40.718597 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:40.898425 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:40.899270 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:40.915642 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:41.098174 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:41.098231 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:41.115201 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:41.297924 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:41.297992 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:41.313832 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:41.498323 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:41.498396 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:41.514637 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:41.697972 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:41.698102 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:41.715452 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:41.898725 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:41.898899 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:41.915239 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:42.098154 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:42.098292 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:42.118758 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:42.298147 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:42.298322 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:42.319049 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:42.319056 13614 api_server.go:165] Checking apiserver status ...
I0416 10:12:42.319097 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0416 10:12:42.334461 13614 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0416 10:12:42.334472 13614 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
I0416 10:12:42.334479 13614 kubeadm.go:1067] stopping kube-system containers ...
I0416 10:12:42.334525 13614 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0416 10:12:42.375669 13614 docker.go:438] Stopping containers: [8db28b80f344 83e4fb35a80f a62cfbd7ac5a 0586f42fc4bb 656e3236273d 8d1ca1ec0902 1822d9c3c48f d7d7a7a16fa6 49b1b184726c e4fb554eb84c 2cb7dd10da55 33456430bcd7 97c9771edd20 ccdd82eb9938 1e8e22e3d3e9 2e96fa386115 9e4d74696a2f 41409227e85f 0268d88119b1 56d0f2c50e27 e440f93b401b]
I0416 10:12:42.375785 13614 ssh_runner.go:195] Run: docker stop 8db28b80f344 83e4fb35a80f a62cfbd7ac5a 0586f42fc4bb 656e3236273d 8d1ca1ec0902 1822d9c3c48f d7d7a7a16fa6 49b1b184726c e4fb554eb84c 2cb7dd10da55 33456430bcd7 97c9771edd20 ccdd82eb9938 1e8e22e3d3e9 2e96fa386115 9e4d74696a2f 41409227e85f 0268d88119b1 56d0f2c50e27 e440f93b401b
I0416 10:12:42.418746 13614 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0416 10:12:42.432034 13614 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0416 10:12:42.442322 13614 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Apr 3 04:37 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Apr 3 06:30 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 5659 Apr 3 04:37 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Apr 3 06:30 /etc/kubernetes/scheduler.conf

I0416 10:12:42.442367 13614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0416 10:12:42.451713 13614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0416 10:12:42.461892 13614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0416 10:12:42.472083 13614 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:

stderr:

I0416 10:12:42.472126 13614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0416 10:12:42.481063 13614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0416 10:12:42.490805 13614 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:

stderr:

I0416 10:12:42.490853 13614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0416 10:12:42.500175 13614 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0416 10:12:42.510124 13614 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0416 10:12:42.510154 13614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0416 10:12:42.705572 13614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0416 10:12:43.434820 13614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0416 10:12:43.588502 13614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0416 10:12:43.650004 13614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0416 10:12:43.708407 13614 api_server.go:51] waiting for apiserver process to appear ...
I0416 10:12:43.708473 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0416 10:12:44.225505 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0416 10:12:44.725595 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0416 10:12:45.224406 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0416 10:12:45.725093 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0416 10:12:46.225667 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0416 10:12:46.255665 13614 api_server.go:71] duration metric: took 2.547216038s to wait for apiserver process to appear ...
I0416 10:12:46.255682 13614 api_server.go:87] waiting for apiserver healthz status ...
I0416 10:12:46.255965 13614 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55665/healthz ...
I0416 10:12:46.258530 13614 api_server.go:256] stopped: https://127.0.0.1:55665/healthz: Get "https://127.0.0.1:55665/healthz": EOF
I0416 10:12:46.758888 13614 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55665/healthz ...
I0416 10:12:46.761483 13614 api_server.go:256] stopped: https://127.0.0.1:55665/healthz: Get "https://127.0.0.1:55665/healthz": EOF
I0416 10:12:47.258678 13614 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55665/healthz ...
I0416 10:12:47.260528 13614 api_server.go:256] stopped: https://127.0.0.1:55665/healthz: Get "https://127.0.0.1:55665/healthz": EOF
I0416 10:12:47.758751 13614 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55665/healthz ...
I0416 10:12:50.333959 13614 api_server.go:266] https://127.0.0.1:55665/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0416 10:12:50.333972 13614 api_server.go:102] status: https://127.0.0.1:55665/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0416 10:12:50.758895 13614 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55665/healthz ...
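The probes above and the responses just below show the usual progression of an apiserver coming back up: connection-level EOFs while nothing is serving yet, a 403 once TLS is up but anonymous access to /healthz is still forbidden, verbose 500s while post-start hooks finish, then 200. A rough Go sketch of such a probe loop, assuming only the URL shown in these records; the cadence, timeout, and names are illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls an apiserver /healthz endpoint until it returns 200 OK.
// The apiserver cert is self-signed, so a local health probe skips verification.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// EOF or connection refused: the process is not serving yet.
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			switch resp.StatusCode {
			case http.StatusOK:
				return nil // healthz check passed
			case http.StatusForbidden:
				// 403: serving, but RBAC bootstrap roles are not in place yet.
				fmt.Println("returned 403, waiting for RBAC bootstrap")
			default:
				// 500 with per-hook [+]/[-] lines: post-start hooks still failing.
				fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
			}
		}
		time.Sleep(500 * time.Millisecond) // cadence is illustrative
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := probeHealthz("https://127.0.0.1:55665/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}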
I0416 10:12:50.769598 13614 api_server.go:266] https://127.0.0.1:55665/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0416 10:12:50.769611 13614 api_server.go:102] status: https://127.0.0.1:55665/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0416 10:12:51.258821 13614 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55665/healthz ...
I0416 10:12:51.269412 13614 api_server.go:266] https://127.0.0.1:55665/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0416 10:12:51.269425 13614 api_server.go:102] status: https://127.0.0.1:55665/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0416 10:12:51.758824 13614 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55665/healthz ...
I0416 10:12:51.768470 13614 api_server.go:266] https://127.0.0.1:55665/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0416 10:12:51.768485 13614 api_server.go:102] status: https://127.0.0.1:55665/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0416 10:12:52.258742 13614 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55665/healthz ...
I0416 10:12:52.267927 13614 api_server.go:266] https://127.0.0.1:55665/healthz returned 200:
ok
I0416 10:12:52.281899 13614 api_server.go:140] control plane version: v1.23.3
I0416 10:12:52.281909 13614 api_server.go:130] duration metric: took 6.026126046s to wait for apiserver health ...
I0416 10:12:52.281914 13614 cni.go:93] Creating CNI manager for ""
I0416 10:12:52.281917 13614 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0416 10:12:52.282761 13614 system_pods.go:43] waiting for kube-system pods to appear ...
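Each [+] or [-] line in the 500 bodies above is one named check or post-start hook, and the endpoint flips to 200 only once every hook reports ok. A small illustrative Go helper, not part of minikube, that splits such a verbose healthz body into passing and failing checks:

package main

import (
	"fmt"
	"strings"
)

// splitHealthz separates the [+] ok lines from the [-] failed lines of a
// verbose /healthz response body like the 500 payloads above.
func splitHealthz(body string) (ok, failed []string) {
	for _, line := range strings.Split(body, "\n") {
		switch {
		case strings.HasPrefix(line, "[+]"):
			ok = append(ok, strings.TrimPrefix(line, "[+]"))
		case strings.HasPrefix(line, "[-]"):
			failed = append(failed, strings.TrimPrefix(line, "[-]"))
		}
	}
	return ok, failed
}

func main() {
	body := "[+]ping ok\n[+]etcd ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
	ok, failed := splitHealthz(body)
	fmt.Printf("%d ok, %d failing: %v\n", len(ok), len(failed), failed)
}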
I0416 10:12:52.316417 13614 system_pods.go:59] 7 kube-system pods found
I0416 10:12:52.316434 13614 system_pods.go:61] "coredns-64897985d-8994j" [6b42dd72-545f-4b4b-b5ee-2010af1f9b07] Running
I0416 10:12:52.316436 13614 system_pods.go:61] "etcd-minikube" [0d4662fe-c11f-4563-bfec-4c4895365e2e] Running
I0416 10:12:52.316439 13614 system_pods.go:61] "kube-apiserver-minikube" [be57b6b6-83d6-4f96-90e0-9c518ec519e5] Running
I0416 10:12:52.316443 13614 system_pods.go:61] "kube-controller-manager-minikube" [5e7ba8d6-af06-453f-bdc5-1b7d0420976d] Running
I0416 10:12:52.316447 13614 system_pods.go:61] "kube-proxy-4d2p9" [f179e749-94c0-46ab-a66b-1fd5cd81abcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0416 10:12:52.316450 13614 system_pods.go:61] "kube-scheduler-minikube" [97f5cb3a-faa7-42b3-ab59-a32ad7874499] Running
I0416 10:12:52.316460 13614 system_pods.go:61] "storage-provisioner" [c737542c-6d23-45ef-bc2c-adb5b899f700] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0416 10:12:52.316463 13614 system_pods.go:74] duration metric: took 33.696942ms to wait for pod list to return data ...
I0416 10:12:52.316749 13614 node_conditions.go:102] verifying NodePressure condition ...
I0416 10:12:52.339538 13614 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I0416 10:12:52.339849 13614 node_conditions.go:123] node cpu capacity is 8
I0416 10:12:52.339861 13614 node_conditions.go:105] duration metric: took 23.107701ms to run NodePressure ...
I0416 10:12:52.339873 13614 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0416 10:12:53.655047 13614 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.315141378s)
I0416 10:12:53.655062 13614 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0416 10:12:53.751958 13614 ops.go:34] apiserver oom_adj: -16
I0416 10:12:53.751987 13614 kubeadm.go:605] restartCluster took 14.690875689s
I0416 10:12:53.752004 13614 kubeadm.go:393] StartCluster complete in 14.742210715s
I0416 10:12:53.752016 13614 settings.go:142] acquiring lock: {Name:mkd92290b6dafcf56a65dcd1b9c995cda4c551c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0416 10:12:53.752279 13614 settings.go:150] Updating kubeconfig: /Users/ben/.kube/config
I0416 10:12:53.754230 13614 lock.go:35] WriteFile acquiring /Users/ben/.kube/config: {Name:mk48e14323867c6ae4299291702c5877dfc2958f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0416 10:12:53.762550 13614 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0416 10:12:53.762813 13614 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0416 10:12:53.763094 13614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0416 10:12:53.789577 13614 out.go:176] 🔎 Verifying Kubernetes components...
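The system_pods wait above amounts to listing kube-system pods and inspecting each pod's readiness condition. A rough client-go equivalent, assuming a kubeconfig at the default location; the names and error handling are illustrative:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config, which minikube updates during start.
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// A pod is Ready only when its Ready condition is True; otherwise the
		// log shows the Ready:ContainersNotReady annotations seen above.
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%q %s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}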
I0416 10:12:53.763845 13614 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
I0416 10:12:53.763896 13614 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
I0416 10:12:53.789676 13614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0416 10:12:53.790022 13614 addons.go:65] Setting dashboard=true in profile "minikube"
I0416 10:12:53.790022 13614 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0416 10:12:53.790031 13614 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0416 10:12:53.790369 13614 addons.go:153] Setting addon dashboard=true in "minikube"
W0416 10:12:53.790376 13614 addons.go:165] addon dashboard should already be in state true
I0416 10:12:53.790377 13614 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0416 10:12:53.790379 13614 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W0416 10:12:53.790385 13614 addons.go:165] addon storage-provisioner should already be in state true
I0416 10:12:53.790697 13614 host.go:66] Checking if "minikube" exists ...
I0416 10:12:53.790720 13614 host.go:66] Checking if "minikube" exists ...
I0416 10:12:53.791866 13614 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0416 10:12:53.792653 13614 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0416 10:12:53.792781 13614 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0416 10:12:53.854003 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0416 10:12:54.057015 13614 out.go:176] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0416 10:12:54.038207 13614 out.go:176] ▪ Using image kubernetesui/dashboard:v2.3.1
I0416 10:12:54.048600 13614 addons.go:153] Setting addon default-storageclass=true in "minikube"
W0416 10:12:54.057078 13614 addons.go:165] addon default-storageclass should already be in state true
I0416 10:12:54.057119 13614 host.go:66] Checking if "minikube" exists ...
I0416 10:12:54.057185 13614 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0416 10:12:54.057192 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0416 10:12:54.057249 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:54.058655 13614 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0416 10:12:54.077464 13614 out.go:176] ▪ Using image kubernetesui/metrics-scraper:v1.0.7
I0416 10:12:54.077636 13614 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0416 10:12:54.077644 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0416 10:12:54.078614 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:54.100254 13614 api_server.go:51] waiting for apiserver process to appear ...
I0416 10:12:54.100358 13614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0416 10:12:54.275212 13614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55666 SSHKeyPath:/Users/ben/.minikube/machines/minikube/id_rsa Username:docker}
I0416 10:12:54.291045 13614 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0416 10:12:54.291054 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0416 10:12:54.291131 13614 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0416 10:12:54.291201 13614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55666 SSHKeyPath:/Users/ben/.minikube/machines/minikube/id_rsa Username:docker}
I0416 10:12:54.486869 13614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55666 SSHKeyPath:/Users/ben/.minikube/machines/minikube/id_rsa Username:docker}
I0416 10:12:54.651279 13614 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0416 10:12:54.651291 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0416 10:12:54.734917 13614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0416 10:12:54.835064 13614 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0416 10:12:54.835078 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0416 10:12:54.951449 13614 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0416 10:12:54.951459 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0416 10:12:54.955998 13614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0416 10:12:55.139213 13614 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0416 10:12:55.139224 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4278 bytes)
I0416 10:12:55.241671 13614 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
I0416 10:12:55.241683 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0416 10:12:55.434285 13614 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0416 10:12:55.434297 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0416 10:12:55.542287 13614 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0416 10:12:55.542300 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0416 10:12:55.568244 13614 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0416 10:12:55.568257 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0416 10:12:55.737685 13614 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0416 10:12:55.737695 13614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0416 10:12:55.759565 13614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0416 10:12:56.343159 13614 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (2.553530902s)
I0416 10:12:56.343198 13614 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.242795373s)
I0416 10:12:56.343209 13614 api_server.go:71] duration metric: took 2.580312934s to wait for apiserver process to appear ...
I0416 10:12:56.343216 13614 api_server.go:87] waiting for apiserver healthz status ...
I0416 10:12:56.343225 13614 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55665/healthz ...
I0416 10:12:56.343263 13614 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0416 10:12:56.354336 13614 api_server.go:266] https://127.0.0.1:55665/healthz returned 200:
ok
I0416 10:12:56.356928 13614 api_server.go:140] control plane version: v1.23.3
I0416 10:12:56.356936 13614 api_server.go:130] duration metric: took 13.715731ms to wait for apiserver health ...
I0416 10:12:56.356940 13614 system_pods.go:43] waiting for kube-system pods to appear ...
I0416 10:12:56.380687 13614 system_pods.go:59] 7 kube-system pods found
I0416 10:12:56.380699 13614 system_pods.go:61] "coredns-64897985d-8994j" [6b42dd72-545f-4b4b-b5ee-2010af1f9b07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0416 10:12:56.380702 13614 system_pods.go:61] "etcd-minikube" [0d4662fe-c11f-4563-bfec-4c4895365e2e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0416 10:12:56.380706 13614 system_pods.go:61] "kube-apiserver-minikube" [be57b6b6-83d6-4f96-90e0-9c518ec519e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0416 10:12:56.380710 13614 system_pods.go:61] "kube-controller-manager-minikube" [5e7ba8d6-af06-453f-bdc5-1b7d0420976d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0416 10:12:56.380716 13614 system_pods.go:61] "kube-proxy-4d2p9" [f179e749-94c0-46ab-a66b-1fd5cd81abcb] Running
I0416 10:12:56.380719 13614 system_pods.go:61] "kube-scheduler-minikube" [97f5cb3a-faa7-42b3-ab59-a32ad7874499] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0416 10:12:56.380722 13614 system_pods.go:61] "storage-provisioner" [c737542c-6d23-45ef-bc2c-adb5b899f700] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0416 10:12:56.380725 13614 system_pods.go:74] duration metric: took 23.782289ms to wait for pod list to return data ...
I0416 10:12:56.380729 13614 kubeadm.go:548] duration metric: took 2.617834146s to wait for : map[apiserver:true system_pods:true] ...
I0416 10:12:56.380738 13614 node_conditions.go:102] verifying NodePressure condition ...
I0416 10:12:56.438822 13614 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I0416 10:12:56.438833 13614 node_conditions.go:123] node cpu capacity is 8
I0416 10:12:56.438846 13614 node_conditions.go:105] duration metric: took 58.104634ms to run NodePressure ...
I0416 10:12:56.438854 13614 start.go:213] waiting for startup goroutines ...
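Each addon manifest above is staged to a file on the node and then applied with the cluster's own kubectl against an explicit kubeconfig; the completions appear just below. A compressed Go sketch of that stage-then-apply pattern, using a plain file write and exec in place of minikube's ssh_runner; the paths and sample manifest are placeholders:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest writes manifest bytes to a staging path and applies it with
// kubectl, mirroring the "scp memory --> ..." then "kubectl apply -f ..."
// record pairs above. Paths here are illustrative.
func applyManifest(path string, manifest []byte) error {
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command("kubectl", "apply", "-f", path)
	// Point kubectl at an explicit kubeconfig, as the logged commands do with
	// KUBECONFIG=/var/lib/minikube/kubeconfig.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example\n")
	if err := applyManifest("/tmp/example.yaml", manifest); err != nil {
		fmt.Println(err)
	}
}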
I0416 10:12:57.257751 13614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.522771506s)
I0416 10:12:57.257817 13614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.301768536s)
I0416 10:12:57.357203 13614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.597591955s)
I0416 10:12:57.379037 13614 out.go:176] 🌟 Enabled addons: storage-provisioner, default-storageclass, dashboard
I0416 10:12:57.379062 13614 addons.go:417] enableAddons completed in 3.616092326s
I0416 10:12:57.444530 13614 start.go:496] kubectl: 1.22.5, cluster: 1.23.3 (minor skew: 1)
I0416 10:12:57.464649 13614 out.go:176] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> Docker <==
*
-- Logs begin at Sat 2022-04-16 02:12:33 UTC, end at Sat 2022-04-16 02:13:01 UTC. --
Apr 16 02:12:33 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 16 02:12:33 minikube dockerd[133]: time="2022-04-16T02:12:33.947591057Z" level=info msg="Starting up"
Apr 16 02:12:33 minikube dockerd[133]: time="2022-04-16T02:12:33.958511021Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 16 02:12:33 minikube dockerd[133]: time="2022-04-16T02:12:33.958583725Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 16 02:12:33 minikube dockerd[133]: time="2022-04-16T02:12:33.958617598Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 16 02:12:33 minikube dockerd[133]: time="2022-04-16T02:12:33.958643370Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 16 02:12:33 minikube dockerd[133]: time="2022-04-16T02:12:33.964806583Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 16 02:12:33 minikube dockerd[133]: time="2022-04-16T02:12:33.964859663Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 16 02:12:33 minikube dockerd[133]: time="2022-04-16T02:12:33.964894723Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 16 02:12:33 minikube dockerd[133]: time="2022-04-16T02:12:33.964926012Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 16 02:12:33 minikube dockerd[133]: time="2022-04-16T02:12:33.987197488Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 16 02:12:34 minikube dockerd[133]: time="2022-04-16T02:12:34.084769107Z" level=info msg="Loading containers: start."
Apr 16 02:12:34 minikube dockerd[133]: time="2022-04-16T02:12:34.547272441Z" level=info msg="Removing stale sandbox f59d72b4273194b9eeb75d527df46cfc7be3649f2d52bff64a665e11dca56b5a (a62cfbd7ac5a5d7082695bbd39f0aaee725b4646f5f158fe86ff54d346cda4e3)"
Apr 16 02:12:34 minikube dockerd[133]: time="2022-04-16T02:12:34.557776779Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint dd42541febfd1405db0c3c62d55ec55df4b4b383e4985ab0a1796a26a033b167 6c8fabc891b45e20e931b51df61f18bddb1b2f4291f3338e6574f9dc12857aab], retrying...."
Apr 16 02:12:34 minikube dockerd[133]: time="2022-04-16T02:12:34.674124527Z" level=info msg="Removing stale sandbox 11c99c46fed7471a6085f533da9d67a5b4be9d7b922e1de545dc150317291d43 (c4045e81341f61ba44a75e2a09a7542c8400a77754f056c5314129447af504f6)"
Apr 16 02:12:34 minikube dockerd[133]: time="2022-04-16T02:12:34.678409150Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint dd42541febfd1405db0c3c62d55ec55df4b4b383e4985ab0a1796a26a033b167 3712a5f6319851a50c64ba8b3b276b4a8d67300e2b3b6edeaba1ba4a0e305a5c], retrying...."
Apr 16 02:12:34 minikube dockerd[133]: time="2022-04-16T02:12:34.840448809Z" level=info msg="Removing stale sandbox 3b268137db8fb960bfb2e02d1cf1926fb05a0f59584ccaf4823ccc1e74344a99 (33456430bcd7f9a1077917f67dc190f3ac6b4f0e2529896e21fdfba1e54467af)"
Apr 16 02:12:34 minikube dockerd[133]: time="2022-04-16T02:12:34.842945215Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4b665db764345e0693427d908a7953b9cf742b6ce314aa3664066b3a8231f111 fcf9a095f44a8cae85ae953faa34f5946b08987fde19f153c44f8ec0f853e7f4], retrying...."
Apr 16 02:12:34 minikube dockerd[133]: time="2022-04-16T02:12:34.968117619Z" level=info msg="Removing stale sandbox 8cfedf105e882c8ceb4d6c239ed12803a859cc5572226c8672f1edb8d946c843 (60881d8c9924d17eeffb3f1d028c9add9246953a1dd0791030e0ac63f9fe2dbb)"
Apr 16 02:12:34 minikube dockerd[133]: time="2022-04-16T02:12:34.973217568Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint dd42541febfd1405db0c3c62d55ec55df4b4b383e4985ab0a1796a26a033b167 9527dfda1a632a2060baede0cdc3c005a08fd09b93886fa544b414b4a87fc491], retrying...."
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.088461685Z" level=info msg="Removing stale sandbox c3e1e919aa9d12600eae40577af71153a5400ba1a0c6e501bfe044a6a8dc2091 (97c9771edd202226a4434e7a49a06429de9e45fe49e2bb262888bd36cb6c779d)"
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.090678098Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4b665db764345e0693427d908a7953b9cf742b6ce314aa3664066b3a8231f111 34ea9d522242f69894aba6c2cc1ec8479fdaec8e5a86c49114b52185e336054b], retrying...."
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.212447624Z" level=info msg="Removing stale sandbox 11416d9593ea494b10d3d29ad893c41afc9e3251faf0cdfce95bea721a05b779 (ccdd82eb99381559632483aecfba86708f9f01e35207c47d28aa0b17f4e6a4bf)"
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.241455920Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4b665db764345e0693427d908a7953b9cf742b6ce314aa3664066b3a8231f111 92416ad66a8ea9cef585810084528a0060facc92dcc057572675b414080b1af4], retrying...."
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.355188252Z" level=info msg="Removing stale sandbox 17317e935dd8f11a8b70be317ad4e585b37322392c512634d7d63e0922e174d0 (1e8e22e3d3e90e162e56eb932c447a77fb3385e178d9852a351afd5d44641707)"
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.357242214Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4b665db764345e0693427d908a7953b9cf742b6ce314aa3664066b3a8231f111 98f5843e163d89cd47739236fa5c0399ac7b5dc7b92d0ffb9eae6b2c9d2fd8f2], retrying...."
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.477422293Z" level=info msg="Removing stale sandbox b9a2febc5d0f19079020c2cebaf458a69a23bc86ca21463fa609cff913d52173 (8d1ca1ec0902badb0eee0b6118d069900891b97261ed521e514592cfbbe39e58)"
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.479553569Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4b665db764345e0693427d908a7953b9cf742b6ce314aa3664066b3a8231f111 4bd3e97c867aa385e8306071e347f1f6067e394c64e7c175182abadc3a27b2f8], retrying...."
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.607892157Z" level=info msg="Removing stale sandbox c584deb5047cc443f7f36769e8322dc12e6793bdfe47eb994c973facd5ac90d8 (1822d9c3c48fb3e346555689124d5a0dadd2d24d9af6878f254f95aea2b51113)"
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.610952043Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4b665db764345e0693427d908a7953b9cf742b6ce314aa3664066b3a8231f111 7285e6582cf02c945319a890c36771a7d31ce58144399ab55bdda0d1836bc8fc], retrying...."
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.643847984Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.702796817Z" level=info msg="Loading containers: done."
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.767227352Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.768007618Z" level=info msg="Daemon has completed initialization"
Apr 16 02:12:35 minikube systemd[1]: Started Docker Application Container Engine.
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.808504595Z" level=info msg="API listen on [::]:2376"
Apr 16 02:12:35 minikube dockerd[133]: time="2022-04-16T02:12:35.811416976Z" level=info msg="API listen on /var/run/docker.sock"
*
* ==> container status <==
*
CONTAINER       IMAGE           CREATED          STATE     NAME                        ATTEMPT   POD ID
88c10f743bbe3   a4ca41631cc7a   7 seconds ago    Running   coredns                     1         8e1c979eaf079
4e7c279d10da4   e1482a24335a6   7 seconds ago    Running   kubernetes-dashboard        1         e1ce4f71baab2
a6eb88ee1e0d5   7801cfc6d5c07   7 seconds ago    Running   dashboard-metrics-scraper   1         2eb1c469e54f5
57477daa8423c   6e38f40d628db   8 seconds ago    Running   storage-provisioner         2         c6998058b53f8
9b30ecf1f51d1   9b7cc99821098   9 seconds ago    Running   kube-proxy                  1         e26e91463cc62
f89cd77507497   b07520cd7ab76   16 seconds ago   Running   kube-controller-manager     44        f6147c81a2da9
eabe5d4d2524a   99a3486be4f28   16 seconds ago   Running   kube-scheduler              9         c162fd4026304
cbc2ac022e6b3   f40be0088a83e   16 seconds ago   Running   kube-apiserver              1         4cd4288f425eb
163df00f5d41e   25f8c7f3da61c   16 seconds ago   Running   etcd                        9         5237255e293d1
8db28b80f344f   6e38f40d628db   12 days ago      Exited    storage-provisioner         1         1822d9c3c48fb
bdb8e424a2bae   7801cfc6d5c07   12 days ago      Exited    dashboard-metrics-scraper   0         c4045e81341f6
8a0055a764c1c   e1482a24335a6   12 days ago      Exited    kubernetes-dashboard        0         60881d8c9924d
83e4fb35a80fd   a4ca41631cc7a   12 days ago      Exited    coredns                     0         a62cfbd7ac5a5
656e3236273d3   9b7cc99821098   12 days ago      Exited    kube-proxy                  0         8d1ca1ec0902b
d7d7a7a16fa6a   99a3486be4f28   12 days ago      Exited    kube-scheduler              8         33456430bcd7f
49b1b184726cd   b07520cd7ab76   12 days ago      Exited    kube-controller-manager     43        97c9771edd202
e4fb554eb84c8   f40be0088a83e   12 days ago      Exited    kube-apiserver              0         ccdd82eb99381
2cb7dd10da556   25f8c7f3da61c   12 days ago      Exited    etcd                        8         1e8e22e3d3e90
*
* ==> coredns [83e4fb35a80f] <==
*
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[WARNING] plugin/health: Local health request to "http://:8080/health" took more than 1s: 1.183665172s
[WARNING] plugin/health: Local health request to "http://:8080/health" took more than 1s: 1.298915994s
[WARNING] plugin/health: Local health request to "http://:8080/health" took more than 1s: 1.174378517s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [88c10f743bbe] <==
*
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
*
Name:               minikube
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 03 Apr 2022 06:30:53 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Sat, 16 Apr 2022 02:13:00 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 16 Apr 2022 02:12:50 +0000   Sun, 03 Apr 2022 06:30:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 16 Apr 2022 02:12:50 +0000   Sun, 03 Apr 2022 06:30:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 16 Apr 2022 02:12:50 +0000   Sun, 03 Apr 2022 06:30:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 16 Apr 2022 02:12:50 +0000   Sun, 03 Apr 2022 06:31:04 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                8
  ephemeral-storage:  61255492Ki
  hugepages-2Mi:      0
  memory:             8150420Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  61255492Ki
  hugepages-2Mi:      0
  memory:             8150420Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                b6a262faae404a5db719705fd34b5c8b
  Boot ID:                    da039092-7910-46b6-b493-ccd90a73e9b5
  Kernel Version:             5.10.104-linuxkit
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.3
  Kube-Proxy Version:         v1.23.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace             Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------             ----                                       ------------  ----------  ---------------  -------------  ---
  kube-system           coredns-64897985d-8994j                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (2%)     12d
  kube-system           etcd-minikube                              100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         12d
  kube-system           kube-apiserver-minikube                    250m (3%)     0 (0%)      0 (0%)           0 (0%)         12d
  kube-system           kube-controller-manager-minikube           200m (2%)     0 (0%)      0 (0%)           0 (0%)         12d
  kube-system           kube-proxy-4d2p9                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d
  kube-system           kube-scheduler-minikube                    100m (1%)     0 (0%)      0 (0%)           0 (0%)         12d
  kube-system           storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d
  kubernetes-dashboard  dashboard-metrics-scraper-58549894f-2cl9w  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d
  kubernetes-dashboard  kubernetes-dashboard-ccd587f44-qp4sf       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (9%)   0 (0%)
  memory             170Mi (2%)  170Mi (2%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason    Age   From        Message
  ----    ------    ----  ----        -------
  Normal  Starting  7s    kube-proxy
  Normal  Starting  17s   kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  17s (x8 over 17s)  kubelet  Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet  Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     17s (x7 over 17s)  kubelet  Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  17s                kubelet  Updated Node Allocatable limit across pods
*
* ==> dmesg <==
*
[Apr15 14:46] tsc: Unable to calibrate against PIT
[ +0.095044] PCI: Fatal: No config space access function found
[ +0.142002] pci 0000:00:1f.0: BAR 13: [io size 0x0080] has bogus alignment
[ +2.235310] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[ +0.000002] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[ +0.014145] virtio-pci 0000:00:05.0: can't derive routing for PCI INT A
[ +0.000002] virtio-pci 0000:00:05.0: PCI INT A: no GSI
[ +0.002846] virtio-pci 0000:00:06.0: can't derive routing for PCI INT A
[ +0.000002] virtio-pci 0000:00:06.0: PCI INT A: no GSI
[ +0.003013] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[ +0.000002] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[ +0.002953] virtio-pci 0000:00:08.0: can't derive routing for PCI INT A
[ +0.000001] virtio-pci 0000:00:08.0: PCI INT A: no GSI
[ +0.008905] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.016202] the cryptoloop driver has been deprecated and will be removed in in Linux 5.16
[ +0.003793] lpc_ich 0000:00:1f.0: No MFD cells added
[ +6.573213] grpcfuse: loading out-of-tree module taints kernel.
[ +2.903375] clocksource: timekeeping watchdog on CPU7: hpet retried 2 times before success
[Apr15 17:32] clocksource: timekeeping watchdog on CPU5: Marking clocksource 'tsc' as unstable because the skew is too large:
[ +0.000945] clocksource: 'hpet' wd_now: 42999756 wd_last: 40af2e9b mask: ffffffff
[ +0.000137] clocksource: 'tsc' cs_now: 14ed378f8e9a cs_last: 14eba66ed32c mask: ffffffffffffffff
[ +0.002410] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
[ +0.003984] clocksource: Checking clocksource tsc synchronization from CPU 1.
[Apr15 17:34] hrtimer: interrupt took 19386472 ns
*
* ==> etcd [163df00f5d41] <==
*
{"level":"info","ts":"2022-04-16T02:12:46.387Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2022-04-16T02:12:46.389Z","caller":"etcdmain/etcd.go:115","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
{"level":"info","ts":"2022-04-16T02:12:46.389Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2022-04-16T02:12:46.389Z","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-04-16T02:12:46.391Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2022-04-16T02:12:46.392Z","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.1","git-sha":"e8732fb5f","go-version":"go1.16.3","go-os":"linux","go-arch":"amd64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":true,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2022-04-16T02:12:46.404Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"10.931906ms"}
{"level":"info","ts":"2022-04-16T02:12:46.556Z","caller":"etcdserver/server.go:508","msg":"recovered v2 store from snapshot","snapshot-index":20002,"snapshot-size":"7.9 kB"}
{"level":"info","ts":"2022-04-16T02:12:46.556Z","caller":"etcdserver/server.go:518","msg":"recovered v3 backend from snapshot","backend-size-bytes":1765376,"backend-size":"1.8 MB","backend-size-in-use-bytes":778240,"backend-size-in-use":"778 kB"}
{"level":"info","ts":"2022-04-16T02:12:46.749Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","commit-index":29816}
{"level":"info","ts":"2022-04-16T02:12:46.750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2022-04-16T02:12:46.750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 4"}
{"level":"info","ts":"2022-04-16T02:12:46.750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [aec36adc501070cc], term: 4, commit: 29816, applied: 20002, lastindex: 29816, lastterm: 4]"}
{"level":"info","ts":"2022-04-16T02:12:46.750Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-16T02:12:46.750Z","caller":"membership/cluster.go:278","msg":"recovered/added member from store","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","recovered-remote-peer-id":"aec36adc501070cc","recovered-remote-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2022-04-16T02:12:46.750Z","caller":"membership/cluster.go:287","msg":"set cluster version from store","cluster-version":"3.5"}
{"level":"warn","ts":"2022-04-16T02:12:46.752Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2022-04-16T02:12:46.754Z","caller":"mvcc/kvstore.go:345","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":23003}
{"level":"info","ts":"2022-04-16T02:12:46.757Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":23221}
{"level":"info","ts":"2022-04-16T02:12:46.760Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2022-04-16T02:12:46.762Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-id":"fa54960ea34d58be","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-16T02:12:46.763Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2022-04-16T02:12:46.765Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-04-16T02:12:46.765Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-04-16T02:12:46.765Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2022-04-16T02:12:46.765Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-04-16T02:12:46.765Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-04-16T02:12:46.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
{"level":"info","ts":"2022-04-16T02:12:46.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
{"level":"info","ts":"2022-04-16T02:12:46.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
{"level":"info","ts":"2022-04-16T02:12:46.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
{"level":"info","ts":"2022-04-16T02:12:46.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
{"level":"info","ts":"2022-04-16T02:12:46.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
{"level":"info","ts":"2022-04-16T02:12:46.952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
{"level":"info","ts":"2022-04-16T02:12:47.735Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-04-16T02:12:47.735Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-04-16T02:12:47.735Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-04-16T02:12:47.736Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-04-16T02:12:47.736Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-04-16T02:12:47.739Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-04-16T02:12:47.739Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> etcd [2cb7dd10da55] <==
*
{"level":"info","ts":"2022-04-03T14:28:42.538Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":19229}
{"level":"info","ts":"2022-04-03T14:28:42.539Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":19229,"took":"291.797µs"}
{"level":"info","ts":"2022-04-03T14:33:42.550Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":19439}
{"level":"info","ts":"2022-04-03T14:33:42.551Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":19439,"took":"366.038µs"}
{"level":"info","ts":"2022-04-03T14:38:42.562Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":19648}
{"level":"info","ts":"2022-04-03T14:38:42.563Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":19648,"took":"285.022µs"}
{"level":"info","ts":"2022-04-03T14:43:42.575Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":19858}
{"level":"info","ts":"2022-04-03T14:43:42.576Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":19858,"took":"367.504µs"}
{"level":"info","ts":"2022-04-03T14:48:42.569Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":20067}
{"level":"info","ts":"2022-04-03T14:48:42.570Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":20067,"took":"366.317µs"}
{"level":"info","ts":"2022-04-03T14:53:42.560Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":20278}
{"level":"info","ts":"2022-04-03T14:53:42.561Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":20278,"took":"784.318µs"}
{"level":"info","ts":"2022-04-03T14:58:42.568Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":20487}
{"level":"info","ts":"2022-04-03T14:58:42.569Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":20487,"took":"303.739µs"}
{"level":"info","ts":"2022-04-03T15:03:42.780Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":20696}
{"level":"info","ts":"2022-04-03T15:03:42.781Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":20696,"took":"575.772µs"}
{"level":"warn","ts":"2022-04-03T15:03:42.791Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"135.009268ms","expected-duration":"100ms","prefix":"","request":"header: compaction: ","response":"size:6"}
{"level":"info","ts":"2022-04-03T15:03:42.791Z","caller":"traceutil/trace.go:171","msg":"trace[1442229373] compact","detail":"{revision:20696; response_revision:20906; }","duration":"209.872694ms","start":"2022-04-03T15:03:42.581Z","end":"2022-04-03T15:03:42.791Z","steps":["trace[1442229373] 'process raft request' (duration: 68.666688ms)","trace[1442229373] 'check and update compact revision' (duration: 130.148385ms)"],"step_count":2}
{"level":"info","ts":"2022-04-03T15:05:15.729Z","caller":"traceutil/trace.go:171","msg":"trace[755079815] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:20972; }","duration":"128.443838ms","start":"2022-04-03T15:05:15.601Z","end":"2022-04-03T15:05:15.729Z","steps":["trace[755079815] 'agreement among raft nodes before linearized reading' (duration: 128.205889ms)"],"step_count":1}
{"level":"info","ts":"2022-04-03T15:08:42.800Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":20906}
{"level":"info","ts":"2022-04-03T15:08:42.801Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":20906,"took":"360.451µs"}
{"level":"info","ts":"2022-04-03T15:10:30.092Z","caller":"traceutil/trace.go:171","msg":"trace[1443363130] linearizableReadLoop","detail":"{readStateIndex:27195; appliedIndex:27195; }","duration":"210.824001ms","start":"2022-04-03T15:10:29.881Z","end":"2022-04-03T15:10:30.092Z","steps":["trace[1443363130] 'read index received' (duration: 210.789011ms)","trace[1443363130] 'applied index is now lower than readState.Index' (duration: 15.504µs)"],"step_count":2}
{"level":"warn","ts":"2022-04-03T15:10:30.279Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"398.661702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:6"} {"level":"info","ts":"2022-04-03T15:10:30.279Z","caller":"traceutil/trace.go:171","msg":"trace[2011302497] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:21191; }","duration":"398.81221ms","start":"2022-04-03T15:10:29.880Z","end":"2022-04-03T15:10:30.279Z","steps":["trace[2011302497] 'agreement among raft nodes before linearized reading' (duration: 398.298876ms)"],"step_count":1} {"level":"warn","ts":"2022-04-03T15:10:30.279Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-03T15:10:29.880Z","time spent":"399.009232ms","remote":"127.0.0.1:52880","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":30,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "} {"level":"info","ts":"2022-04-03T15:13:42.989Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":21115} {"level":"info","ts":"2022-04-03T15:13:42.989Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":21115,"took":"302.483ยตs"} {"level":"info","ts":"2022-04-03T15:18:41.037Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":21325} {"level":"info","ts":"2022-04-03T15:18:41.037Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":21325,"took":"406.406ยตs"} {"level":"info","ts":"2022-04-03T15:23:41.044Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":21534} {"level":"info","ts":"2022-04-03T15:23:41.044Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":21534,"took":"387.549ยตs"} {"level":"info","ts":"2022-04-03T15:28:41.052Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":21744} {"level":"info","ts":"2022-04-03T15:28:41.052Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":21744,"took":"419.118ยตs"} {"level":"info","ts":"2022-04-03T15:30:01.923Z","caller":"traceutil/trace.go:171","msg":"trace[371549018] linearizableReadLoop","detail":"{readStateIndex:28253; appliedIndex:28253; }","duration":"467.202116ms","start":"2022-04-03T15:30:01.456Z","end":"2022-04-03T15:30:01.923Z","steps":["trace[371549018] 'read index received' (duration: 467.126548ms)","trace[371549018] 'applied index is now lower than readState.Index' (duration: 21.93ยตs)"],"step_count":2} {"level":"warn","ts":"2022-04-03T15:30:01.925Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"469.101101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:6"} {"level":"info","ts":"2022-04-03T15:30:01.925Z","caller":"traceutil/trace.go:171","msg":"trace[1234850896] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:22010; }","duration":"469.238338ms","start":"2022-04-03T15:30:01.456Z","end":"2022-04-03T15:30:01.925Z","steps":["trace[1234850896] 'agreement among raft nodes before linearized 
reading' (duration: 467.585894ms)"],"step_count":1} {"level":"warn","ts":"2022-04-03T15:30:01.926Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-03T15:30:01.456Z","time spent":"470.266332ms","remote":"127.0.0.1:52876","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":30,"request content":"key:\"/registry/health\" "} {"level":"info","ts":"2022-04-03T15:30:17.625Z","caller":"traceutil/trace.go:171","msg":"trace[1619078397] linearizableReadLoop","detail":"{readStateIndex:28268; appliedIndex:28268; }","duration":"168.052713ms","start":"2022-04-03T15:30:17.457Z","end":"2022-04-03T15:30:17.625Z","steps":["trace[1619078397] 'read index received' (duration: 167.964853ms)","trace[1619078397] 'applied index is now lower than readState.Index' (duration: 49.308ยตs)"],"step_count":2} {"level":"warn","ts":"2022-04-03T15:30:17.625Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"168.647273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:6"} {"level":"info","ts":"2022-04-03T15:30:17.625Z","caller":"traceutil/trace.go:171","msg":"trace[911978158] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:22022; }","duration":"168.760625ms","start":"2022-04-03T15:30:17.456Z","end":"2022-04-03T15:30:17.625Z","steps":["trace[911978158] 'agreement among raft nodes before linearized reading' (duration: 168.524491ms)"],"step_count":1} {"level":"info","ts":"2022-04-03T15:33:41.055Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":21954} {"level":"info","ts":"2022-04-03T15:33:41.056Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":21954,"took":"394.464ยตs"} {"level":"info","ts":"2022-04-03T15:38:41.063Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":22163} {"level":"info","ts":"2022-04-03T15:38:41.063Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":22163,"took":"387.619ยตs"} {"level":"info","ts":"2022-04-03T15:43:41.070Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":22373} {"level":"info","ts":"2022-04-03T15:43:41.071Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":22373,"took":"638.978ยตs"} {"level":"info","ts":"2022-04-03T15:48:41.072Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":22583} {"level":"info","ts":"2022-04-03T15:48:41.073Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":22583,"took":"368.483ยตs"} {"level":"info","ts":"2022-04-03T15:53:41.079Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":22792} {"level":"info","ts":"2022-04-03T15:53:41.079Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":22792,"took":"357.378ยตs"} {"level":"info","ts":"2022-04-03T15:58:41.089Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":23003} {"level":"info","ts":"2022-04-03T15:58:41.090Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":23003,"took":"457.25ยตs"} {"level":"info","ts":"2022-04-03T15:58:52.690Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"} 
{"level":"info","ts":"2022-04-03T15:58:52.692Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"minikube","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]} WARNING: 2022/04/03 15:58:52 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... WARNING: 2022/04/03 15:58:53 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting... {"level":"info","ts":"2022-04-03T15:58:53.027Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"} {"level":"info","ts":"2022-04-03T15:58:53.030Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"} {"level":"info","ts":"2022-04-03T15:58:53.032Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"} {"level":"info","ts":"2022-04-03T15:58:53.032Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"minikube","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]} * * ==> kernel <== * 02:13:02 up 11:26, 0 users, load average: 0.57, 0.14, 0.05 Linux minikube 5.10.104-linuxkit #1 SMP Wed Mar 9 19:05:23 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [cbc2ac022e6b] <== * W0416 02:12:49.258015 1 genericapiserver.go:538] Skipping API node.k8s.io/v1alpha1 because it has no resources. W0416 02:12:49.263696 1 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources. W0416 02:12:49.263741 1 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0416 02:12:49.265240 1 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1beta1 because it has no resources. W0416 02:12:49.265281 1 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W0416 02:12:49.269461 1 genericapiserver.go:538] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0416 02:12:49.274042 1 genericapiserver.go:538] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. W0416 02:12:49.278321 1 genericapiserver.go:538] Skipping API apps/v1beta2 because it has no resources. W0416 02:12:49.278414 1 genericapiserver.go:538] Skipping API apps/v1beta1 because it has no resources. W0416 02:12:49.279997 1 genericapiserver.go:538] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. I0416 02:12:49.283740 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. 
* 
* ==> kernel <==
* 
02:13:02 up 11:26, 0 users, load average: 0.57, 0.14, 0.05
Linux minikube 5.10.104-linuxkit #1 SMP Wed Mar 9 19:05:23 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
* 
* ==> kube-apiserver [cbc2ac022e6b] <==
* 
W0416 02:12:49.258015 1 genericapiserver.go:538] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0416 02:12:49.263696 1 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0416 02:12:49.263741 1 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0416 02:12:49.265240 1 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0416 02:12:49.265281 1 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0416 02:12:49.269461 1 genericapiserver.go:538] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0416 02:12:49.274042 1 genericapiserver.go:538] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0416 02:12:49.278321 1 genericapiserver.go:538] Skipping API apps/v1beta2 because it has no resources.
W0416 02:12:49.278414 1 genericapiserver.go:538] Skipping API apps/v1beta1 because it has no resources.
W0416 02:12:49.279997 1 genericapiserver.go:538] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I0416 02:12:49.283740 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0416 02:12:49.283791 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0416 02:12:49.341078 1 genericapiserver.go:538] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0416 02:12:50.307446 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0416 02:12:50.307633 1 secure_serving.go:266] Serving securely on [::]:8443
I0416 02:12:50.307633 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0416 02:12:50.307977 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0416 02:12:50.308098 1 apf_controller.go:317] Starting API Priority and Fairness config controller
I0416 02:12:50.308284 1 controller.go:83] Starting OpenAPI AggregationController
I0416 02:12:50.310284 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0416 02:12:50.310414 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0416 02:12:50.310569 1 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0416 02:12:50.312263 1 autoregister_controller.go:141] Starting autoregister controller
I0416 02:12:50.312294 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0416 02:12:50.312588 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0416 02:12:50.312760 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0416 02:12:50.315096 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0416 02:12:50.315362 1 available_controller.go:491] Starting AvailableConditionController
I0416 02:12:50.315384 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0416 02:12:50.307932 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0416 02:12:50.320401 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0416 02:12:50.307939 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0416 02:12:50.321018 1 controller.go:85] Starting OpenAPI controller
I0416 02:12:50.321050 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0416 02:12:50.321074 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0416 02:12:50.321129 1 crd_finalizer.go:266] Starting CRDFinalizer
I0416 02:12:50.321151 1 naming_controller.go:291] Starting NamingConditionController
I0416 02:12:50.321282 1 establishing_controller.go:76] Starting EstablishingController
I0416 02:12:50.321430 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0416 02:12:50.334770 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0416 02:12:50.451857 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0416 02:12:50.531233 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0416 02:12:50.531365 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0416 02:12:50.531485 1 cache.go:39] Caches are synced for autoregister controller
I0416 02:12:50.532837 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0416 02:12:50.533210 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0416 02:12:50.533410 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0416 02:12:50.545774 1 shared_informer.go:247] Caches are synced for node_authorizer
I0416 02:12:51.332259 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0416 02:12:51.332381 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0416 02:12:51.345027 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0416 02:12:53.034841 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0416 02:12:53.051800 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0416 02:12:53.360408 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0416 02:12:53.537119 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0416 02:12:53.554149 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0416 02:12:54.539881 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
E0416 02:12:56.735553 1 storage.go:441] Address {172.17.0.4 0xc007a68f20 0xc004fecbd0} isn't valid (pod ip(s) doesn't match endpoint ip, skipping: [] vs 172.17.0.4 (kubernetes-dashboard/dashboard-metrics-scraper-58549894f-2cl9w))
E0416 02:12:56.735614 1 storage.go:451] Failed to find a valid address, skipping subset: &{[{172.17.0.4 0xc007a68f20 0xc004fecbd0}] [] [{ 8000 TCP }]}
I0416 02:12:57.162422 1 controller.go:611] quota admission added evaluator for: endpoints
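The two E0416 storage.go lines are a restart artifact: the Endpoints subset for the dashboard metrics scraper still lists pod IP 172.17.0.4, but no ready pod reports that IP yet, so the apiserver skips the subset until the endpoints controller re-syncs. One way to look at the object being complained about is via client-go; a sketch, where the Endpoints object name is inferred from the replica-set name in the message and the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Namespace and (inferred) service name from the error above.
	ep, err := cs.CoreV1().Endpoints("kubernetes-dashboard").Get(
		context.TODO(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, ss := range ep.Subsets {
		for _, a := range ss.Addresses {
			fmt.Println("ready address:", a.IP)
		}
		for _, a := range ss.NotReadyAddresses {
			fmt.Println("not-ready address:", a.IP)
		}
	}
}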
* 
* ==> kube-apiserver [e4fb554eb84c] <==
* 
W0403 15:58:52.727043 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.735121 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.735256 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.735452 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.735664 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.751180 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.751291 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.751425 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.751784 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.752193 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.752311 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.752391 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.752501 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.752594 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
E0403 15:58:52.752631 1 controller.go:189] Unable to remove endpoints from kubernetes service: Get "https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:8443: connect: connection refused
W0403 15:58:52.754957 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.829114 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.829211 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.829767 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.830584 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.830782 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.830999 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.831689 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.831810 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.832089 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.832218 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.832807 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.833020 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.833604 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.836236 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.836544 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.837590 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.837720 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.838811 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.838918 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.839505 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.839618 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.839892 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.840458 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.840907 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.840907 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.841082 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.841095 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.841187 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.841949 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.842002 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.842243 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.842477 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.842634 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.842692 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.842719 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.842951 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.843097 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.843071 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.843304 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.843562 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.844061 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.844188 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.845451 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0403 15:58:52.842811 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 0 }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
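This whole section is the expected shutdown order from the Apr 3 stop: etcd went away first, so the apiserver's etcd clients spin on "connection refused" until the process itself is killed, emitting dozens of warnings that differ only in the klog header. When reading dumps like this, a run-length collapse over the header-stripped text makes the repetition obvious; a Go sketch (the regex matches the klog severity/date/time/pid prefix visible above):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// klogHeader matches the leading "W0403 15:58:52.751180 1 " part of a klog
// line, which is the only thing that varies between the warnings above.
var klogHeader = regexp.MustCompile(`^[IWEF]\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ `)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	var prev string
	count := 0
	flush := func() {
		if count > 0 {
			fmt.Printf("%s (x%d)\n", prev, count)
		}
	}
	for sc.Scan() {
		cur := klogHeader.ReplaceAllString(sc.Text(), "")
		if cur == prev {
			count++
			continue
		}
		flush()
		prev, count = cur, 1
	}
	flush()
}

Run over this section it reduces the ~60 reconnect warnings to a single line with a repeat count, leaving the one distinct E0403 controller.go:189 error visible.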
* 
* ==> kube-controller-manager [49b1b184726c] <==
* 
I0403 06:31:09.599137 1 shared_informer.go:240] Waiting for caches to sync for resource quota
W0403 06:31:09.610738 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0403 06:31:09.614215 1 shared_informer.go:247] Caches are synced for namespace
I0403 06:31:09.617618 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0403 06:31:09.633928 1 shared_informer.go:247] Caches are synced for expand
I0403 06:31:09.642364 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0403 06:31:09.654713 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0403 06:31:09.658163 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0403 06:31:09.658212 1 shared_informer.go:247] Caches are synced for service account
I0403 06:31:09.658226 1 shared_informer.go:247] Caches are synced for crt configmap
I0403 06:31:09.658281 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0403 06:31:09.658306 1 shared_informer.go:247] Caches are synced for ReplicationController
I0403 06:31:09.658384 1 shared_informer.go:247] Caches are synced for PV protection
I0403 06:31:09.658389 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0403 06:31:09.658396 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0403 06:31:09.659571 1 shared_informer.go:247] Caches are synced for ephemeral
I0403 06:31:09.659616 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0403 06:31:09.659635 1 shared_informer.go:247] Caches are synced for endpoint
I0403 06:31:09.659638 1 shared_informer.go:247] Caches are synced for TTL
I0403 06:31:09.660367 1 shared_informer.go:247] Caches are synced for job
I0403 06:31:09.686958 1 shared_informer.go:247] Caches are synced for node
I0403 06:31:09.687030 1 range_allocator.go:173] Starting range CIDR allocator
I0403 06:31:09.687036 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0403 06:31:09.687042 1 shared_informer.go:247] Caches are synced for cidrallocator
I0403 06:31:09.691006 1 range_allocator.go:374] Set node minikube PodCIDR to [10.244.0.0/24]
I0403 06:31:09.692011 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0403 06:31:09.698861 1 shared_informer.go:247] Caches are synced for PVC protection
I0403 06:31:09.707503 1 shared_informer.go:247] Caches are synced for TTL after finished
I0403 06:31:09.707736 1 shared_informer.go:247] Caches are synced for deployment
I0403 06:31:09.707969 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0403 06:31:09.707797 1 shared_informer.go:247] Caches are synced for HPA
I0403 06:31:09.708665 1 shared_informer.go:247] Caches are synced for GC
I0403 06:31:09.708693 1 shared_informer.go:247] Caches are synced for cronjob
I0403 06:31:09.709074 1 shared_informer.go:247] Caches are synced for persistent volume
I0403 06:31:09.713718 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0403 06:31:09.795580 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-58549894f to 1"
I0403 06:31:09.797573 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-ccd587f44 to 1"
I0403 06:31:09.797656 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 1"
I0403 06:31:09.807204 1 shared_informer.go:247] Caches are synced for stateful set
I0403 06:31:09.808254 1 shared_informer.go:247] Caches are synced for daemon sets
I0403 06:31:09.808796 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-8994j"
I0403 06:31:09.816195 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-ccd587f44-qp4sf"
I0403 06:31:09.816249 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-58549894f-2cl9w"
I0403 06:31:09.887284 1 shared_informer.go:247] Caches are synced for disruption
I0403 06:31:09.887499 1 disruption.go:371] Sending events to api server.
I0403 06:31:09.887630 1 shared_informer.go:247] Caches are synced for taint
I0403 06:31:09.887796 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone:
W0403 06:31:09.887916 1 node_lifecycle_controller.go:1012] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0403 06:31:09.887947 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal.
I0403 06:31:09.888280 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0403 06:31:09.888664 1 shared_informer.go:247] Caches are synced for resource quota
I0403 06:31:09.888560 1 event.go:294] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0403 06:31:09.900422 1 shared_informer.go:247] Caches are synced for resource quota
I0403 06:31:09.906510 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4d2p9"
I0403 06:31:09.971048 1 shared_informer.go:247] Caches are synced for attach detach
I0403 06:31:10.318553 1 shared_informer.go:247] Caches are synced for garbage collector
I0403 06:31:10.356953 1 shared_informer.go:247] Caches are synced for garbage collector
I0403 06:31:10.357030 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
E0403 12:20:42.914995 1 resource_quota_controller.go:413] failed to discover resources: Unauthorized
W0403 12:20:42.915275 1 garbagecollector.go:709] failed to discover preferred resources: Unauthorized
* 
* ==> kube-controller-manager [f89cd7750749] <==
* 
I0416 02:12:47.684723 1 serving.go:348] Generated self-signed cert in-memory
I0416 02:12:48.039340 1 controllermanager.go:196] Version: v1.23.3
I0416 02:12:48.045012 1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0416 02:12:48.046159 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0416 02:12:48.046214 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0416 02:12:48.046254 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0416 02:12:52.542460 1 shared_informer.go:240] Waiting for caches to sync for tokens
I0416 02:12:52.640242 1 controllermanager.go:605] Started "garbagecollector"
W0416 02:12:52.640284 1 core.go:226] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
W0416 02:12:52.640327 1 controllermanager.go:583] Skipping "route"
I0416 02:12:52.641504 1 garbagecollector.go:146] Starting garbage collector controller
I0416 02:12:52.641533 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0416 02:12:52.641611 1 graph_builder.go:289] GraphBuilder running
I0416 02:12:52.642902 1 shared_informer.go:247] Caches are synced for tokens
I0416 02:12:52.651233 1 controllermanager.go:605] Started "ttl-after-finished"
I0416 02:12:52.652521 1 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0416 02:12:52.652546 1 shared_informer.go:240] Waiting for caches to sync for TTL after finished
I0416 02:12:52.657216 1 controllermanager.go:605] Started "endpoint"
I0416 02:12:52.657771 1 endpoints_controller.go:193] Starting endpoint controller
I0416 02:12:52.657816 1 shared_informer.go:240] Waiting for caches to sync for endpoint
I0416 02:12:52.738523 1 controllermanager.go:605] Started "replicationcontroller"
I0416 02:12:52.738955 1 replica_set.go:186] Starting replicationcontroller controller
I0416 02:12:52.739001 1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
I0416 02:12:52.744715 1 controllermanager.go:605] Started "serviceaccount"
I0416 02:12:52.744937 1 serviceaccounts_controller.go:117] Starting service account controller
I0416 02:12:52.744982 1 shared_informer.go:240] Waiting for caches to sync for service account
I0416 02:12:52.752280 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
I0416 02:12:52.752305 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0416 02:12:52.752331 1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
I0416 02:12:52.753942 1 controllermanager.go:605] Started "csrsigning"
I0416 02:12:52.753982 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
I0416 02:12:52.754036 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0416 02:12:52.754211 1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
I0416 02:12:52.754308 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
I0416 02:12:52.754335 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0416 02:12:52.754521 1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
I0416 02:12:52.754543 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0416 02:12:52.754729 1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
I0416 02:12:52.755041 1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
I0416 02:12:52.835947 1 node_ipam_controller.go:91] Sending events to api server.
* 
* ==> kube-proxy [656e3236273d] <==
* 
I0403 06:31:10.971896 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0403 06:31:10.971978 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0403 06:31:10.972031 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0403 06:31:11.018238 1 server_others.go:206] "Using iptables Proxier"
I0403 06:31:11.018282 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0403 06:31:11.018289 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0403 06:31:11.018304 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0403 06:31:11.020253 1 server.go:656] "Version info" version="v1.23.3"
I0403 06:31:11.022753 1 config.go:317] "Starting service config controller"
I0403 06:31:11.022905 1 config.go:226] "Starting endpoint slice config controller"
I0403 06:31:11.024285 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0403 06:31:11.024285 1 shared_informer.go:240] Waiting for caches to sync for service config
I0403 06:31:11.124745 1 shared_informer.go:247] Caches are synced for service config
I0403 06:31:11.124811 1 shared_informer.go:247] Caches are synced for endpoint slice config
* 
* ==> kube-proxy [9b30ecf1f51d] <==
* 
I0416 02:12:54.135173 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0416 02:12:54.135364 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0416 02:12:54.135414 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0416 02:12:54.435140 1 server_others.go:206] "Using iptables Proxier"
I0416 02:12:54.435348 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0416 02:12:54.435544 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0416 02:12:54.435582 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0416 02:12:54.439806 1 server.go:656] "Version info" version="v1.23.3"
I0416 02:12:54.444956 1 config.go:226] "Starting endpoint slice config controller"
I0416 02:12:54.445017 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0416 02:12:54.445201 1 config.go:317] "Starting service config controller"
I0416 02:12:54.445217 1 shared_informer.go:240] Waiting for caches to sync for service config
I0416 02:12:54.546187 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0416 02:12:54.546365 1 shared_informer.go:247] Caches are synced for service config
* 
* ==> kube-scheduler [d7d7a7a16fa6] <==
* 
I0403 06:30:51.715715 1 serving.go:348] Generated self-signed cert in-memory
W0403 06:30:53.604042 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0403 06:30:53.604087 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0403 06:30:53.604101 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0403 06:30:53.604123 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0403 06:30:53.620310 1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.3"
I0403 06:30:53.621727 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0403 06:30:53.621760 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0403 06:30:53.621736 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0403 06:30:53.621942 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
W0403 06:30:53.687038 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0403 06:30:53.687423 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0403 06:30:53.689003 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0403 06:30:53.689351 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0403 06:30:53.689422 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0403 06:30:53.689638 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0403 06:30:53.695141 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0403 06:30:53.695212 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0403 06:30:53.695262 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0403 06:30:53.695311 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0403 06:30:53.695381 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0403 06:30:53.695394 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0403 06:30:53.695430 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0403 06:30:53.695445 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0403 06:30:53.689223 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0403 06:30:53.695475 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0403 06:30:53.689295 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0403 06:30:53.695500 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0403 06:30:53.695810 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0403 06:30:53.695864 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0403 06:30:53.695836 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0403 06:30:53.695880 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0403 06:30:53.696554 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0403 06:30:53.696970 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0403 06:30:53.697143 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0403 06:30:53.697264 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0403 06:30:53.697379 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0403 06:30:53.697270 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0403 06:30:53.699335 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0403 06:30:53.699488 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0403 06:30:54.548349 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0403 06:30:54.548404 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0403 06:30:54.572341 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0403 06:30:54.572391 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0403 06:30:54.604667 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0403 06:30:54.604711 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0403 06:30:54.687208 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0403 06:30:54.687274 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0403 06:30:54.788493 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0403 06:30:54.788561 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0403 06:30:54.843127 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0403 06:30:54.843173 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0403 06:30:54.888277 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0403 06:30:54.888342 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0403 06:30:57.823891 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0403 15:58:52.743564 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0403 15:58:52.753696 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0403 15:58:52.754050 1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0416 02:12:50.352279 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0416 02:12:50.352802 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous. W0416 02:12:50.353080 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0416 02:12:50.539133 1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.3" I0416 02:12:50.541542 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259 I0416 02:12:50.541593 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0416 02:12:50.542810 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0416 02:12:50.541776 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0416 02:12:50.644551 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Sat 2022-04-16 02:12:33 UTC, end at Sat 2022-04-16 02:13:03 UTC. -- Apr 16 02:12:47 minikube kubelet[982]: E0416 02:12:47.864513 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:47 minikube kubelet[982]: E0416 02:12:47.964744 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:48 minikube kubelet[982]: E0416 02:12:48.065525 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:48 minikube kubelet[982]: E0416 02:12:48.234656 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:48 minikube kubelet[982]: E0416 02:12:48.335824 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:48 minikube kubelet[982]: E0416 02:12:48.436382 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:48 minikube kubelet[982]: E0416 02:12:48.537216 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:48 minikube kubelet[982]: E0416 02:12:48.638030 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:48 minikube kubelet[982]: E0416 02:12:48.739278 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:48 minikube kubelet[982]: E0416 02:12:48.841944 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:48 minikube kubelet[982]: E0416 02:12:48.942841 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:49 minikube kubelet[982]: E0416 02:12:49.043273 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:49 minikube kubelet[982]: E0416 02:12:49.144336 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:49 minikube kubelet[982]: E0416 02:12:49.244949 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:49 minikube kubelet[982]: E0416 02:12:49.345181 982 
kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:49 minikube kubelet[982]: E0416 02:12:49.445986 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:49 minikube kubelet[982]: E0416 02:12:49.547194 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:49 minikube kubelet[982]: E0416 02:12:49.648357 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:49 minikube kubelet[982]: E0416 02:12:49.749369 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:49 minikube kubelet[982]: E0416 02:12:49.849906 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:49 minikube kubelet[982]: E0416 02:12:49.950808 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:50 minikube kubelet[982]: E0416 02:12:50.051774 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:50 minikube kubelet[982]: E0416 02:12:50.152886 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:50 minikube kubelet[982]: E0416 02:12:50.253478 982 kubelet.go:2422] "Error getting node" err="node \"minikube\" not found" Apr 16 02:12:50 minikube kubelet[982]: I0416 02:12:50.354154 982 kuberuntime_manager.go:1098] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24" Apr 16 02:12:50 minikube kubelet[982]: I0416 02:12:50.354607 982 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}" Apr 16 02:12:50 minikube kubelet[982]: I0416 02:12:50.354909 982 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24" Apr 16 02:12:50 minikube kubelet[982]: I0416 02:12:50.545202 982 kubelet_node_status.go:108] "Node was previously registered" node="minikube" Apr 16 02:12:50 minikube kubelet[982]: I0416 02:12:50.545362 982 kubelet_node_status.go:73] "Successfully registered node" node="minikube" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.235761 982 apiserver.go:52] "Watching apiserver" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.246318 982 topology_manager.go:200] "Topology Admit Handler" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.246540 982 topology_manager.go:200] "Topology Admit Handler" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.250178 982 topology_manager.go:200] "Topology Admit Handler" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.250889 982 topology_manager.go:200] "Topology Admit Handler" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.251425 982 topology_manager.go:200] "Topology Admit Handler" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.363147 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxnr\" (UniqueName: \"kubernetes.io/projected/68d40af5-0416-49ca-b3e6-b405dfc94cb3-kube-api-access-bcxnr\") pod \"dashboard-metrics-scraper-58549894f-2cl9w\" (UID: \"68d40af5-0416-49ca-b3e6-b405dfc94cb3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-58549894f-2cl9w" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.363254 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zzcb\" (UniqueName: \"kubernetes.io/projected/6b42dd72-545f-4b4b-b5ee-2010af1f9b07-kube-api-access-8zzcb\") pod 
\"coredns-64897985d-8994j\" (UID: \"6b42dd72-545f-4b4b-b5ee-2010af1f9b07\") " pod="kube-system/coredns-64897985d-8994j" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.363313 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj8pp\" (UniqueName: \"kubernetes.io/projected/c737542c-6d23-45ef-bc2c-adb5b899f700-kube-api-access-zj8pp\") pod \"storage-provisioner\" (UID: \"c737542c-6d23-45ef-bc2c-adb5b899f700\") " pod="kube-system/storage-provisioner" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.363474 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f179e749-94c0-46ab-a66b-1fd5cd81abcb-lib-modules\") pod \"kube-proxy-4d2p9\" (UID: \"f179e749-94c0-46ab-a66b-1fd5cd81abcb\") " pod="kube-system/kube-proxy-4d2p9" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.363678 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpl6t\" (UniqueName: \"kubernetes.io/projected/f179e749-94c0-46ab-a66b-1fd5cd81abcb-kube-api-access-fpl6t\") pod \"kube-proxy-4d2p9\" (UID: \"f179e749-94c0-46ab-a66b-1fd5cd81abcb\") " pod="kube-system/kube-proxy-4d2p9" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.363877 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c737542c-6d23-45ef-bc2c-adb5b899f700-tmp\") pod \"storage-provisioner\" (UID: \"c737542c-6d23-45ef-bc2c-adb5b899f700\") " pod="kube-system/storage-provisioner" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.363957 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/68d40af5-0416-49ca-b3e6-b405dfc94cb3-tmp-volume\") pod \"dashboard-metrics-scraper-58549894f-2cl9w\" (UID: \"68d40af5-0416-49ca-b3e6-b405dfc94cb3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-58549894f-2cl9w" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.364024 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7svz\" (UniqueName: \"kubernetes.io/projected/742cee19-51d0-4845-ba7f-61fdfdddf1c5-kube-api-access-c7svz\") pod \"kubernetes-dashboard-ccd587f44-qp4sf\" (UID: \"742cee19-51d0-4845-ba7f-61fdfdddf1c5\") " pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-qp4sf" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.364087 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f179e749-94c0-46ab-a66b-1fd5cd81abcb-kube-proxy\") pod \"kube-proxy-4d2p9\" (UID: \"f179e749-94c0-46ab-a66b-1fd5cd81abcb\") " pod="kube-system/kube-proxy-4d2p9" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.364149 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/742cee19-51d0-4845-ba7f-61fdfdddf1c5-tmp-volume\") pod \"kubernetes-dashboard-ccd587f44-qp4sf\" (UID: \"742cee19-51d0-4845-ba7f-61fdfdddf1c5\") " pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-qp4sf" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.364195 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b42dd72-545f-4b4b-b5ee-2010af1f9b07-config-volume\") pod 
\"coredns-64897985d-8994j\" (UID: \"6b42dd72-545f-4b4b-b5ee-2010af1f9b07\") " pod="kube-system/coredns-64897985d-8994j" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.364246 982 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f179e749-94c0-46ab-a66b-1fd5cd81abcb-xtables-lock\") pod \"kube-proxy-4d2p9\" (UID: \"f179e749-94c0-46ab-a66b-1fd5cd81abcb\") " pod="kube-system/kube-proxy-4d2p9" Apr 16 02:12:51 minikube kubelet[982]: I0416 02:12:51.364284 982 reconciler.go:157] "Reconciler: start to sync state" Apr 16 02:12:52 minikube kubelet[982]: I0416 02:12:52.203683 982 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e26e91463cc62a95d10873bb3c26c7fa56cbcc8ed6123dcf5ed0204a6f1aab1d" Apr 16 02:12:52 minikube kubelet[982]: I0416 02:12:52.637393 982 request.go:665] Waited for 1.161901582s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/coredns/token Apr 16 02:12:53 minikube kubelet[982]: I0416 02:12:53.961914 982 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-2cl9w through plugin: invalid network status for" Apr 16 02:12:54 minikube kubelet[982]: I0416 02:12:54.034041 982 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2eb1c469e54f595c9e166ab94ba03c3e46afb98142770c5e3d0f36381fb58e4c" Apr 16 02:12:54 minikube kubelet[982]: I0416 02:12:54.158419 982 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-ccd587f44-qp4sf through plugin: invalid network status for" Apr 16 02:12:54 minikube kubelet[982]: I0416 02:12:54.451132 982 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-8994j through plugin: invalid network status for" Apr 16 02:12:54 minikube kubelet[982]: I0416 02:12:54.459193 982 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8e1c979eaf07953e5215b39efe076138941e4f0238082ee144e47e7454940013" Apr 16 02:12:54 minikube kubelet[982]: I0416 02:12:54.532227 982 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-ccd587f44-qp4sf through plugin: invalid network status for" Apr 16 02:12:55 minikube kubelet[982]: I0416 02:12:55.054322 982 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e1ce4f71baab28b5289f9fed8c94c365c1b867a236a2804a016f36a32645e494" Apr 16 02:12:56 minikube kubelet[982]: I0416 02:12:56.143403 982 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-8994j through plugin: invalid network status for" Apr 16 02:12:56 minikube kubelet[982]: I0416 02:12:56.234142 982 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-ccd587f44-qp4sf through plugin: invalid network status for" Apr 16 02:12:56 minikube kubelet[982]: I0416 02:12:56.358482 982 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for 
kubernetes-dashboard/dashboard-metrics-scraper-58549894f-2cl9w through plugin: invalid network status for" * * ==> kubernetes-dashboard [4e7c279d10da] <== * 2022/04/16 02:12:55 Using namespace: kubernetes-dashboard 2022/04/16 02:12:55 Using in-cluster config to connect to apiserver 2022/04/16 02:12:55 Using secret token for csrf signing 2022/04/16 02:12:55 Initializing csrf token from kubernetes-dashboard-csrf secret 2022/04/16 02:12:56 Successful initial request to the apiserver, version: v1.23.3 2022/04/16 02:12:56 Generating JWE encryption key 2022/04/16 02:12:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting 2022/04/16 02:12:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard 2022/04/16 02:12:56 Initializing JWE encryption key from synchronized object 2022/04/16 02:12:56 Creating in-cluster Sidecar client 2022/04/16 02:12:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds. 2022/04/16 02:12:56 Serving insecurely on HTTP port: 9090 2022/04/16 02:12:55 Starting overwatch * * ==> kubernetes-dashboard [8a0055a764c1] <== * 2022/04/03 06:31:13 Using namespace: kubernetes-dashboard 2022/04/03 06:31:13 Using in-cluster config to connect to apiserver 2022/04/03 06:31:13 Using secret token for csrf signing 2022/04/03 06:31:13 Initializing csrf token from kubernetes-dashboard-csrf secret 2022/04/03 06:31:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf 2022/04/03 06:31:13 Successful initial request to the apiserver, version: v1.23.3 2022/04/03 06:31:13 Generating JWE encryption key 2022/04/03 06:31:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting 2022/04/03 06:31:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard 2022/04/03 06:31:13 Initializing JWE encryption key from synchronized object 2022/04/03 06:31:13 Creating in-cluster Sidecar client 2022/04/03 06:31:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds. 2022/04/03 06:31:13 Serving insecurely on HTTP port: 9090 2022/04/03 06:31:43 Successful request to sidecar 2022/04/03 06:31:13 Starting overwatch * * ==> storage-provisioner [57477daa8423] <== * I0416 02:12:54.539975 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... * * ==> storage-provisioner [8db28b80f344] <== * I0403 06:31:41.675643 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0403 06:31:41.692391 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0403 06:31:41.693386 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0403 06:31:41.715352 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0403 06:31:41.715531 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_2f65103c-5a43-475f-b615-5d87535447e1! 
I0403 06:31:41.715445 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1699ecec-e503-420c-a7a8-65c159e2436d", APIVersion:"v1", ResourceVersion:"549", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_2f65103c-5a43-475f-b615-5d87535447e1 became leader
I0403 06:31:41.817062 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_2f65103c-5a43-475f-b615-5d87535447e1!
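
Two patterns in the logs above are startup noise rather than persistent failures. The kube-scheduler's burst of 'forbidden: User "system:kube-scheduler" cannot list resource ...' warnings (06:30:53-06:30:54) comes from its informers starting before RBAC bootstrapping finishes, and it ends once "Caches are synced" is logged at 06:30:57. The kubelet's repeated "Error getting node" err="node \"minikube\" not found" messages likewise stop at 02:12:50 with "Successfully registered node". The dashboard's "Metric client health check failed" message also self-heals: the 2022/04/03 run shows "Successful request to sidecar" 30 seconds later.

If the scheduler's extension-apiserver-authentication warning did persist, the log itself names the fix. A minimal sketch of how one might verify and apply it; the rolebinding name scheduler-auth-reader is a placeholder (not taken from these logs), and binding to the system:kube-scheduler user stands in for the template's --serviceaccount because the scheduler authenticates as a user here:

  # Check whether the scheduler identity can read the configmap (requires impersonation rights)
  kubectl auth can-i get configmaps -n kube-system --as=system:kube-scheduler

  # Grant the role the log message recommends, bound to the scheduler user
  kubectl -n kube-system create rolebinding scheduler-auth-reader \
    --role=extension-apiserver-authentication-reader \
    --user=system:kube-scheduler

In this capture the warning cleared on its own, so the rolebinding is shown only as the follow-up the log message points to.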