==> Audit <==
|------------|------------------------------------------|----------|-------|---------|----------------------|----------------------|
| Command    | Args                                     | Profile  | User  | Version | Start Time           | End Time             |
|------------|------------------------------------------|----------|-------|---------|----------------------|----------------------|
| docker-env |                                          | minikube | cirix | v1.32.0 | 18 Apr 24 19:55 CEST |                      |
| start      |                                          | minikube | cirix | v1.32.0 | 18 Apr 24 19:59 CEST | 18 Apr 24 20:00 CEST |
| stop       |                                          | minikube | cirix | v1.32.0 | 18 Apr 24 20:11 CEST | 18 Apr 24 20:11 CEST |
| docker-env |                                          | minikube | cirix | v1.32.0 | 19 Apr 24 10:41 CEST |                      |
| start      |                                          | minikube | cirix | v1.32.0 | 19 Apr 24 12:05 CEST | 19 Apr 24 12:06 CEST |
| stop       |                                          | minikube | cirix | v1.32.0 | 19 Apr 24 15:22 CEST | 19 Apr 24 15:22 CEST |
| start      |                                          | minikube | cirix | v1.32.0 | 19 Apr 24 20:34 CEST | 19 Apr 24 20:35 CEST |
| addons     | enable metrics-server                    | minikube | cirix | v1.32.0 | 19 Apr 24 20:36 CEST | 19 Apr 24 20:36 CEST |
| start      |                                          | minikube | cirix | v1.32.0 | 20 Apr 24 11:44 CEST | 20 Apr 24 11:45 CEST |
| docker-env |                                          | minikube | cirix | v1.33.0 | 20 Apr 24 11:58 CEST | 20 Apr 24 11:58 CEST |
| docker-env |                                          | minikube | cirix | v1.33.0 | 20 Apr 24 18:02 CEST |                      |
| docker-env |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 10:12 CEST |                      |
| start      |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 10:40 CEST | 21 Apr 24 10:44 CEST |
| stop       |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 10:44 CEST | 21 Apr 24 10:44 CEST |
| delete     | minikube                                 | minikube | cirix | v1.33.0 | 21 Apr 24 11:01 CEST |                      |
| delete     | -p minikube                              | minikube | cirix | v1.33.0 | 21 Apr 24 11:01 CEST | 21 Apr 24 11:01 CEST |
| start      | --cpus=4 --memory=8GB --vm-driver=qemu2  | minikube | cirix | v1.33.0 | 21 Apr 24 11:03 CEST | 21 Apr 24 11:06 CEST |
| docker-env |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 11:37 CEST | 21 Apr 24 11:37 CEST |
| addons     | list                                     | minikube | cirix | v1.33.0 | 21 Apr 24 11:38 CEST | 21 Apr 24 11:38 CEST |
| addons     | enable ingress                           | minikube | cirix | v1.33.0 | 21 Apr 24 11:38 CEST |                      |
| stop       |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 11:51 CEST | 21 Apr 24 11:51 CEST |
| docker-env |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 11:51 CEST |                      |
| docker-env |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 13:46 CEST |                      |
| start      | --cpus=4 --memory=8GB --vm-driver=qemu2  | minikube | cirix | v1.33.0 | 21 Apr 24 13:47 CEST | 21 Apr 24 13:47 CEST |
| addons     | status                                   | minikube | cirix | v1.33.0 | 21 Apr 24 13:55 CEST | 21 Apr 24 13:55 CEST |
| addons     | list                                     | minikube | cirix | v1.33.0 | 21 Apr 24 13:55 CEST | 21 Apr 24 13:55 CEST |
| addons     | disable ingress                          | minikube | cirix | v1.33.0 | 21 Apr 24 13:55 CEST | 21 Apr 24 13:55 CEST |
| addons     | enable ingress                           | minikube | cirix | v1.33.0 | 21 Apr 24 13:57 CEST |                      |
| ssh        |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:15 CEST |                      |
| ip         |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:16 CEST | 21 Apr 24 14:16 CEST |
| ip         |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:17 CEST | 21 Apr 24 14:17 CEST |
| stop       |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:17 CEST | 21 Apr 24 14:18 CEST |
| start      |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:18 CEST | 21 Apr 24 14:18 CEST |
| addons     | list                                     | minikube | cirix | v1.33.0 | 21 Apr 24 14:22 CEST | 21 Apr 24 14:22 CEST |
| addons     | disable ingress                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:26 CEST | 21 Apr 24 14:26 CEST |
| addons     | status                                   | minikube | cirix | v1.33.0 | 21 Apr 24 14:26 CEST | 21 Apr 24 14:26 CEST |
| addons     | list                                     | minikube | cirix | v1.33.0 | 21 Apr 24 14:26 CEST | 21 Apr 24 14:26 CEST |
| addons     | enable olm                               | minikube | cirix | v1.33.0 | 21 Apr 24 14:30 CEST | 21 Apr 24 14:30 CEST |
| addons     | disable olm                              | minikube | cirix | v1.33.0 | 21 Apr 24 14:30 CEST | 21 Apr 24 14:30 CEST |
| addons     | enable ingree                            | minikube | cirix | v1.33.0 | 21 Apr 24 14:31 CEST |                      |
| addons     | disable ingess                           | minikube | cirix | v1.33.0 | 21 Apr 24 14:31 CEST |                      |
| addons     | disabel ingerss                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:31 CEST | 21 Apr 24 14:31 CEST |
| addons     | disable ingerss                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:31 CEST |                      |
| addons     | list                                     | minikube | cirix | v1.33.0 | 21 Apr 24 14:31 CEST | 21 Apr 24 14:31 CEST |
| addons     | disable ingress                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:32 CEST | 21 Apr 24 14:32 CEST |
| ssh        |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:32 CEST |                      |
| ssh        |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 14:49 CEST |                      |
| stop       |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 15:04 CEST | 21 Apr 24 15:04 CEST |
| docker-env |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 18:14 CEST |                      |
| start      |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 22:01 CEST | 21 Apr 24 22:01 CEST |
| stop       |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 22:06 CEST | 21 Apr 24 22:06 CEST |
| delete     | -p minikube                              | minikube | cirix | v1.33.0 | 21 Apr 24 22:06 CEST | 21 Apr 24 22:06 CEST |
| start      | --cpus=6 --memory=12GB --vm-driver=qemu2 | minikube | cirix | v1.33.0 | 21 Apr 24 22:08 CEST | 21 Apr 24 22:09 CEST |
| ssh        |                                          | minikube | cirix | v1.33.0 | 21 Apr 24 22:23 CEST | 21 Apr 24 22:24 CEST |
| docker-env |                                          | minikube | cirix | v1.33.0 | 22 Apr 24 07:50 CEST |                      |
| docker-env |                                          | minikube | cirix | v1.33.0 | 22 Apr 24 21:24 CEST |                      |
| delete     | --all                                    | minikube | cirix | v1.33.0 | 22 Apr 24 21:25 CEST | 22 Apr 24 21:25 CEST |
| start      | --cpus=6 --memory=12GB --vm-driver=qemu2 | minikube | cirix | v1.33.0 | 22 Apr 24 21:25 CEST |                      |
| delete     | --all                                    | minikube | cirix | v1.33.0 | 22 Apr 24 21:26 CEST | 22 Apr 24 21:26 CEST |
| start      | --cpus=6 --memory=12GB --vm-driver=qemu2 | minikube | cirix | v1.33.0 | 22 Apr 24 21:29 CEST | 22 Apr 24 21:30 CEST |
|------------|------------------------------------------|----------|-------|---------|----------------------|----------------------|

==> Last Start <==
Log file created at: 2024/04/22 21:29:32
Running on machine: Nikolaoss-MacBook-Pro
Binary: Built with gc go1.22.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0422 21:29:32.467662 1839 out.go:291] Setting OutFile to fd 1 ...
I0422 21:29:32.468074 1839 out.go:343] isatty.IsTerminal(1) = true
I0422 21:29:32.468077 1839 out.go:304] Setting ErrFile to fd 2...
I0422 21:29:32.468081 1839 out.go:343] isatty.IsTerminal(2) = true
I0422 21:29:32.468313 1839 root.go:338] Updating PATH: /Users/cirix/.minikube/bin
I0422 21:29:32.469088 1839 out.go:298] Setting JSON to false
I0422 21:29:32.495115 1839 start.go:129] hostinfo: {"hostname":"Nikolaoss-MacBook-Pro.local","uptime":5808,"bootTime":1713808364,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"317da503-9504-51d1-9504-379a1e99c556"}
W0422 21:29:32.495214 1839 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0422 21:29:32.500860 1839 out.go:177] 😄 minikube v1.33.0 on Darwin 14.4.1 (arm64)
I0422 21:29:32.508881 1839 notify.go:220] Checking for updates...
I0422 21:29:32.509096 1839 driver.go:392] Setting default libvirt URI to qemu:///system
I0422 21:29:32.513780 1839 out.go:177] ✨ Using the qemu2 driver based on user configuration
I0422 21:29:32.521797 1839 start.go:297] selected driver: qemu2
I0422 21:29:32.521801 1839 start.go:901] validating driver "qemu2" against
I0422 21:29:32.521809 1839 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0422 21:29:32.521924 1839 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0422 21:29:32.525872 1839 out.go:177] 🌐 Automatically selected the socket_vmnet network
I0422 21:29:32.529808 1839 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
I0422 21:29:32.529851 1839 cni.go:84] Creating CNI manager for ""
I0422 21:29:32.529860 1839 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0422 21:29:32.529863 1839 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0422 21:29:32.529898 1839 start.go:340] cluster config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:12288 CPUs:6 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0422 21:29:32.535232 1839 iso.go:125] acquiring lock: {Name:mkc3b12b9e252924ab0e9a128e09ee994f40743f Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0422 21:29:32.543707 1839 out.go:177] 👍 Starting "minikube" primary control-plane node in "minikube" cluster
I0422 21:29:32.547746 1839 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0422 21:29:32.547767 1839 preload.go:147] Found local preload: /Users/cirix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0422 21:29:32.547774 1839 cache.go:56] Caching tarball of preloaded images
I0422 21:29:32.547866 1839 preload.go:173] Found /Users/cirix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0422 21:29:32.547870 1839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0422 21:29:32.548173 1839 profile.go:143] Saving config to /Users/cirix/.minikube/profiles/minikube/config.json ...
I0422 21:29:32.548195 1839 lock.go:35] WriteFile acquiring /Users/cirix/.minikube/profiles/minikube/config.json: {Name:mke19603d6c59d8ef5b120dbeeb89abad4e21a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0422 21:29:32.548451 1839 start.go:360] acquireMachinesLock for minikube: {Name:mkb835d30c707b601c2844cf34d200913fa46b8d Clock:{} Delay:500ms Timeout:13m0s Cancel:}
I0422 21:29:32.548487 1839 start.go:364] duration metric: took 31.5µs to acquireMachinesLock for "minikube"
I0422 21:29:32.548515 1839 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:12288 CPUs:6 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0422 21:29:32.548542 1839 start.go:125] createHost starting for "" (driver="qemu2")
I0422 21:29:32.556711 1839 out.go:204] 🔥 Creating qemu2 VM (CPUs=6, Memory=12288MB, Disk=20000MB) ...
I0422 21:29:32.578247 1839 start.go:159] libmachine.API.Create for "minikube" (driver="qemu2")
I0422 21:29:32.578302 1839 client.go:168] LocalClient.Create starting
I0422 21:29:32.578453 1839 main.go:141] libmachine: Reading certificate data from /Users/cirix/.minikube/certs/ca.pem
I0422 21:29:32.578516 1839 main.go:141] libmachine: Decoding PEM data...
I0422 21:29:32.578535 1839 main.go:141] libmachine: Parsing certificate...
I0422 21:29:32.578611 1839 main.go:141] libmachine: Reading certificate data from /Users/cirix/.minikube/certs/cert.pem
I0422 21:29:32.578644 1839 main.go:141] libmachine: Decoding PEM data...
I0422 21:29:32.578651 1839 main.go:141] libmachine: Parsing certificate...
I0422 21:29:32.579241 1839 main.go:141] libmachine: Downloading /Users/cirix/.minikube/cache/boot2docker.iso from file:///Users/cirix/.minikube/cache/iso/arm64/minikube-v1.33.0-arm64.iso...
I0422 21:29:32.756193 1839 main.go:141] libmachine: Creating SSH key...
I0422 21:29:32.845586 1839 main.go:141] libmachine: Creating Disk image...
I0422 21:29:32.845592 1839 main.go:141] libmachine: Creating 20000 MB hard disk image...
I0422 21:29:32.845860 1839 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/cirix/.minikube/machines/minikube/disk.qcow2.raw /Users/cirix/.minikube/machines/minikube/disk.qcow2
I0422 21:29:32.864899 1839 main.go:141] libmachine: STDOUT:
I0422 21:29:32.864918 1839 main.go:141] libmachine: STDERR:
I0422 21:29:32.864972 1839 main.go:141] libmachine: executing: qemu-img resize /Users/cirix/.minikube/machines/minikube/disk.qcow2 +20000M
I0422 21:29:32.880609 1839 main.go:141] libmachine: STDOUT: Image resized.
I0422 21:29:32.880622 1839 main.go:141] libmachine: STDERR:
I0422 21:29:32.880632 1839 main.go:141] libmachine: DONE writing to /Users/cirix/.minikube/machines/minikube/disk.qcow2.raw and /Users/cirix/.minikube/machines/minikube/disk.qcow2
I0422 21:29:32.880635 1839 main.go:141] libmachine: Starting QEMU VM...
I0422 21:29:32.880677 1839 main.go:141] libmachine: executing: /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client /opt/homebrew/var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 12288 -smp 6 -boot d -cdrom /Users/cirix/.minikube/machines/minikube/boot2docker.iso -qmp unix:/Users/cirix/.minikube/machines/minikube/monitor,server,nowait -pidfile /Users/cirix/.minikube/machines/minikube/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:41:d7:43:4b:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/cirix/.minikube/machines/minikube/disk.qcow2
I0422 21:29:32.934905 1839 main.go:141] libmachine: STDOUT:
I0422 21:29:32.934928 1839 main.go:141] libmachine: STDERR:
I0422 21:29:32.934930 1839 main.go:141] libmachine: Attempt 0
I0422 21:29:32.934947 1839 main.go:141] libmachine: Searching for 2a:41:d7:43:4b:c1 in /var/db/dhcpd_leases ...
I0422 21:29:32.935104 1839 main.go:141] libmachine: Found 20 entries in /var/db/dhcpd_leases!
I0422 21:29:32.935128 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.21 HWAddress:5a:6e:2e:89:27:16 ID:1,5a:6e:2e:89:27:16 Lease:0x66280b51}
I0422 21:29:32.935135 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.20 HWAddress:16:55:1f:ab:5d:67 ID:1,16:55:1f:ab:5d:67 Lease:0x6626c3d1}
I0422 21:29:32.935140 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.19 HWAddress:f6:2d:51:2c:c5:a6 ID:1,f6:2d:51:2c:c5:a6 Lease:0x662571b6}
I0422 21:29:32.935146 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.18 HWAddress:b2:57:86:a3:b9:15 ID:1,b2:57:86:a3:b9:15 Lease:0x6624d209}
I0422 21:29:32.935162 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.17 HWAddress:2a:a1:32:6d:77:d2 ID:1,2a:a1:32:6d:77:d2 Lease:0x65a3cd82}
I0422 21:29:32.935168 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.16 HWAddress:86:dc:41:fa:97:f2 ID:1,86:dc:41:fa:97:f2 Lease:0x65a3c802}
I0422 21:29:32.935173 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:96:3a:40:d:dc:ef ID:1,96:3a:40:d:dc:ef Lease:0x65a3c9f9}
I0422 21:29:32.935179 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:da:3:a:12:85:12 ID:1,da:3:a:12:85:12 Lease:0x659f0a80}
I0422 21:29:32.935188 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:da:a8:74:ed:6d:dd ID:1,da:a8:74:ed:6d:dd Lease:0x659f06e0}
I0422 21:29:32.935193 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:1e:31:65:9a:a8:d5 ID:1,1e:31:65:9a:a8:d5 Lease:0x658d612c}
I0422 21:29:32.935199 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:f2:bc:ea:c1:e9:c7 ID:1,f2:bc:ea:c1:e9:c7 Lease:0x6589e2bf}
I0422 21:29:32.935205 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:1a:27:55:6d:40:46 ID:1,1a:27:55:6d:40:46 Lease:0x657eb604}
I0422 21:29:32.935210 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:c2:7c:ea:c3:72:c4 ID:1,c2:7c:ea:c3:72:c4 Lease:0x6577ffdf}
I0422 21:29:32.935215 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:7a:16:4:3:33:95 ID:1,7a:16:4:3:33:95 Lease:0x65647291}
I0422 21:29:32.935221 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:c6:1d:24:dc:e6:c7 ID:1,c6:1d:24:dc:e6:c7 Lease:0x6562596c}
I0422 21:29:32.935233 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:c6:ea:b6:2c:ef:eb ID:1,c6:ea:b6:2c:ef:eb Lease:0x656107ca}
I0422 21:29:32.935239 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:32:ee:7a:88:a9:b5 ID:1,32:ee:7a:88:a9:b5 Lease:0x655dfe46}
I0422 21:29:32.935245 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:92:4:47:a0:a4:22 ID:1,92:4:47:a0:a4:22 Lease:0x655b2826}
I0422 21:29:32.935251 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:a8:b6:8a:d7:48 ID:1,be:a8:b6:8a:d7:48 Lease:0x655a5a03}
I0422 21:29:32.935260 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:31:50:ad:91:25 ID:1,36:31:50:ad:91:25 Lease:0x6557aee1}
I0422 21:29:34.936432 1839 main.go:141] libmachine: Attempt 1
I0422 21:29:34.936558 1839 main.go:141] libmachine: Searching for 2a:41:d7:43:4b:c1 in /var/db/dhcpd_leases ...
I0422 21:29:34.937129 1839 main.go:141] libmachine: Found 20 entries in /var/db/dhcpd_leases!
I0422 21:29:34.937174 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.21 HWAddress:5a:6e:2e:89:27:16 ID:1,5a:6e:2e:89:27:16 Lease:0x66280b51}
I0422 21:29:34.937200 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.20 HWAddress:16:55:1f:ab:5d:67 ID:1,16:55:1f:ab:5d:67 Lease:0x6626c3d1}
I0422 21:29:34.937223 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.19 HWAddress:f6:2d:51:2c:c5:a6 ID:1,f6:2d:51:2c:c5:a6 Lease:0x662571b6}
I0422 21:29:34.937247 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.18 HWAddress:b2:57:86:a3:b9:15 ID:1,b2:57:86:a3:b9:15 Lease:0x6624d209}
I0422 21:29:34.937268 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.17 HWAddress:2a:a1:32:6d:77:d2 ID:1,2a:a1:32:6d:77:d2 Lease:0x65a3cd82}
I0422 21:29:34.937290 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.16 HWAddress:86:dc:41:fa:97:f2 ID:1,86:dc:41:fa:97:f2 Lease:0x65a3c802}
I0422 21:29:34.937311 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:96:3a:40:d:dc:ef ID:1,96:3a:40:d:dc:ef Lease:0x65a3c9f9}
I0422 21:29:34.937330 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:da:3:a:12:85:12 ID:1,da:3:a:12:85:12 Lease:0x659f0a80}
I0422 21:29:34.937352 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:da:a8:74:ed:6d:dd ID:1,da:a8:74:ed:6d:dd Lease:0x659f06e0}
I0422 21:29:34.937372 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:1e:31:65:9a:a8:d5 ID:1,1e:31:65:9a:a8:d5 Lease:0x658d612c}
I0422 21:29:34.937392 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:f2:bc:ea:c1:e9:c7 ID:1,f2:bc:ea:c1:e9:c7 Lease:0x6589e2bf}
I0422 21:29:34.937411 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:1a:27:55:6d:40:46 ID:1,1a:27:55:6d:40:46 Lease:0x657eb604}
I0422 21:29:34.937432 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:c2:7c:ea:c3:72:c4 ID:1,c2:7c:ea:c3:72:c4 Lease:0x6577ffdf}
I0422 21:29:34.937451 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:7a:16:4:3:33:95 ID:1,7a:16:4:3:33:95 Lease:0x65647291}
I0422 21:29:34.937471 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:c6:1d:24:dc:e6:c7 ID:1,c6:1d:24:dc:e6:c7 Lease:0x6562596c}
I0422 21:29:34.937490 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:c6:ea:b6:2c:ef:eb ID:1,c6:ea:b6:2c:ef:eb Lease:0x656107ca}
I0422 21:29:34.937510 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:32:ee:7a:88:a9:b5 ID:1,32:ee:7a:88:a9:b5 Lease:0x655dfe46}
I0422 21:29:34.937528 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:92:4:47:a0:a4:22 ID:1,92:4:47:a0:a4:22 Lease:0x655b2826}
I0422 21:29:34.937548 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:a8:b6:8a:d7:48 ID:1,be:a8:b6:8a:d7:48 Lease:0x655a5a03}
I0422 21:29:34.937578 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:31:50:ad:91:25 ID:1,36:31:50:ad:91:25 Lease:0x6557aee1}
I0422 21:29:36.938694 1839 main.go:141] libmachine: Attempt 2
I0422 21:29:36.938719 1839 main.go:141] libmachine: Searching for 2a:41:d7:43:4b:c1 in /var/db/dhcpd_leases ...
I0422 21:29:36.939167 1839 main.go:141] libmachine: Found 20 entries in /var/db/dhcpd_leases!
I0422 21:29:36.939192 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.21 HWAddress:5a:6e:2e:89:27:16 ID:1,5a:6e:2e:89:27:16 Lease:0x66280b51}
I0422 21:29:36.939216 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.20 HWAddress:16:55:1f:ab:5d:67 ID:1,16:55:1f:ab:5d:67 Lease:0x6626c3d1}
I0422 21:29:36.939229 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.19 HWAddress:f6:2d:51:2c:c5:a6 ID:1,f6:2d:51:2c:c5:a6 Lease:0x662571b6}
I0422 21:29:36.939241 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.18 HWAddress:b2:57:86:a3:b9:15 ID:1,b2:57:86:a3:b9:15 Lease:0x6624d209}
I0422 21:29:36.939259 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.17 HWAddress:2a:a1:32:6d:77:d2 ID:1,2a:a1:32:6d:77:d2 Lease:0x65a3cd82}
I0422 21:29:36.939272 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.16 HWAddress:86:dc:41:fa:97:f2 ID:1,86:dc:41:fa:97:f2 Lease:0x65a3c802}
I0422 21:29:36.939284 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:96:3a:40:d:dc:ef ID:1,96:3a:40:d:dc:ef Lease:0x65a3c9f9}
I0422 21:29:36.939296 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:da:3:a:12:85:12 ID:1,da:3:a:12:85:12 Lease:0x659f0a80}
I0422 21:29:36.939308 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:da:a8:74:ed:6d:dd ID:1,da:a8:74:ed:6d:dd Lease:0x659f06e0}
I0422 21:29:36.939320 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:1e:31:65:9a:a8:d5 ID:1,1e:31:65:9a:a8:d5 Lease:0x658d612c}
I0422 21:29:36.939333 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:f2:bc:ea:c1:e9:c7 ID:1,f2:bc:ea:c1:e9:c7 Lease:0x6589e2bf}
I0422 21:29:36.939344 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:1a:27:55:6d:40:46 ID:1,1a:27:55:6d:40:46 Lease:0x657eb604}
I0422 21:29:36.939356 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:c2:7c:ea:c3:72:c4 ID:1,c2:7c:ea:c3:72:c4 Lease:0x6577ffdf}
I0422 21:29:36.939368 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:7a:16:4:3:33:95 ID:1,7a:16:4:3:33:95 Lease:0x65647291}
I0422 21:29:36.939380 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:c6:1d:24:dc:e6:c7 ID:1,c6:1d:24:dc:e6:c7 Lease:0x6562596c}
I0422 21:29:36.939409 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:c6:ea:b6:2c:ef:eb ID:1,c6:ea:b6:2c:ef:eb Lease:0x656107ca}
I0422 21:29:36.939421 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:32:ee:7a:88:a9:b5 ID:1,32:ee:7a:88:a9:b5 Lease:0x655dfe46}
I0422 21:29:36.939432 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:92:4:47:a0:a4:22 ID:1,92:4:47:a0:a4:22 Lease:0x655b2826}
I0422 21:29:36.939443 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:a8:b6:8a:d7:48 ID:1,be:a8:b6:8a:d7:48 Lease:0x655a5a03}
I0422 21:29:36.939453 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:31:50:ad:91:25 ID:1,36:31:50:ad:91:25 Lease:0x6557aee1}
I0422 21:29:38.940518 1839 main.go:141] libmachine: Attempt 3
I0422 21:29:38.940530 1839 main.go:141] libmachine: Searching for 2a:41:d7:43:4b:c1 in /var/db/dhcpd_leases ...
I0422 21:29:38.940749 1839 main.go:141] libmachine: Found 20 entries in /var/db/dhcpd_leases!
I0422 21:29:38.940761 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.21 HWAddress:5a:6e:2e:89:27:16 ID:1,5a:6e:2e:89:27:16 Lease:0x66280b51}
I0422 21:29:38.940765 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.20 HWAddress:16:55:1f:ab:5d:67 ID:1,16:55:1f:ab:5d:67 Lease:0x6626c3d1}
I0422 21:29:38.940770 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.19 HWAddress:f6:2d:51:2c:c5:a6 ID:1,f6:2d:51:2c:c5:a6 Lease:0x662571b6}
I0422 21:29:38.940774 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.18 HWAddress:b2:57:86:a3:b9:15 ID:1,b2:57:86:a3:b9:15 Lease:0x6624d209}
I0422 21:29:38.940778 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.17 HWAddress:2a:a1:32:6d:77:d2 ID:1,2a:a1:32:6d:77:d2 Lease:0x65a3cd82}
I0422 21:29:38.940788 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.16 HWAddress:86:dc:41:fa:97:f2 ID:1,86:dc:41:fa:97:f2 Lease:0x65a3c802}
I0422 21:29:38.940792 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:96:3a:40:d:dc:ef ID:1,96:3a:40:d:dc:ef Lease:0x65a3c9f9}
I0422 21:29:38.940796 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:da:3:a:12:85:12 ID:1,da:3:a:12:85:12 Lease:0x659f0a80}
I0422 21:29:38.940801 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:da:a8:74:ed:6d:dd ID:1,da:a8:74:ed:6d:dd Lease:0x659f06e0}
I0422 21:29:38.940807 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:1e:31:65:9a:a8:d5 ID:1,1e:31:65:9a:a8:d5 Lease:0x658d612c}
I0422 21:29:38.940811 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:f2:bc:ea:c1:e9:c7 ID:1,f2:bc:ea:c1:e9:c7 Lease:0x6589e2bf}
I0422 21:29:38.940815 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:1a:27:55:6d:40:46 ID:1,1a:27:55:6d:40:46 Lease:0x657eb604}
I0422 21:29:38.940819 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:c2:7c:ea:c3:72:c4 ID:1,c2:7c:ea:c3:72:c4 Lease:0x6577ffdf}
I0422 21:29:38.940824 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:7a:16:4:3:33:95 ID:1,7a:16:4:3:33:95 Lease:0x65647291}
I0422 21:29:38.940828 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:c6:1d:24:dc:e6:c7 ID:1,c6:1d:24:dc:e6:c7 Lease:0x6562596c}
I0422 21:29:38.940832 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:c6:ea:b6:2c:ef:eb ID:1,c6:ea:b6:2c:ef:eb Lease:0x656107ca}
I0422 21:29:38.940836 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:32:ee:7a:88:a9:b5 ID:1,32:ee:7a:88:a9:b5 Lease:0x655dfe46}
I0422 21:29:38.940840 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:92:4:47:a0:a4:22 ID:1,92:4:47:a0:a4:22 Lease:0x655b2826}
I0422 21:29:38.940844 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:a8:b6:8a:d7:48 ID:1,be:a8:b6:8a:d7:48 Lease:0x655a5a03}
I0422 21:29:38.940848 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:31:50:ad:91:25 ID:1,36:31:50:ad:91:25 Lease:0x6557aee1}
I0422 21:29:40.941888 1839 main.go:141] libmachine: Attempt 4
I0422 21:29:40.941895 1839 main.go:141] libmachine: Searching for 2a:41:d7:43:4b:c1 in /var/db/dhcpd_leases ...
I0422 21:29:40.941976 1839 main.go:141] libmachine: Found 20 entries in /var/db/dhcpd_leases!
I0422 21:29:40.941984 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.21 HWAddress:5a:6e:2e:89:27:16 ID:1,5a:6e:2e:89:27:16 Lease:0x66280b51}
I0422 21:29:40.941999 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.20 HWAddress:16:55:1f:ab:5d:67 ID:1,16:55:1f:ab:5d:67 Lease:0x6626c3d1}
I0422 21:29:40.942004 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.19 HWAddress:f6:2d:51:2c:c5:a6 ID:1,f6:2d:51:2c:c5:a6 Lease:0x662571b6}
I0422 21:29:40.942009 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.18 HWAddress:b2:57:86:a3:b9:15 ID:1,b2:57:86:a3:b9:15 Lease:0x6624d209}
I0422 21:29:40.942013 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.17 HWAddress:2a:a1:32:6d:77:d2 ID:1,2a:a1:32:6d:77:d2 Lease:0x65a3cd82}
I0422 21:29:40.942017 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.16 HWAddress:86:dc:41:fa:97:f2 ID:1,86:dc:41:fa:97:f2 Lease:0x65a3c802}
I0422 21:29:40.942021 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:96:3a:40:d:dc:ef ID:1,96:3a:40:d:dc:ef Lease:0x65a3c9f9}
I0422 21:29:40.942025 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:da:3:a:12:85:12 ID:1,da:3:a:12:85:12 Lease:0x659f0a80}
I0422 21:29:40.942030 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:da:a8:74:ed:6d:dd ID:1,da:a8:74:ed:6d:dd Lease:0x659f06e0}
I0422 21:29:40.942034 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:1e:31:65:9a:a8:d5 ID:1,1e:31:65:9a:a8:d5 Lease:0x658d612c}
I0422 21:29:40.942038 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:f2:bc:ea:c1:e9:c7 ID:1,f2:bc:ea:c1:e9:c7 Lease:0x6589e2bf}
I0422 21:29:40.942042 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:1a:27:55:6d:40:46 ID:1,1a:27:55:6d:40:46 Lease:0x657eb604}
I0422 21:29:40.942046 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:c2:7c:ea:c3:72:c4 ID:1,c2:7c:ea:c3:72:c4 Lease:0x6577ffdf}
I0422 21:29:40.942050 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:7a:16:4:3:33:95 ID:1,7a:16:4:3:33:95 Lease:0x65647291}
I0422 21:29:40.942054 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:c6:1d:24:dc:e6:c7 ID:1,c6:1d:24:dc:e6:c7 Lease:0x6562596c}
I0422 21:29:40.942058 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:c6:ea:b6:2c:ef:eb ID:1,c6:ea:b6:2c:ef:eb Lease:0x656107ca}
I0422 21:29:40.942062 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:32:ee:7a:88:a9:b5 ID:1,32:ee:7a:88:a9:b5 Lease:0x655dfe46}
I0422 21:29:40.942067 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:92:4:47:a0:a4:22 ID:1,92:4:47:a0:a4:22 Lease:0x655b2826}
I0422 21:29:40.942078 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:a8:b6:8a:d7:48 ID:1,be:a8:b6:8a:d7:48 Lease:0x655a5a03}
I0422 21:29:40.942082 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:31:50:ad:91:25 ID:1,36:31:50:ad:91:25 Lease:0x6557aee1}
I0422 21:29:42.943188 1839 main.go:141] libmachine: Attempt 5
I0422 21:29:42.943207 1839 main.go:141] libmachine: Searching for 2a:41:d7:43:4b:c1 in /var/db/dhcpd_leases ...
I0422 21:29:42.943408 1839 main.go:141] libmachine: Found 20 entries in /var/db/dhcpd_leases!
I0422 21:29:42.943423 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.21 HWAddress:5a:6e:2e:89:27:16 ID:1,5a:6e:2e:89:27:16 Lease:0x66280b51}
I0422 21:29:42.943428 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.20 HWAddress:16:55:1f:ab:5d:67 ID:1,16:55:1f:ab:5d:67 Lease:0x6626c3d1}
I0422 21:29:42.943432 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.19 HWAddress:f6:2d:51:2c:c5:a6 ID:1,f6:2d:51:2c:c5:a6 Lease:0x662571b6}
I0422 21:29:42.943436 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.18 HWAddress:b2:57:86:a3:b9:15 ID:1,b2:57:86:a3:b9:15 Lease:0x6624d209}
I0422 21:29:42.943447 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.17 HWAddress:2a:a1:32:6d:77:d2 ID:1,2a:a1:32:6d:77:d2 Lease:0x65a3cd82}
I0422 21:29:42.943451 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.16 HWAddress:86:dc:41:fa:97:f2 ID:1,86:dc:41:fa:97:f2 Lease:0x65a3c802}
I0422 21:29:42.943455 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:96:3a:40:d:dc:ef ID:1,96:3a:40:d:dc:ef Lease:0x65a3c9f9}
I0422 21:29:42.943459 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:da:3:a:12:85:12 ID:1,da:3:a:12:85:12 Lease:0x659f0a80}
I0422 21:29:42.943463 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:da:a8:74:ed:6d:dd ID:1,da:a8:74:ed:6d:dd Lease:0x659f06e0}
I0422 21:29:42.943467 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:1e:31:65:9a:a8:d5 ID:1,1e:31:65:9a:a8:d5 Lease:0x658d612c}
I0422 21:29:42.943472 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:f2:bc:ea:c1:e9:c7 ID:1,f2:bc:ea:c1:e9:c7 Lease:0x6589e2bf}
I0422 21:29:42.943477 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:1a:27:55:6d:40:46 ID:1,1a:27:55:6d:40:46 Lease:0x657eb604}
I0422 21:29:42.943481 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:c2:7c:ea:c3:72:c4 ID:1,c2:7c:ea:c3:72:c4 Lease:0x6577ffdf}
I0422 21:29:42.943485 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:7a:16:4:3:33:95 ID:1,7a:16:4:3:33:95 Lease:0x65647291}
I0422 21:29:42.943489 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:c6:1d:24:dc:e6:c7 ID:1,c6:1d:24:dc:e6:c7 Lease:0x6562596c}
I0422 21:29:42.943502 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:c6:ea:b6:2c:ef:eb ID:1,c6:ea:b6:2c:ef:eb Lease:0x656107ca}
I0422 21:29:42.943507 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:32:ee:7a:88:a9:b5 ID:1,32:ee:7a:88:a9:b5 Lease:0x655dfe46}
I0422 21:29:42.943511 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:92:4:47:a0:a4:22 ID:1,92:4:47:a0:a4:22 Lease:0x655b2826}
I0422 21:29:42.943518 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:a8:b6:8a:d7:48 ID:1,be:a8:b6:8a:d7:48 Lease:0x655a5a03}
I0422 21:29:42.943523 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:31:50:ad:91:25 ID:1,36:31:50:ad:91:25 Lease:0x6557aee1}
I0422 21:29:44.944613 1839 main.go:141] libmachine: Attempt 6
I0422 21:29:44.944632 1839 main.go:141] libmachine: Searching for 2a:41:d7:43:4b:c1 in /var/db/dhcpd_leases ...
I0422 21:29:44.944935 1839 main.go:141] libmachine: Found 20 entries in /var/db/dhcpd_leases!
I0422 21:29:44.944953 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.21 HWAddress:5a:6e:2e:89:27:16 ID:1,5a:6e:2e:89:27:16 Lease:0x66280b51}
I0422 21:29:44.944958 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.20 HWAddress:16:55:1f:ab:5d:67 ID:1,16:55:1f:ab:5d:67 Lease:0x6626c3d1}
I0422 21:29:44.944963 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.19 HWAddress:f6:2d:51:2c:c5:a6 ID:1,f6:2d:51:2c:c5:a6 Lease:0x662571b6}
I0422 21:29:44.944967 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.18 HWAddress:b2:57:86:a3:b9:15 ID:1,b2:57:86:a3:b9:15 Lease:0x6624d209}
I0422 21:29:44.944972 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.17 HWAddress:2a:a1:32:6d:77:d2 ID:1,2a:a1:32:6d:77:d2 Lease:0x65a3cd82}
I0422 21:29:44.944987 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.16 HWAddress:86:dc:41:fa:97:f2 ID:1,86:dc:41:fa:97:f2 Lease:0x65a3c802}
I0422 21:29:44.944991 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:96:3a:40:d:dc:ef ID:1,96:3a:40:d:dc:ef Lease:0x65a3c9f9}
I0422 21:29:44.944995 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:da:3:a:12:85:12 ID:1,da:3:a:12:85:12 Lease:0x659f0a80}
I0422 21:29:44.945000 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:da:a8:74:ed:6d:dd ID:1,da:a8:74:ed:6d:dd Lease:0x659f06e0}
I0422 21:29:44.945004 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:1e:31:65:9a:a8:d5 ID:1,1e:31:65:9a:a8:d5 Lease:0x658d612c}
I0422 21:29:44.945008 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:f2:bc:ea:c1:e9:c7 ID:1,f2:bc:ea:c1:e9:c7 Lease:0x6589e2bf}
I0422 21:29:44.945012 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:1a:27:55:6d:40:46 ID:1,1a:27:55:6d:40:46 Lease:0x657eb604}
I0422 21:29:44.945016 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:c2:7c:ea:c3:72:c4 ID:1,c2:7c:ea:c3:72:c4 Lease:0x6577ffdf}
I0422 21:29:44.945020 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:7a:16:4:3:33:95 ID:1,7a:16:4:3:33:95 Lease:0x65647291}
I0422 21:29:44.945038 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:c6:1d:24:dc:e6:c7 ID:1,c6:1d:24:dc:e6:c7 Lease:0x6562596c}
I0422 21:29:44.945042 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:c6:ea:b6:2c:ef:eb ID:1,c6:ea:b6:2c:ef:eb Lease:0x656107ca}
I0422 21:29:44.945046 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:32:ee:7a:88:a9:b5 ID:1,32:ee:7a:88:a9:b5 Lease:0x655dfe46}
I0422 21:29:44.945050 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:92:4:47:a0:a4:22 ID:1,92:4:47:a0:a4:22 Lease:0x655b2826}
I0422 21:29:44.945054 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:a8:b6:8a:d7:48 ID:1,be:a8:b6:8a:d7:48 Lease:0x655a5a03}
I0422 21:29:44.945058 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:31:50:ad:91:25 ID:1,36:31:50:ad:91:25 Lease:0x6557aee1}
I0422 21:29:46.945724 1839 main.go:141] libmachine: Attempt 7
I0422 21:29:46.945755 1839 main.go:141] libmachine: Searching for 2a:41:d7:43:4b:c1 in /var/db/dhcpd_leases ...
I0422 21:29:46.946079 1839 main.go:141] libmachine: Found 21 entries in /var/db/dhcpd_leases!
I0422 21:29:46.946113 1839 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.22 HWAddress:2a:41:d7:43:4b:c1 ID:1,2a:41:d7:43:4b:c1 Lease:0x66280c2a}
I0422 21:29:46.946121 1839 main.go:141] libmachine: Found match: 2a:41:d7:43:4b:c1
I0422 21:29:46.946148 1839 main.go:141] libmachine: IP: 192.168.105.22
I0422 21:29:46.946156 1839 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.22)...
I0422 21:29:49.955671 1839 machine.go:94] provisionDockerMachine start ...
I0422 21:29:49.955796 1839 main.go:141] libmachine: Using SSH client type: native
I0422 21:29:49.956007 1839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bf280] 0x1051c1ae0 <nil> [] 0s} 192.168.105.22 22 <nil> <nil>}
I0422 21:29:49.956012 1839 main.go:141] libmachine: About to run SSH command: hostname
I0422 21:29:50.017655 1839 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0422 21:29:50.017664 1839 buildroot.go:166] provisioning hostname "minikube"
I0422 21:29:50.017734 1839 main.go:141] libmachine: Using SSH client type: native
I0422 21:29:50.017859 1839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bf280] 0x1051c1ae0 <nil> [] 0s} 192.168.105.22 22 <nil> <nil>}
I0422 21:29:50.017863 1839 main.go:141] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0422 21:29:50.084157 1839 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0422 21:29:50.084226 1839 main.go:141] libmachine: Using SSH client type: native
I0422 21:29:50.084348 1839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bf280] 0x1051c1ae0 <nil> [] 0s} 192.168.105.22 22 <nil> <nil>}
I0422 21:29:50.084355 1839 main.go:141] libmachine: About to run SSH command:
		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0422 21:29:50.143661 1839 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0422 21:29:50.143668 1839 buildroot.go:172] set auth options {CertDir:/Users/cirix/.minikube CaCertPath:/Users/cirix/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/cirix/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/cirix/.minikube/machines/server.pem ServerKeyPath:/Users/cirix/.minikube/machines/server-key.pem ClientKeyPath:/Users/cirix/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/cirix/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/cirix/.minikube}
I0422 21:29:50.143675 1839 buildroot.go:174] setting up certificates
I0422 21:29:50.143679 1839 provision.go:84] configureAuth start
I0422 21:29:50.143685 1839 provision.go:143] copyHostCerts
I0422 21:29:50.143737 1839 exec_runner.go:144] found /Users/cirix/.minikube/ca.pem, removing ...
I0422 21:29:50.143741 1839 exec_runner.go:203] rm: /Users/cirix/.minikube/ca.pem
I0422 21:29:50.143922 1839 exec_runner.go:151] cp: /Users/cirix/.minikube/certs/ca.pem --> /Users/cirix/.minikube/ca.pem (1074 bytes)
I0422 21:29:50.144167 1839 exec_runner.go:144] found /Users/cirix/.minikube/cert.pem, removing ...
I0422 21:29:50.144169 1839 exec_runner.go:203] rm: /Users/cirix/.minikube/cert.pem
I0422 21:29:50.144269 1839 exec_runner.go:151] cp: /Users/cirix/.minikube/certs/cert.pem --> /Users/cirix/.minikube/cert.pem (1119 bytes)
I0422 21:29:50.144407 1839 exec_runner.go:144] found /Users/cirix/.minikube/key.pem, removing ...
I0422 21:29:50.144408 1839 exec_runner.go:203] rm: /Users/cirix/.minikube/key.pem
I0422 21:29:50.144469 1839 exec_runner.go:151] cp: /Users/cirix/.minikube/certs/key.pem --> /Users/cirix/.minikube/key.pem (1679 bytes)
I0422 21:29:50.144582 1839 provision.go:117] generating server cert: /Users/cirix/.minikube/machines/server.pem ca-key=/Users/cirix/.minikube/certs/ca.pem private-key=/Users/cirix/.minikube/certs/ca-key.pem org=cirix.minikube san=[127.0.0.1 192.168.105.22 localhost minikube]
I0422 21:29:50.284650 1839 provision.go:177] copyRemoteCerts
I0422 21:29:50.284703 1839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0422 21:29:50.284710 1839 sshutil.go:53] new ssh client: &{IP:192.168.105.22 Port:22 SSHKeyPath:/Users/cirix/.minikube/machines/minikube/id_rsa Username:docker}
I0422 21:29:50.318417 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0422 21:29:50.331614 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/machines/server.pem --> /etc/docker/server.pem (1176 bytes)
I0422 21:29:50.342726 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0422 21:29:50.354366 1839 provision.go:87] duration metric: took 210.683875ms to configureAuth
I0422 21:29:50.354374 1839 buildroot.go:189] setting minikube options for container-runtime
I0422 21:29:50.354469 1839 config.go:182] Loaded profile config "minikube": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 21:29:50.354521 1839 main.go:141] libmachine: Using SSH client type: native
I0422 21:29:50.354604 1839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bf280] 0x1051c1ae0 <nil> [] 0s} 192.168.105.22 22 <nil> <nil>}
I0422 21:29:50.354607 1839 main.go:141] libmachine: About to run SSH command: df --output=fstype / | tail -n 1
I0422 21:29:50.415164 1839 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0422 21:29:50.415169 1839 buildroot.go:70] root file system type: tmpfs
I0422 21:29:50.415231 1839 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0422 21:29:50.415295 1839 main.go:141] libmachine: Using SSH client type: native
I0422 21:29:50.415403 1839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bf280] 0x1051c1ae0 <nil> [] 0s} 192.168.105.22 22 <nil> <nil>}
I0422 21:29:50.415442 1839 main.go:141] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0422 21:29:50.481309 1839 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0422 21:29:50.481365 1839 main.go:141] libmachine: Using SSH client type: native
I0422 21:29:50.481488 1839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bf280] 0x1051c1ae0 <nil> [] 0s} 192.168.105.22 22 <nil> <nil>}
I0422 21:29:50.481496 1839 main.go:141] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0422 21:29:51.905590 1839 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0422 21:29:51.905599 1839 machine.go:97] duration metric: took 1.949927416s to provisionDockerMachine
I0422 21:29:51.905606 1839 client.go:171] duration metric: took 19.327368333s to LocalClient.Create
I0422 21:29:51.905625 1839 start.go:167] duration metric: took 19.327455125s to libmachine.API.Create "minikube"
I0422 21:29:51.905629 1839 start.go:293] postStartSetup for "minikube" (driver="qemu2")
I0422 21:29:51.905634 1839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0422 21:29:51.905754 1839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0422 21:29:51.905764 1839 sshutil.go:53] new ssh client: &{IP:192.168.105.22 Port:22 SSHKeyPath:/Users/cirix/.minikube/machines/minikube/id_rsa Username:docker}
I0422 21:29:51.943552 1839 ssh_runner.go:195] Run: cat /etc/os-release
I0422 21:29:51.946343 1839 info.go:137] Remote host: Buildroot 2023.02.9
I0422 21:29:51.946353 1839 filesync.go:126] Scanning /Users/cirix/.minikube/addons for local assets ...
I0422 21:29:51.946464 1839 filesync.go:126] Scanning /Users/cirix/.minikube/files for local assets ...
I0422 21:29:51.946502 1839 start.go:296] duration metric: took 40.87075ms for postStartSetup
I0422 21:29:51.946941 1839 profile.go:143] Saving config to /Users/cirix/.minikube/profiles/minikube/config.json ...
I0422 21:29:51.947177 1839 start.go:128] duration metric: took 19.398705375s to createHost
I0422 21:29:51.947216 1839 main.go:141] libmachine: Using SSH client type: native
I0422 21:29:51.947317 1839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bf280] 0x1051c1ae0 <nil> [] 0s} 192.168.105.22 22 <nil> <nil>}
I0422 21:29:51.947320 1839 main.go:141] libmachine: About to run SSH command: date +%!s(MISSING).%!N(MISSING)
I0422 21:29:52.007844 1839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713814191.826279336
I0422 21:29:52.007852 1839 fix.go:216] guest clock: 1713814191.826279336
I0422 21:29:52.007857 1839 fix.go:229] Guest: 2024-04-22 21:29:51.826279336 +0200 CEST Remote: 2024-04-22 21:29:51.947178 +0200 CEST m=+19.516181292 (delta=-120.898664ms)
I0422 21:29:52.007873 1839 fix.go:200] guest clock delta is within tolerance: -120.898664ms
I0422 21:29:52.007876 1839 start.go:83] releasing machines lock for "minikube", held for 19.459459917s
I0422 21:29:52.008290 1839 ssh_runner.go:195] Run: cat /version.json
I0422 21:29:52.008297 1839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0422 21:29:52.008298 1839 sshutil.go:53] new ssh client: &{IP:192.168.105.22 Port:22 SSHKeyPath:/Users/cirix/.minikube/machines/minikube/id_rsa Username:docker}
I0422 21:29:52.008316 1839 sshutil.go:53] new ssh client: &{IP:192.168.105.22 Port:22 SSHKeyPath:/Users/cirix/.minikube/machines/minikube/id_rsa Username:docker}
W0422 21:29:52.137342 1839 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 6
stdout:
stderr:
curl: (6) Could not resolve host: registry.k8s.io
W0422 21:29:52.137389 1839 out.go:239] ❗ This VM is having trouble accessing https://registry.k8s.io
W0422 21:29:52.137407 1839 out.go:239] 💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0422 21:29:52.137449 1839 ssh_runner.go:195] Run: systemctl --version
I0422 21:29:52.140486 1839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0422 21:29:52.142806 1839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0422 21:29:52.142861 1839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0422 21:29:52.150365 1839 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0422 21:29:52.150370 1839 start.go:494] detecting cgroup driver to use...
I0422 21:29:52.150435 1839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0422 21:29:52.159607 1839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0422 21:29:52.165008 1839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0422 21:29:52.169842 1839 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0422 21:29:52.169889 1839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0422 21:29:52.174976 1839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0422 21:29:52.180103 1839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0422 21:29:52.184771 1839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0422 21:29:52.189433 1839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0422 21:29:52.194515 1839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0422 21:29:52.199229 1839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0422 21:29:52.204214 1839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0422 21:29:52.208966 1839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0422 21:29:52.214444 1839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0422 21:29:52.218958 1839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0422 21:29:52.306203 1839 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0422 21:29:52.315934 1839 start.go:494] detecting cgroup driver to use...
I0422 21:29:52.316002 1839 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0422 21:29:52.325751 1839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0422 21:29:52.336828 1839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0422 21:29:52.347832 1839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0422 21:29:52.354891 1839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0422 21:29:52.362041 1839 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0422 21:29:52.406190 1839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0422 21:29:52.413401 1839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0422 21:29:52.423250 1839 ssh_runner.go:195] Run: which cri-dockerd
I0422 21:29:52.425181 1839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0422 21:29:52.429796 1839 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0422 21:29:52.438148 1839 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0422 21:29:52.526020 1839 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0422 21:29:52.629385 1839 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0422 21:29:52.629504 1839 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0422 21:29:52.637735 1839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0422 21:29:52.738815 1839 ssh_runner.go:195] Run: sudo systemctl restart docker
I0422 21:29:54.928585 1839 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.189764542s)
I0422 21:29:54.928687 1839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0422 21:29:54.935801 1839 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0422 21:29:54.945256 1839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0422 21:29:54.951743 1839 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0422 21:29:55.035577 1839 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0422 21:29:55.123560 1839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0422 21:29:55.206872 1839 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0422 21:29:55.216229 1839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0422 21:29:55.222787 1839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0422 21:29:55.311778 1839 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0422 21:29:55.361237 1839 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0422 21:29:55.361321 1839 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0422 21:29:55.365017 1839 start.go:562] Will wait 60s for crictl version
I0422 21:29:55.365056 1839 ssh_runner.go:195] Run: which crictl
I0422 21:29:55.367135 1839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0422 21:29:55.390364 1839 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 26.0.1
RuntimeApiVersion: v1
I0422 21:29:55.390416 1839 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0422 21:29:55.404224 1839 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0422 21:29:55.420537 1839 out.go:204] 🐳 Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
I0422 21:29:55.420665 1839 ssh_runner.go:195] Run: grep 192.168.105.1 host.minikube.internal$ /etc/hosts
I0422 21:29:55.422506 1839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0422 21:29:55.428200 1839 kubeadm.go:877] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:12288 CPUs:6 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0422 21:29:55.428255 1839 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0422 21:29:55.428296 1839 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0422 21:29:55.438025 1839 docker.go:685] Got preloaded images:
I0422 21:29:55.438030 1839 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
I0422 21:29:55.438089 1839 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0422 21:29:55.443011 1839 ssh_runner.go:195] Run: which lz4
I0422 21:29:55.444994 1839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0422 21:29:55.446683 1839 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0422 21:29:55.446692 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335341169 bytes)
I0422 21:29:56.779871 1839 docker.go:649] duration metric: took 1.33493125s to copy over tarball
I0422 21:29:56.779943 1839 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0422 21:29:58.053414 1839 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.273460167s)
I0422 21:29:58.053428 1839 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0422 21:29:58.074262 1839 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0422 21:29:58.079869 1839 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
I0422 21:29:58.087821 1839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0422 21:29:58.172965 1839 ssh_runner.go:195] Run: sudo systemctl restart docker
I0422 21:30:00.740927 1839 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.567958291s)
I0422 21:30:00.741033 1839 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0422 21:30:00.752566 1839 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0422 21:30:00.752572 1839 cache_images.go:84] Images are preloaded, skipping loading
I0422 21:30:00.752577 1839 kubeadm.go:928] updating node { 192.168.105.22 8443 v1.30.0 docker true true} ...
I0422 21:30:00.752638 1839 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.22

[Install]
 config:
{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0422 21:30:00.752687 1839 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0422 21:30:00.767736 1839 cni.go:84] Creating CNI manager for ""
I0422 21:30:00.767744 1839 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0422 21:30:00.767751 1839 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0422 21:30:00.767761 1839 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.22 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0422 21:30:00.767829 1839 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.105.22
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.105.22
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.105.22"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0422 21:30:00.767918 1839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0422 21:30:00.772928 1839 binaries.go:44] Found k8s binaries, skipping transfer
I0422 21:30:00.773006 1839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0422 21:30:00.777843 1839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
I0422 21:30:00.786372 1839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0422 21:30:00.794401 1839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
I0422 21:30:00.803489 1839 ssh_runner.go:195] Run: grep 192.168.105.22 control-plane.minikube.internal$ /etc/hosts
I0422 21:30:00.805438 1839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.22 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0422 21:30:00.811045 1839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0422 21:30:00.898629 1839 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0422 21:30:00.907446 1839 certs.go:68] Setting up /Users/cirix/.minikube/profiles/minikube for IP: 192.168.105.22
I0422 21:30:00.907450 1839 certs.go:194] generating shared ca certs ...
I0422 21:30:00.907459 1839 certs.go:226] acquiring lock for ca certs: {Name:mkf4dc558d1d533a4e757e7fdb38146963105d75 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0422 21:30:00.907658 1839 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/cirix/.minikube/ca.key
I0422 21:30:00.907729 1839 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/cirix/.minikube/proxy-client-ca.key
I0422 21:30:00.907737 1839 certs.go:256] generating profile certs ...
I0422 21:30:00.907783 1839 certs.go:363] generating signed profile cert for "minikube-user": /Users/cirix/.minikube/profiles/minikube/client.key
I0422 21:30:00.907790 1839 crypto.go:68] Generating cert /Users/cirix/.minikube/profiles/minikube/client.crt with IP's: []
I0422 21:30:01.042997 1839 crypto.go:156] Writing cert to /Users/cirix/.minikube/profiles/minikube/client.crt ...
I0422 21:30:01.043015 1839 lock.go:35] WriteFile acquiring /Users/cirix/.minikube/profiles/minikube/client.crt: {Name:mkc0c3697825871f3850450e02b6f786f329a10f Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0422 21:30:01.043315 1839 crypto.go:164] Writing key to /Users/cirix/.minikube/profiles/minikube/client.key ...
I0422 21:30:01.043319 1839 lock.go:35] WriteFile acquiring /Users/cirix/.minikube/profiles/minikube/client.key: {Name:mk023df22fb3444d85c2f968f99c02c259e24b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0422 21:30:01.043477 1839 certs.go:363] generating signed profile cert for "minikube": /Users/cirix/.minikube/profiles/minikube/apiserver.key.67c50624
I0422 21:30:01.043485 1839 crypto.go:68] Generating cert /Users/cirix/.minikube/profiles/minikube/apiserver.crt.67c50624 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.22]
I0422 21:30:01.213623 1839 crypto.go:156] Writing cert to /Users/cirix/.minikube/profiles/minikube/apiserver.crt.67c50624 ...
I0422 21:30:01.213627 1839 lock.go:35] WriteFile acquiring /Users/cirix/.minikube/profiles/minikube/apiserver.crt.67c50624: {Name:mkc5afddf973a62f8f64f813a6e28e76d536bf80 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0422 21:30:01.213909 1839 crypto.go:164] Writing key to /Users/cirix/.minikube/profiles/minikube/apiserver.key.67c50624 ...
I0422 21:30:01.213912 1839 lock.go:35] WriteFile acquiring /Users/cirix/.minikube/profiles/minikube/apiserver.key.67c50624: {Name:mkb8d54e94380d606f9a44488f9cbf2787640e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0422 21:30:01.214081 1839 certs.go:381] copying /Users/cirix/.minikube/profiles/minikube/apiserver.crt.67c50624 -> /Users/cirix/.minikube/profiles/minikube/apiserver.crt
I0422 21:30:01.214379 1839 certs.go:385] copying /Users/cirix/.minikube/profiles/minikube/apiserver.key.67c50624 -> /Users/cirix/.minikube/profiles/minikube/apiserver.key
I0422 21:30:01.214516 1839 certs.go:363] generating signed profile cert for "aggregator": /Users/cirix/.minikube/profiles/minikube/proxy-client.key
I0422 21:30:01.214524 1839 crypto.go:68] Generating cert /Users/cirix/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0422 21:30:01.315348 1839 crypto.go:156] Writing cert to /Users/cirix/.minikube/profiles/minikube/proxy-client.crt ...
I0422 21:30:01.315350 1839 lock.go:35] WriteFile acquiring /Users/cirix/.minikube/profiles/minikube/proxy-client.crt: {Name:mk0c5b3287fb2418a00f078ecc550cd069fb4557 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0422 21:30:01.315522 1839 crypto.go:164] Writing key to /Users/cirix/.minikube/profiles/minikube/proxy-client.key ...
I0422 21:30:01.315524 1839 lock.go:35] WriteFile acquiring /Users/cirix/.minikube/profiles/minikube/proxy-client.key: {Name:mk745ad3e5469ef43b5fd25db4f64e06045ff717 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0422 21:30:01.315855 1839 certs.go:484] found cert: /Users/cirix/.minikube/certs/ca-key.pem (1679 bytes)
I0422 21:30:01.315892 1839 certs.go:484] found cert: /Users/cirix/.minikube/certs/ca.pem (1074 bytes)
I0422 21:30:01.315922 1839 certs.go:484] found cert: /Users/cirix/.minikube/certs/cert.pem (1119 bytes)
I0422 21:30:01.315951 1839 certs.go:484] found cert: /Users/cirix/.minikube/certs/key.pem (1679 bytes)
I0422 21:30:01.316532 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0422 21:30:01.330432 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0422 21:30:01.342212 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0422 21:30:01.353934 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0422 21:30:01.366026 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0422 21:30:01.377779 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0422 21:30:01.389020 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0422 21:30:01.400268 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0422 21:30:01.411226 1839 ssh_runner.go:362] scp /Users/cirix/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0422 21:30:01.422372 1839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (740 bytes)
I0422 21:30:01.430256 1839 ssh_runner.go:195] Run: openssl version
I0422 21:30:01.433007 1839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0422 21:30:01.437438 1839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0422 21:30:01.439285 1839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 7 18:21 /usr/share/ca-certificates/minikubeCA.pem
I0422 21:30:01.439303 1839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0422 21:30:01.441857 1839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0422 21:30:01.448384 1839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0422 21:30:01.450017 1839 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0422 21:30:01.450059 1839 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:12288 CPUs:6 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/opt/homebrew/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0422 21:30:01.450122 1839 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0422 21:30:01.460159 1839 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0422 21:30:01.464711 1839 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0422 21:30:01.469371 1839 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0422 21:30:01.474253 1839 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0422 21:30:01.474256 1839 kubeadm.go:156] found existing configuration files:
I0422 21:30:01.474292 1839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0422 21:30:01.479082 1839 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0422 21:30:01.479139 1839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0422 21:30:01.483486 1839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0422 21:30:01.487769 1839 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0422 21:30:01.487801 1839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0422 21:30:01.492022 1839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0422 21:30:01.496197 1839 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0422 21:30:01.496242 1839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0422 21:30:01.500493 1839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0422 21:30:01.504682 1839 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0422 21:30:01.504721 1839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0422 21:30:01.508779 1839 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0422 21:30:01.541139 1839 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
I0422 21:30:01.541199 1839 kubeadm.go:309] [preflight] Running pre-flight checks
I0422 21:30:01.625618 1839 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
I0422 21:30:01.625679 1839 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0422 21:30:01.625807 1839 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0422 21:30:01.780398 1839 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0422 21:30:01.790742 1839 out.go:204] ▪ Generating certificates and keys ...
I0422 21:30:01.790800 1839 kubeadm.go:309] [certs] Using existing ca certificate authority
I0422 21:30:01.790855 1839 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
I0422 21:30:01.888962 1839 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
I0422 21:30:01.924155 1839 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
I0422 21:30:02.040662 1839 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
I0422 21:30:02.371617 1839 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
I0422 21:30:02.579934 1839 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
I0422 21:30:02.579996 1839 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.105.22 127.0.0.1 ::1]
I0422 21:30:02.851132 1839 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
I0422 21:30:02.851232 1839 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.105.22 127.0.0.1 ::1]
I0422 21:30:03.231310 1839 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
I0422 21:30:03.504255 1839 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
I0422 21:30:03.693231 1839 kubeadm.go:309] [certs] Generating "sa" key and public key
I0422 21:30:03.693282 1839 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0422 21:30:03.777304 1839 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
I0422 21:30:03.821062 1839 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0422 21:30:03.903574 1839 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0422 21:30:04.062067 1839 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0422 21:30:04.134346 1839 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0422 21:30:04.134653 1839 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0422 21:30:04.136068 1839 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0422 21:30:04.144421 1839 out.go:204] ▪ Booting up control plane ...
I0422 21:30:04.144495 1839 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0422 21:30:04.144545 1839 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0422 21:30:04.144594 1839 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0422 21:30:04.145804 1839 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0422 21:30:04.146304 1839 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0422 21:30:04.146325 1839 kubeadm.go:309] [kubelet-start] Starting the kubelet
I0422 21:30:04.242936 1839 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0422 21:30:04.242988 1839 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0422 21:30:04.744117 1839 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.132917ms
I0422 21:30:04.744197 1839 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0422 21:30:08.245461 1839 kubeadm.go:309] [api-check] The API server is healthy after 3.501259668s
I0422 21:30:08.253772 1839 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0422 21:30:08.259739 1839 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0422 21:30:08.269639 1839 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
I0422 21:30:08.269770 1839 kubeadm.go:309] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0422 21:30:08.274669 1839 kubeadm.go:309] [bootstrap-token] Using token: 28fazc.gmhrbxg9in1hibiv
I0422 21:30:08.281427 1839 out.go:204] ▪ Configuring RBAC rules ...
I0422 21:30:08.281543 1839 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0422 21:30:08.282613 1839 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0422 21:30:08.288674 1839 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0422 21:30:08.291534 1839 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0422 21:30:08.293494 1839 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0422 21:30:08.295053 1839 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0422 21:30:08.650211 1839 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0422 21:30:09.062476 1839 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
I0422 21:30:09.651790 1839 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
I0422 21:30:09.652454 1839 kubeadm.go:309]
I0422 21:30:09.652499 1839 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
I0422 21:30:09.652502 1839 kubeadm.go:309]
I0422 21:30:09.652544 1839 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
I0422 21:30:09.652547 1839 kubeadm.go:309]
I0422 21:30:09.652598 1839 kubeadm.go:309] mkdir -p $HOME/.kube
I0422 21:30:09.652652 1839 kubeadm.go:309] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0422 21:30:09.652695 1839 kubeadm.go:309] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0422 21:30:09.652698 1839 kubeadm.go:309]
I0422 21:30:09.652749 1839 kubeadm.go:309] Alternatively, if you are the root user, you can run:
I0422 21:30:09.652755 1839 kubeadm.go:309]
I0422 21:30:09.652813 1839 kubeadm.go:309] export KUBECONFIG=/etc/kubernetes/admin.conf
I0422 21:30:09.652817 1839 kubeadm.go:309]
I0422 21:30:09.652876 1839 kubeadm.go:309] You should now deploy a pod network to the cluster.
I0422 21:30:09.652940 1839 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0422 21:30:09.652985 1839 kubeadm.go:309] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0422 21:30:09.652992 1839 kubeadm.go:309]
I0422 21:30:09.653067 1839 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
I0422 21:30:09.653179 1839 kubeadm.go:309] and service account keys on each node and then running the following as root:
I0422 21:30:09.653183 1839 kubeadm.go:309]
I0422 21:30:09.653255 1839 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 28fazc.gmhrbxg9in1hibiv \
I0422 21:30:09.653375 1839 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:075967c345c8bec7984162713514fe5cc10e03837f574f73ee1e8a89b3e6dab8 \
I0422 21:30:09.653392 1839 kubeadm.go:309] --control-plane
I0422 21:30:09.653395 1839 kubeadm.go:309]
I0422 21:30:09.653447 1839 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
I0422 21:30:09.653449 1839 kubeadm.go:309]
I0422 21:30:09.653496 1839 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 28fazc.gmhrbxg9in1hibiv \
I0422 21:30:09.653583 1839 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:075967c345c8bec7984162713514fe5cc10e03837f574f73ee1e8a89b3e6dab8
I0422 21:30:09.654004 1839 kubeadm.go:309] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0422 21:30:09.654015 1839 cni.go:84] Creating CNI manager for ""
I0422 21:30:09.654025 1839 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0422 21:30:09.658825 1839 out.go:177] 🔗 Configuring bridge CNI (Container Networking Interface) ...
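Editorial note: the `--discovery-token-ca-cert-hash` value printed in the join command above is the SHA-256 digest of the DER-encoded public key of the cluster CA certificate. A minimal sketch of that derivation, run against a throwaway self-signed CA (the paths are illustrative; on the node the real certificate is `/var/lib/minikube/certs/ca.crt`):

```shell
# Derive a kubeadm-style discovery hash: SHA-256 over the DER-encoded
# Subject Public Key Info of the CA cert. A fresh self-signed CA is
# generated here purely so the pipeline has something to hash.
workdir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.crt" -days 1 2>/dev/null
hash="$(openssl x509 -pubkey -noout -in "$workdir/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')"
echo "sha256:$hash"
```

Because the hash is over the CA public key (not the token), it lets joining nodes authenticate the control plane they are talking to before trusting the bootstrap token.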
I0422 21:30:09.666821 1839 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0422 21:30:09.673851 1839 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0422 21:30:09.683659 1839 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0422 21:30:09.683743 1839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0422 21:30:09.683764 1839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes minikube minikube.k8s.io/updated_at=2024_04_22T21_30_09_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=86fc9d54fca63f295d8737c8eacdbb7987e89c67 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0422 21:30:09.751943 1839 kubeadm.go:1107] duration metric: took 68.279542ms to wait for elevateKubeSystemPrivileges
I0422 21:30:09.751966 1839 ops.go:34] apiserver oom_adj: -16
W0422 21:30:09.751991 1839 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
I0422 21:30:09.751995 1839 kubeadm.go:393] duration metric: took 8.301976042s to StartCluster
I0422 21:30:09.752004 1839 settings.go:142] acquiring lock: {Name:mkdb4d994466ed19f039fbb1bc27072cb949782b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0422 21:30:09.752181 1839 settings.go:150] Updating kubeconfig: /Users/cirix/.kube/config
I0422 21:30:09.752605 1839 lock.go:35] WriteFile acquiring /Users/cirix/.kube/config: {Name:mkda3b8ede3fb6317a36e3a5d81bc2ac93d3bf50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0422 21:30:09.752814 1839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0422 21:30:09.752844 1839 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0422 21:30:09.756658 1839 out.go:177] 🔎 Verifying Kubernetes components...
I0422 21:30:09.752859 1839 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
I0422 21:30:09.752991 1839 config.go:182] Loaded profile config "minikube": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 21:30:09.764744 1839 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0422 21:30:09.764746 1839 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0422 21:30:09.764756 1839 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0422 21:30:09.764763 1839 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0422 21:30:09.764789 1839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0422 21:30:09.764800 1839 host.go:66] Checking if "minikube" exists ...
I0422 21:30:09.770865 1839 out.go:177]     ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0422 21:30:09.772664 1839 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0422 21:30:09.774664 1839 host.go:66] Checking if "minikube" exists ...
I0422 21:30:09.774689 1839 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0422 21:30:09.774693 1839 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0422 21:30:09.774700 1839 sshutil.go:53] new ssh client: &{IP:192.168.105.22 Port:22 SSHKeyPath:/Users/cirix/.minikube/machines/minikube/id_rsa Username:docker}
I0422 21:30:09.775650 1839 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
I0422 21:30:09.775654 1839 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0422 21:30:09.775658 1839 sshutil.go:53] new ssh client: &{IP:192.168.105.22 Port:22 SSHKeyPath:/Users/cirix/.minikube/machines/minikube/id_rsa Username:docker}
I0422 21:30:09.803373 1839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0422 21:30:09.867991 1839 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0422 21:30:09.952096 1839 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0422 21:30:09.955678 1839 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0422 21:30:09.970783 1839 start.go:946] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
I0422 21:30:09.971311 1839 api_server.go:52] waiting for apiserver process to appear ...
I0422 21:30:09.971365 1839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0422 21:30:10.159328 1839 api_server.go:72] duration metric: took 406.473833ms to wait for apiserver process to appear ...
I0422 21:30:10.159335 1839 api_server.go:88] waiting for apiserver healthz status ...
I0422 21:30:10.159344 1839 api_server.go:253] Checking apiserver healthz at https://192.168.105.22:8443/healthz ...
I0422 21:30:10.167372 1839 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0422 21:30:10.162937 1839 api_server.go:279] https://192.168.105.22:8443/healthz returned 200: ok
I0422 21:30:10.175361 1839 addons.go:505] duration metric: took 422.513959ms for enable addons: enabled=[storage-provisioner default-storageclass]
I0422 21:30:10.168020 1839 api_server.go:141] control plane version: v1.30.0
I0422 21:30:10.175373 1839 api_server.go:131] duration metric: took 16.035292ms to wait for apiserver health ...
I0422 21:30:10.175397 1839 system_pods.go:43] waiting for kube-system pods to appear ...
I0422 21:30:10.179917 1839 system_pods.go:59] 5 kube-system pods found
I0422 21:30:10.179925 1839 system_pods.go:61] "etcd-minikube" [7fa557b2-a9a3-45a4-942c-d8f5159f9150] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0422 21:30:10.179929 1839 system_pods.go:61] "kube-apiserver-minikube" [1041f95b-f921-456f-8265-d6be765acaf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0422 21:30:10.179936 1839 system_pods.go:61] "kube-controller-manager-minikube" [83991eea-b698-4b73-86d6-f2f276997d9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0422 21:30:10.179938 1839 system_pods.go:61] "kube-scheduler-minikube" [0b332a6a-8fad-45be-a08e-09215cd2d800] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0422 21:30:10.179940 1839 system_pods.go:61] "storage-provisioner" [b2d286f7-a7fa-4f1f-9be6-6433b205afe4] Pending
I0422 21:30:10.179942 1839 system_pods.go:74] duration metric: took 4.543ms to wait for pod list to return data ...
I0422 21:30:10.179946 1839 kubeadm.go:576] duration metric: took 427.094041ms to wait for: map[apiserver:true system_pods:true]
I0422 21:30:10.179952 1839 node_conditions.go:102] verifying NodePressure condition ...
I0422 21:30:10.181856 1839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0422 21:30:10.181862 1839 node_conditions.go:123] node cpu capacity is 6
I0422 21:30:10.181870 1839 node_conditions.go:105] duration metric: took 1.917042ms to run NodePressure ...
I0422 21:30:10.181875 1839 start.go:240] waiting for startup goroutines ...
I0422 21:30:10.473567 1839 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0422 21:30:10.473584 1839 start.go:245] waiting for cluster config update ...
I0422 21:30:10.473596 1839 start.go:254] writing updated cluster config ...
I0422 21:30:10.474005 1839 ssh_runner.go:195] Run: rm -f paused
I0422 21:30:10.520686 1839 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
I0422 21:30:10.524602 1839 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

==> Docker <==
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.930493217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.930518884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.930529175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.930563342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.940520384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.947781842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.948081342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.948115092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.948239717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.955107717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.955128592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:04 minikube dockerd[1267]: time="2024-04-22T19:30:04.955169384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:04 minikube cri-dockerd[1151]: time="2024-04-22T19:30:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f1ec85f8d9b29e7222f5e37482482a321d86edf593cc0aeb066a1cb14a2667e1/resolv.conf as [nameserver 192.168.105.1]"
Apr 22 19:30:04 minikube cri-dockerd[1151]: time="2024-04-22T19:30:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bfd702028e0e59c872d077c126495084cebef4dac132a43843d5f23ab51f534c/resolv.conf as [nameserver 192.168.105.1]"
Apr 22 19:30:05 minikube cri-dockerd[1151]: time="2024-04-22T19:30:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e9cafea3f2e51fe571eaf502e9914373e4580814220163f8c5b77b391c601b76/resolv.conf as [nameserver 192.168.105.1]"
Apr 22 19:30:05 minikube cri-dockerd[1151]: time="2024-04-22T19:30:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f029276f3141202ccd061b0bbd700180e487d986f9b1ac5e1ec8eb56e15516e9/resolv.conf as [nameserver 192.168.105.1]"
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.058141217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.058204675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.058222550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.058402009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.103313842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.103602176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.103628301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.103838801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.126640592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.126857134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.126874926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.127138176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.134941467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.135099467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.135110884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:05 minikube dockerd[1267]: time="2024-04-22T19:30:05.135164134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:21.999785850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:21.999832225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:21.999845142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:21.999890434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:22 minikube cri-dockerd[1151]: time="2024-04-22T19:30:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/508da2b6e21b9c5c0d20a8a10e60190e27f3292a10efed13d23c595147b33749/resolv.conf as [nameserver 192.168.105.1]"
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:22.109063693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:22.109101405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:22.109112874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:22.109176358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:22.890935834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:22.891068343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:22.891082776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:22 minikube dockerd[1267]: time="2024-04-22T19:30:22.891126287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:22 minikube cri-dockerd[1151]: time="2024-04-22T19:30:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b00b76c48a6572df8f95f71042fbc7b9efc31370cac6eb41c07348eb3f003b4a/resolv.conf as [nameserver 192.168.105.1]"
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.005936219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.005988768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.006000065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.006046965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.007951169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.008072656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.008141851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.008214340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:23 minikube cri-dockerd[1151]: time="2024-04-22T19:30:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/081e2f9a039c83d1950e5c9b0e9d6ac32920acd3e77c2f1ae48fe19403fa97ff/resolv.conf as [nameserver 192.168.105.1]"
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.196229784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.196274391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.196284751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:23 minikube dockerd[1267]: time="2024-04-22T19:30:23.196322081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 22 19:30:29 minikube cri-dockerd[1151]: time="2024-04-22T19:30:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"

==> container status <==
CONTAINER       IMAGE           CREATED          STATE    NAME                      ATTEMPT  POD ID          POD
b42df19232071   2437cf7621777   20 seconds ago   Running  coredns                   0        081e2f9a039c8   coredns-7db6d8ff4d-d6s8k
e1920849c4cc0   cb7eac0b42cc1   21 seconds ago   Running  kube-proxy                0        b00b76c48a657   kube-proxy-cfzjl
8c8572225f9b3   ba04bb24b9575   21 seconds ago   Running  storage-provisioner       0        508da2b6e21b9   storage-provisioner
1166ac2c5352d   547adae34140b   38 seconds ago   Running  kube-scheduler            0        f029276f31412   kube-scheduler-minikube
3d3b4bad8e1c3   181f57fd3cdb7   38 seconds ago   Running  kube-apiserver            0        e9cafea3f2e51   kube-apiserver-minikube
a826d2b1505d9   68feac521c0f1   39 seconds ago   Running  kube-controller-manager   0        bfd702028e0e5   kube-controller-manager-minikube
f7d26232c2013   014faa467e297   39 seconds ago   Running  etcd                      0        f1ec85f8d9b29   etcd-minikube

==> coredns [b42df1923207] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO]
plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
CoreDNS-1.11.1
linux/arm64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:46988 - 4745 "HINFO IN 6891540954185918429.6948812748665889829. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034883686s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"

==> describe nodes <==
Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=86fc9d54fca63f295d8737c8eacdbb7987e89c67
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2024_04_22T21_30_09_0700
                    minikube.k8s.io/version=v1.33.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 22 Apr 2024 19:30:06 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Mon, 22 Apr 2024 19:30:39 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 22 Apr 2024 19:30:29 +0000   Mon, 22 Apr 2024 19:30:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 22 Apr 2024 19:30:29 +0000   Mon, 22 Apr 2024 19:30:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 22 Apr 2024 19:30:29 +0000   Mon, 22 Apr 2024 19:30:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 22 Apr 2024 19:30:29 +0000   Mon, 22 Apr 2024 19:30:10 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.105.22
  Hostname:    minikube
Capacity:
  cpu:                6
  ephemeral-storage:  17734596Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             12225016Ki
  pods:               110
Allocatable:
  cpu:                6
  ephemeral-storage:  17734596Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             12225016Ki
  pods:               110
System Info:
  Machine ID:                 b141f872af8240e6be087e5b986575d1
  System UUID:                b141f872af8240e6be087e5b986575d1
  Boot ID:                    c8805a7c-766d-4967-82d6-3a8636721353
  Kernel Version:             5.10.207
  OS Image:                   Buildroot 2023.02.9
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://26.0.1
  Kubelet Version:            v1.30.0
  Kube-Proxy Version:         v1.30.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                              ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-7db6d8ff4d-d6s8k          100m (1%)     0 (0%)      70Mi (0%)        170Mi (1%)     21s
  kube-system  etcd-minikube                     100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
  kube-system  kube-apiserver-minikube           250m (4%)     0 (0%)      0 (0%)           0 (0%)         35s
  kube-system  kube-controller-manager-minikube  200m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
  kube-system  kube-proxy-cfzjl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
  kube-system  kube-scheduler-minikube           100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
  kube-system  storage-provisioner               0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (12%)  0 (0%)
  memory             170Mi (1%)  170Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type    Reason                   Age  From             Message
  ----    ------                   ---- ----             -------
  Normal  Starting                 20s  kube-proxy
  Normal  Starting                 35s  kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  35s  kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  35s  kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    35s  kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     35s  kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeReady                33s  kubelet          Node minikube status is now: NodeReady
  Normal  RegisteredNode           22s  node-controller  Node minikube event: Registered Node minikube in Controller

==> dmesg <==
[Apr22 19:29] ACPI: SRAT not present
[  +0.000000] KASLR disabled due to lack of seed
[  +1.149317] EINJ: EINJ table not found.
[  +0.673104] systemd-fstab-generator[141]: Ignoring "noauto" option for root device
[  +0.171841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.001454] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
[  +4.111889] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
[  +0.093469] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
[  +1.580792] systemd-fstab-generator[859]: Ignoring "noauto" option for root device
[  +0.221934] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
[  +0.103715] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
[  +0.109195] systemd-fstab-generator[924]: Ignoring "noauto" option for root device
[  +2.153194] kauditd_printk_skb: 151 callbacks suppressed
[  +0.142275] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
[  +0.088459] systemd-fstab-generator[1116]: Ignoring "noauto" option for root device
[  +0.083181] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
[  +0.104604] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
[  +2.860152] systemd-fstab-generator[1249]: Ignoring "noauto" option for root device
[Apr22 19:30] kauditd_printk_skb: 139 callbacks suppressed
[  +0.192827] systemd-fstab-generator[1462]: Ignoring "noauto" option for root device
[  +3.339668] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
[  +4.517495] systemd-fstab-generator[2093]: Ignoring "noauto" option for root device
[  +0.056680] kauditd_printk_skb: 125 callbacks suppressed
[  +1.042189] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
[ +12.405989] kauditd_printk_skb: 34 callbacks suppressed

==> etcd [f7d26232c201] <==
{"level":"warn","ts":"2024-04-22T19:30:05.156377Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2024-04-22T19:30:05.156432Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.105.22:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.105.22:2380","--initial-cluster=minikube=https://192.168.105.22:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.105.22:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.105.22:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"warn","ts":"2024-04-22T19:30:05.156466Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2024-04-22T19:30:05.156472Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.105.22:2380"]}
{"level":"info","ts":"2024-04-22T19:30:05.156485Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-04-22T19:30:05.157169Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.22:2379"]}
{"level":"info","ts":"2024-04-22T19:30:05.157356Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"arm64","max-cpu-set":6,"max-cpu-available":6,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.105.22:2380"],"listen-peer-urls":["https://192.168.105.22:2380"],"advertise-client-urls":["https://192.168.105.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.105.22:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval
":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2024-04-22T19:30:05.159499Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"2.027334ms"} {"level":"info","ts":"2024-04-22T19:30:05.166088Z","caller":"etcdserver/raft.go:495","msg":"starting local member","local-member-id":"9ace6eaa0e084bd4","cluster-id":"37e3e3973969d68f"} {"level":"info","ts":"2024-04-22T19:30:05.166178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 switched to configuration voters=()"} {"level":"info","ts":"2024-04-22T19:30:05.166334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 became follower at term 0"} {"level":"info","ts":"2024-04-22T19:30:05.166378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9ace6eaa0e084bd4 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2024-04-22T19:30:05.166474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 became follower at term 1"} {"level":"info","ts":"2024-04-22T19:30:05.166553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 switched to configuration voters=(11154975003702217684)"} {"level":"warn","ts":"2024-04-22T19:30:05.170073Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2024-04-22T19:30:05.171955Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2024-04-22T19:30:05.172792Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} 
{"level":"info","ts":"2024-04-22T19:30:05.17471Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"9ace6eaa0e084bd4","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
{"level":"info","ts":"2024-04-22T19:30:05.174921Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9ace6eaa0e084bd4","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2024-04-22T19:30:05.174951Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2024-04-22T19:30:05.175087Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-04-22T19:30:05.175092Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-04-22T19:30:05.178805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 switched to configuration voters=(11154975003702217684)"}
{"level":"info","ts":"2024-04-22T19:30:05.17893Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"37e3e3973969d68f","local-member-id":"9ace6eaa0e084bd4","added-peer-id":"9ace6eaa0e084bd4","added-peer-peer-urls":["https://192.168.105.22:2380"]}
{"level":"info","ts":"2024-04-22T19:30:05.181118Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-04-22T19:30:05.181278Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9ace6eaa0e084bd4","initial-advertise-peer-urls":["https://192.168.105.22:2380"],"listen-peer-urls":["https://192.168.105.22:2380"],"advertise-client-urls":["https://192.168.105.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-04-22T19:30:05.181322Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-04-22T19:30:05.181631Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.22:2380"}
{"level":"info","ts":"2024-04-22T19:30:05.18165Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.22:2380"}
{"level":"info","ts":"2024-04-22T19:30:05.66771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 is starting a new election at term 1"}
{"level":"info","ts":"2024-04-22T19:30:05.667742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 became pre-candidate at term 1"}
{"level":"info","ts":"2024-04-22T19:30:05.667757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 received MsgPreVoteResp from 9ace6eaa0e084bd4 at term 1"}
{"level":"info","ts":"2024-04-22T19:30:05.667765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 became candidate at term 2"}
{"level":"info","ts":"2024-04-22T19:30:05.667768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 received MsgVoteResp from 9ace6eaa0e084bd4 at term 2"}
{"level":"info","ts":"2024-04-22T19:30:05.667782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ace6eaa0e084bd4 became leader at term 2"}
{"level":"info","ts":"2024-04-22T19:30:05.667787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9ace6eaa0e084bd4 elected leader 9ace6eaa0e084bd4 at term 2"}
{"level":"info","ts":"2024-04-22T19:30:05.668947Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9ace6eaa0e084bd4","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.105.22:2379]}","request-path":"/0/members/9ace6eaa0e084bd4/attributes","cluster-id":"37e3e3973969d68f","publish-timeout":"7s"}
{"level":"info","ts":"2024-04-22T19:30:05.669082Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-04-22T19:30:05.669136Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-04-22T19:30:05.669184Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-04-22T19:30:05.669215Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-04-22T19:30:05.669256Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-04-22T19:30:05.669809Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"37e3e3973969d68f","local-member-id":"9ace6eaa0e084bd4","cluster-version":"3.5"}
{"level":"info","ts":"2024-04-22T19:30:05.669851Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-04-22T19:30:05.670938Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-04-22T19:30:05.670378Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.22:2379"}
{"level":"info","ts":"2024-04-22T19:30:05.670867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}

==> kernel <==
19:30:43 up 0 min, 0 users, load average: 0.75, 0.22, 0.08
Linux minikube 5.10.207 #1 SMP PREEMPT Thu Apr 18 19:10:12 UTC 2024 aarch64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"

==> kube-apiserver [3d3b4bad8e1c] <==
I0422 19:30:06.254144 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0422 19:30:06.254180 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0422 19:30:06.254221 1 system_namespaces_controller.go:67] Starting system namespaces controller
I0422 19:30:06.254255 1 aggregator.go:163] waiting for initial CRD sync...
I0422 19:30:06.254221 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0422 19:30:06.254279 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0422 19:30:06.254346 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0422 19:30:06.254359 1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
I0422 19:30:06.254366 1 controller.go:116] Starting legacy_token_tracking_controller
I0422 19:30:06.254374 1 shared_informer.go:313] Waiting for caches to sync for configmaps
I0422 19:30:06.254456 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0422 19:30:06.254469 1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
I0422 19:30:06.254549 1 apf_controller.go:374] Starting API Priority and Fairness config controller
I0422 19:30:06.254589 1 gc_controller.go:78] Starting apiserver lease garbage collector
I0422 19:30:06.254699 1 available_controller.go:423] Starting AvailableConditionController
I0422 19:30:06.254708 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0422 19:30:06.254717 1 controller.go:78] Starting OpenAPI AggregationController
I0422 19:30:06.254742 1 customresource_discovery_controller.go:289] Starting DiscoveryController
I0422 19:30:06.254891 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0422 19:30:06.254935 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0422 19:30:06.255186 1 controller.go:139] Starting OpenAPI controller
I0422 19:30:06.255213 1 controller.go:87] Starting OpenAPI V3 controller
I0422 19:30:06.255224 1 naming_controller.go:291] Starting NamingConditionController
I0422 19:30:06.255233 1 establishing_controller.go:76] Starting EstablishingController
I0422 19:30:06.255244 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0422 19:30:06.255249 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0422 19:30:06.255256 1 crd_finalizer.go:266] Starting CRDFinalizer
E0422 19:30:06.324849 1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0422 19:30:06.348200 1 shared_informer.go:320] Caches are synced for node_authorizer
I0422 19:30:06.351383 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0422 19:30:06.351426 1 policy_source.go:224] refreshing policies
I0422 19:30:06.354668 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0422 19:30:06.354745 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0422 19:30:06.354770 1 aggregator.go:165] initial CRD sync complete...
I0422 19:30:06.354794 1 autoregister_controller.go:141] Starting autoregister controller
I0422 19:30:06.354798 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0422 19:30:06.354801 1 cache.go:39] Caches are synced for autoregister controller
I0422 19:30:06.354809 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0422 19:30:06.354813 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0422 19:30:06.355061 1 shared_informer.go:320] Caches are synced for configmaps
I0422 19:30:06.355066 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0422 19:30:06.355407 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0422 19:30:06.355480 1 handler_discovery.go:447] Starting ResourceDiscoveryManager
I0422 19:30:06.355656 1 controller.go:615] quota admission added evaluator for: namespaces
I0422 19:30:06.527650 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0422 19:30:07.258584 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0422 19:30:07.261607 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0422 19:30:07.261617 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0422 19:30:07.547595 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0422 19:30:07.569865 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0422 19:30:07.668778 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0422 19:30:07.673592 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.22]
I0422 19:30:07.674133 1 controller.go:615] quota admission added evaluator for: endpoints
I0422 19:30:07.676769 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0422 19:30:08.267731 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0422 19:30:08.874133 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0422 19:30:08.880311 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0422 19:30:08.884684 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0422 19:30:22.473067 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0422 19:30:22.522617 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps

==> kube-controller-manager [a826d2b1505d] <==
I0422 19:30:21.420769 1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
I0422 19:30:21.420810 1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
I0422 19:30:21.420819 1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
I0422 19:30:21.423564 1 shared_informer.go:313] Waiting for caches to sync for resource quota
I0422 19:30:21.429386 1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"minikube\" does not exist"
I0422 19:30:21.432473 1 shared_informer.go:320] Caches are synced for TTL after finished
I0422 19:30:21.434545 1 shared_informer.go:313] Waiting for caches to sync for garbage collector
I0422 19:30:21.436310 1 shared_informer.go:320] Caches are synced for crt configmap
I0422 19:30:21.458826 1 shared_informer.go:320] Caches are synced for expand
I0422 19:30:21.470366 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0422 19:30:21.471721 1 shared_informer.go:320] Caches are synced for TTL
I0422 19:30:21.473698 1 shared_informer.go:320] Caches are synced for namespace
I0422 19:30:21.478348 1 shared_informer.go:320] Caches are synced for cronjob
I0422 19:30:21.485261 1 shared_informer.go:320] Caches are synced for PV protection
I0422 19:30:21.504386 1 shared_informer.go:320] Caches are synced for bootstrap_signer
I0422 19:30:21.522117 1 shared_informer.go:320] Caches are synced for service account
I0422 19:30:21.522168 1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
I0422 19:30:21.527743 1 shared_informer.go:320] Caches are synced for node
I0422 19:30:21.527778 1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
I0422 19:30:21.527787 1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
I0422 19:30:21.527790 1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0422 19:30:21.527793 1 shared_informer.go:320] Caches are synced for cidrallocator
I0422 19:30:21.532901 1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="minikube" podCIDRs=["10.244.0.0/24"]
I0422 19:30:21.623863 1 shared_informer.go:320] Caches are synced for resource quota
I0422 19:30:21.641250 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0422 19:30:21.642931 1 shared_informer.go:320] Caches are synced for taint
I0422 19:30:21.643032 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0422 19:30:21.643070 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="minikube"
I0422 19:30:21.643097 1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0422 19:30:21.647449 1 shared_informer.go:320] Caches are synced for persistent volume
I0422 19:30:21.651624 1 shared_informer.go:320] Caches are synced for attach detach
I0422 19:30:21.670957 1 shared_informer.go:320] Caches are synced for PVC protection
I0422 19:30:21.671058 1 shared_informer.go:320] Caches are synced for endpoint
I0422 19:30:21.672382 1 shared_informer.go:320] Caches are synced for stateful set
I0422 19:30:21.672424 1 shared_informer.go:320] Caches are synced for endpoint_slice
I0422 19:30:21.673710 1 shared_informer.go:320] Caches are synced for job
I0422 19:30:21.674520 1 shared_informer.go:320] Caches are synced for disruption
I0422 19:30:21.681658 1 shared_informer.go:320] Caches are synced for certificate-csrapproving
I0422 19:30:21.683006 1 shared_informer.go:320] Caches are synced for resource quota
I0422 19:30:21.689354 1 shared_informer.go:320] Caches are synced for ephemeral
I0422 19:30:21.696819 1 shared_informer.go:320] Caches are synced for HPA
I0422 19:30:21.720833 1 shared_informer.go:320] Caches are synced for GC
I0422 19:30:21.721254 1 shared_informer.go:320] Caches are synced for ReplicaSet
I0422 19:30:21.721295 1 shared_informer.go:320] Caches are synced for daemon sets
I0422 19:30:21.721537 1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
I0422 19:30:21.721612 1 shared_informer.go:320] Caches are synced for ReplicationController
I0422 19:30:21.721624 1 shared_informer.go:320] Caches are synced for taint-eviction-controller
I0422 19:30:21.721677 1 shared_informer.go:320] Caches are synced for deployment
I0422 19:30:21.769894 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
I0422 19:30:21.770172 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0422 19:30:21.770318 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
I0422 19:30:21.770467 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
I0422 19:30:22.135107 1 shared_informer.go:320] Caches are synced for garbage collector
I0422 19:30:22.156631 1 shared_informer.go:320] Caches are synced for garbage collector
I0422 19:30:22.156648 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0422 19:30:22.629296 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="153.061699ms"
I0422 19:30:22.635860 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.452976ms"
I0422 19:30:22.635894 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.334µs"
I0422 19:30:22.638943 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.893µs"
I0422 19:30:23.864138 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.76µs"

==> kube-proxy [e1920849c4cc] <==
I0422 19:30:23.086303 1 server_linux.go:69] "Using iptables proxy"
I0422 19:30:23.094592 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.22"]
I0422 19:30:23.118767 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0422 19:30:23.118793 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0422 19:30:23.118805 1 server_linux.go:165] "Using iptables Proxier"
I0422 19:30:23.120850 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0422 19:30:23.120977 1 server.go:872] "Version info" version="v1.30.0"
I0422 19:30:23.120996 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0422 19:30:23.121567 1 config.go:192] "Starting service config controller"
I0422 19:30:23.121610 1 shared_informer.go:313] Waiting for caches to sync for service config
I0422 19:30:23.121648 1 config.go:319] "Starting node config controller"
I0422 19:30:23.121654 1 shared_informer.go:313] Waiting for caches to sync for node config
I0422 19:30:23.121843 1 config.go:101] "Starting endpoint slice config controller"
I0422 19:30:23.121856 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0422 19:30:23.222241 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0422 19:30:23.222458 1 shared_informer.go:320] Caches are synced for node config
I0422 19:30:23.222474 1 shared_informer.go:320] Caches are synced for service config

==> kube-scheduler [1166ac2c5352] <==
W0422 19:30:06.262422 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0422 19:30:06.262428 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0422 19:30:06.262432 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0422 19:30:06.290757 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
I0422 19:30:06.290778 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0422 19:30:06.292921 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0422 19:30:06.292933 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0422 19:30:06.292952 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0422 19:30:06.292976 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
W0422 19:30:06.293590 1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0422 19:30:06.293653 1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0422 19:30:06.293746 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0422 19:30:06.293777 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0422 19:30:06.293980 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0422 19:30:06.294010 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0422 19:30:06.294044 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0422 19:30:06.294049 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0422 19:30:06.294546 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0422 19:30:06.294554 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0422 19:30:06.294644 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0422 19:30:06.294689 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0422 19:30:06.294748 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0422 19:30:06.294753 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0422 19:30:06.294786 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0422 19:30:06.294815 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0422 19:30:06.294822 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0422 19:30:06.294851 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0422 19:30:06.294800 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0422 19:30:06.294881 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0422 19:30:06.294834 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0422 19:30:06.294889 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0422 19:30:06.294858 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0422 19:30:06.294897 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0422 19:30:06.294864 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0422 19:30:06.294905 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0422 19:30:06.294586 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0422 19:30:06.295429 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0422 19:30:06.295453 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0422 19:30:06.295431 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0422 19:30:07.122285 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0422 19:30:07.122323 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0422 19:30:07.149297 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0422 19:30:07.149342 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0422 19:30:07.169031 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0422 19:30:07.169076 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0422 19:30:07.188267 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0422 19:30:07.188310 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0422 19:30:07.211296 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0422 19:30:07.211318 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User
"system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0422 19:30:07.252730 1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0422 19:30:07.252770 1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0422 19:30:07.327525 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0422 19:30:07.327539 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0422 19:30:07.347757 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0422 19:30:07.347782 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0422 19:30:07.378180 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0422 
19:30:07.378200 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0422 19:30:07.421014 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0422 19:30:07.421057 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope I0422 19:30:09.393640 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.751413 2099 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.751424 2099 policy_none.go:49] "None policy: Start" Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.752407 2099 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.753178 2099 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.753229 2099 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.753241 2099 kubelet.go:2337] "Starting kubelet main sync loop" Apr 22 19:30:08 minikube kubelet[2099]: E0422 19:30:08.753260 2099 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.754103 2099 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.754125 2099 state_mem.go:35] "Initializing new in-memory state store" Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.754226 2099 state_mem.go:75] "Updated machine memory state" Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.754842 2099 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.754913 2099 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.754953 2099 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 22 19:30:08 minikube kubelet[2099]: E0422 19:30:08.757390 2099 iptables.go:577] "Could not set up iptables canary" err=< Apr 22 19:30:08 minikube kubelet[2099]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option. Apr 22 19:30:08 minikube kubelet[2099]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?) Apr 22 19:30:08 minikube kubelet[2099]: Perhaps ip6tables or your kernel needs to be upgraded. 
Apr 22 19:30:08 minikube kubelet[2099]: > table="nat" chain="KUBE-KUBELET-CANARY"
Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.840582 2099 kubelet_node_status.go:73] "Attempting to register node" node="minikube"
Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.848425 2099 kubelet_node_status.go:112] "Node was previously registered" node="minikube"
Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.848548 2099 kubelet_node_status.go:76] "Successfully registered node" node="minikube"
Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.854135 2099 topology_manager.go:215] "Topology Admit Handler" podUID="b8d7a7f7147a45e7aad28e8de9d9b488" podNamespace="kube-system" podName="etcd-minikube"
Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.854206 2099 topology_manager.go:215] "Topology Admit Handler" podUID="f682dce53dbd348e5087f718c1ea56b1" podNamespace="kube-system" podName="kube-apiserver-minikube"
Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.854225 2099 topology_manager.go:215] "Topology Admit Handler" podUID="b2f93c4e2ec4e1950aa41a71fd8273b0" podNamespace="kube-system" podName="kube-controller-manager-minikube"
Apr 22 19:30:08 minikube kubelet[2099]: I0422 19:30:08.854242 2099 topology_manager.go:215] "Topology Admit Handler" podUID="f9c8e1d0d74b1727abdb4b4a31d3a7c1" podNamespace="kube-system" podName="kube-scheduler-minikube"
Apr 22 19:30:08 minikube kubelet[2099]: E0422 19:30:08.859740 2099 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043832 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f682dce53dbd348e5087f718c1ea56b1-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"f682dce53dbd348e5087f718c1ea56b1\") " pod="kube-system/kube-apiserver-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043868 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2f93c4e2ec4e1950aa41a71fd8273b0-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"b2f93c4e2ec4e1950aa41a71fd8273b0\") " pod="kube-system/kube-controller-manager-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043883 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b2f93c4e2ec4e1950aa41a71fd8273b0-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"b2f93c4e2ec4e1950aa41a71fd8273b0\") " pod="kube-system/kube-controller-manager-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043896 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2f93c4e2ec4e1950aa41a71fd8273b0-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"b2f93c4e2ec4e1950aa41a71fd8273b0\") " pod="kube-system/kube-controller-manager-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043911 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2f93c4e2ec4e1950aa41a71fd8273b0-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"b2f93c4e2ec4e1950aa41a71fd8273b0\") " pod="kube-system/kube-controller-manager-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043924 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b8d7a7f7147a45e7aad28e8de9d9b488-etcd-certs\") pod \"etcd-minikube\" (UID: \"b8d7a7f7147a45e7aad28e8de9d9b488\") " pod="kube-system/etcd-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043933 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b8d7a7f7147a45e7aad28e8de9d9b488-etcd-data\") pod \"etcd-minikube\" (UID: \"b8d7a7f7147a45e7aad28e8de9d9b488\") " pod="kube-system/etcd-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043941 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f682dce53dbd348e5087f718c1ea56b1-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"f682dce53dbd348e5087f718c1ea56b1\") " pod="kube-system/kube-apiserver-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043951 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9c8e1d0d74b1727abdb4b4a31d3a7c1-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"f9c8e1d0d74b1727abdb4b4a31d3a7c1\") " pod="kube-system/kube-scheduler-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043960 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f682dce53dbd348e5087f718c1ea56b1-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"f682dce53dbd348e5087f718c1ea56b1\") " pod="kube-system/kube-apiserver-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.043968 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2f93c4e2ec4e1950aa41a71fd8273b0-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"b2f93c4e2ec4e1950aa41a71fd8273b0\") " pod="kube-system/kube-controller-manager-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.735390 2099 apiserver.go:52] "Watching apiserver"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.741785 2099 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 22 19:30:09 minikube kubelet[2099]: E0422 19:30:09.786331 2099 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.798912 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-minikube" podStartSLOduration=1.798900261 podStartE2EDuration="1.798900261s" podCreationTimestamp="2024-04-22 19:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-22 19:30:09.793525178 +0000 UTC m=+1.113813626" watchObservedRunningTime="2024-04-22 19:30:09.798900261 +0000 UTC m=+1.119188710"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.805064 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-minikube" podStartSLOduration=1.805056344 podStartE2EDuration="1.805056344s" podCreationTimestamp="2024-04-22 19:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-22 19:30:09.799301428 +0000 UTC m=+1.119589876" watchObservedRunningTime="2024-04-22 19:30:09.805056344 +0000 UTC m=+1.125344793"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.809548 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-minikube" podStartSLOduration=1.8095413439999999 podStartE2EDuration="1.809541344s" podCreationTimestamp="2024-04-22 19:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-22 19:30:09.805134678 +0000 UTC m=+1.125423126" watchObservedRunningTime="2024-04-22 19:30:09.809541344 +0000 UTC m=+1.129829751"
Apr 22 19:30:09 minikube kubelet[2099]: I0422 19:30:09.814664 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-minikube" podStartSLOduration=2.814657219 podStartE2EDuration="2.814657219s" podCreationTimestamp="2024-04-22 19:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-22 19:30:09.809756844 +0000 UTC m=+1.130045293" watchObservedRunningTime="2024-04-22 19:30:09.814657219 +0000 UTC m=+1.134945918"
Apr 22 19:30:10 minikube kubelet[2099]: I0422 19:30:10.282154 2099 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Apr 22 19:30:21 minikube kubelet[2099]: I0422 19:30:21.653274 2099 topology_manager.go:215] "Topology Admit Handler" podUID="b2d286f7-a7fa-4f1f-9be6-6433b205afe4" podNamespace="kube-system" podName="storage-provisioner"
Apr 22 19:30:21 minikube kubelet[2099]: I0422 19:30:21.841073 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b2d286f7-a7fa-4f1f-9be6-6433b205afe4-tmp\") pod \"storage-provisioner\" (UID: \"b2d286f7-a7fa-4f1f-9be6-6433b205afe4\") " pod="kube-system/storage-provisioner"
Apr 22 19:30:21 minikube kubelet[2099]: I0422 19:30:21.841161 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcc7n\" (UniqueName: \"kubernetes.io/projected/b2d286f7-a7fa-4f1f-9be6-6433b205afe4-kube-api-access-qcc7n\") pod \"storage-provisioner\" (UID: \"b2d286f7-a7fa-4f1f-9be6-6433b205afe4\") " pod="kube-system/storage-provisioner"
Apr 22 19:30:22 minikube kubelet[2099]: I0422 19:30:22.532358 2099 topology_manager.go:215] "Topology Admit Handler" podUID="3b807a60-4e22-4165-b9f8-d4a86507b097" podNamespace="kube-system" podName="kube-proxy-cfzjl"
Apr 22 19:30:22 minikube kubelet[2099]: I0422 19:30:22.548185 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b807a60-4e22-4165-b9f8-d4a86507b097-xtables-lock\") pod \"kube-proxy-cfzjl\" (UID: \"3b807a60-4e22-4165-b9f8-d4a86507b097\") " pod="kube-system/kube-proxy-cfzjl"
Apr 22 19:30:22 minikube kubelet[2099]: I0422 19:30:22.548210 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b807a60-4e22-4165-b9f8-d4a86507b097-lib-modules\") pod \"kube-proxy-cfzjl\" (UID: \"3b807a60-4e22-4165-b9f8-d4a86507b097\") " pod="kube-system/kube-proxy-cfzjl"
Apr 22 19:30:22 minikube kubelet[2099]: I0422 19:30:22.548222 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mp4s\" (UniqueName: \"kubernetes.io/projected/3b807a60-4e22-4165-b9f8-d4a86507b097-kube-api-access-7mp4s\") pod \"kube-proxy-cfzjl\" (UID: \"3b807a60-4e22-4165-b9f8-d4a86507b097\") " pod="kube-system/kube-proxy-cfzjl"
Apr 22 19:30:22 minikube kubelet[2099]: I0422 19:30:22.548233 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b807a60-4e22-4165-b9f8-d4a86507b097-kube-proxy\") pod \"kube-proxy-cfzjl\" (UID: \"3b807a60-4e22-4165-b9f8-d4a86507b097\") " pod="kube-system/kube-proxy-cfzjl"
Apr 22 19:30:22 minikube kubelet[2099]: I0422 19:30:22.629199 2099 topology_manager.go:215] "Topology Admit Handler" podUID="57393eac-de1b-48e1-9758-ee594b361169" podNamespace="kube-system" podName="coredns-7db6d8ff4d-d6s8k"
Apr 22 19:30:22 minikube kubelet[2099]: I0422 19:30:22.750179 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57393eac-de1b-48e1-9758-ee594b361169-config-volume\") pod \"coredns-7db6d8ff4d-d6s8k\" (UID: \"57393eac-de1b-48e1-9758-ee594b361169\") " pod="kube-system/coredns-7db6d8ff4d-d6s8k"
Apr 22 19:30:22 minikube kubelet[2099]: I0422 19:30:22.750267 2099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp7b6\" (UniqueName: \"kubernetes.io/projected/57393eac-de1b-48e1-9758-ee594b361169-kube-api-access-hp7b6\") pod \"coredns-7db6d8ff4d-d6s8k\" (UID: \"57393eac-de1b-48e1-9758-ee594b361169\") " pod="kube-system/coredns-7db6d8ff4d-d6s8k"
Apr 22 19:30:23 minikube kubelet[2099]: I0422 19:30:23.083659 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.083644768 podStartE2EDuration="14.083644768s" podCreationTimestamp="2024-04-22 19:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-22 19:30:22.851803782 +0000 UTC m=+14.172092189" watchObservedRunningTime="2024-04-22 19:30:23.083644768 +0000 UTC m=+14.403933218"
Apr 22 19:30:23 minikube kubelet[2099]: I0422 19:30:23.870191 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-d6s8k" podStartSLOduration=1.8701793979999999 podStartE2EDuration="1.870179398s" podCreationTimestamp="2024-04-22 19:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-22 19:30:23.864275989 +0000 UTC m=+15.184564396" watchObservedRunningTime="2024-04-22 19:30:23.870179398 +0000 UTC m=+15.190467805"
Apr 22 19:30:29 minikube kubelet[2099]: I0422 19:30:29.419643 2099 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Apr 22 19:30:29 minikube kubelet[2099]: I0422 19:30:29.420309 2099 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"

==> storage-provisioner [8c8572225f9b] <==
I0422 19:30:22.160274 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...