*
* ==> Audit <==
* |----------------|-----------------|----------|-------|---------|---------------------|---------------------|
| Command        | Args            | Profile  | User  | Version | Start Time          | End Time            |
|----------------|-----------------|----------|-------|---------|---------------------|---------------------|
| start          | --driver=docker | minikube | amitk | v1.30.1 | 01 May 24 11:35 PDT |                     |
| update-context |                 | minikube | amitk | v1.30.1 | 01 May 24 12:20 PDT | 01 May 24 12:20 PDT |
| delete         |                 | minikube | amitk | v1.30.1 | 01 May 24 12:24 PDT | 01 May 24 12:24 PDT |
| start          | --driver=docker | minikube | amitk | v1.30.1 | 01 May 24 12:25 PDT |                     |
| update-context |                 | minikube | amitk | v1.30.1 | 01 May 24 12:31 PDT | 01 May 24 12:31 PDT |
|----------------|-----------------|----------|-------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2024/05/01 12:25:01
Running on machine: amitk-WQCF9H5
Binary: Built with gc go1.20.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0501 12:25:01.067894 43227 out.go:296] Setting OutFile to fd 1 ...
I0501 12:25:01.068074 43227 out.go:348] isatty.IsTerminal(1) = true
I0501 12:25:01.068076 43227 out.go:309] Setting ErrFile to fd 2...
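The `Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` header above describes the klog prefix every record in this dump carries: a severity letter (Info, Warning, Error, Fatal), then date, time, PID, and source location. As a minimal sketch (assuming a POSIX shell with `grep -E`; the sample file path is made up for illustration), warnings and errors can be pulled out of such a log by matching on the leading severity letter:

```shell
# Write a few klog-style sample records (the path /tmp/minikube-sample.log is arbitrary)
cat > /tmp/minikube-sample.log <<'EOF'
I0501 12:25:01.067894 43227 out.go:296] Setting OutFile to fd 1 ...
W0501 12:25:01.106260 43227 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0501 12:25:01.114048 43227 out.go:177] minikube v1.30.1 on Darwin 14.4.1 (arm64)
EOF

# Keep only Warning, Error, and Fatal records (severity letter W, E, or F
# followed by the four-digit MMDD date)
grep -E '^[WEF][0-9]{4} ' /tmp/minikube-sample.log
```

Run against the sample above, this keeps only the `gopshost.Virtualization` warning line.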
I0501 12:25:01.068079 43227 out.go:348] isatty.IsTerminal(2) = true
I0501 12:25:01.068147 43227 root.go:336] Updating PATH: /Users/amitk/.minikube/bin
I0501 12:25:01.068550 43227 out.go:303] Setting JSON to false
I0501 12:25:01.106181 43227 start.go:125] hostinfo: {"hostname":"amitk-WQCF9H5","uptime":354296,"bootTime":1714237205,"procs":786,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"1fee9720-78f6-5c28-ab23-e0aa99b229eb"}
W0501 12:25:01.106260 43227 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0501 12:25:01.114048 43227 out.go:177] 😄 minikube v1.30.1 on Darwin 14.4.1 (arm64)
I0501 12:25:01.121244 43227 notify.go:220] Checking for updates...
I0501 12:25:01.121477 43227 driver.go:375] Setting default libvirt URI to qemu:///system
I0501 12:25:01.181744 43227 docker.go:121] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
I0501 12:25:01.181850 43227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0501 12:25:01.954553 43227 info.go:266] docker info: {ID:4b9929f9-48f8-484b-ba14-5b6e93dadc56 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:79 SystemTime:2024-05-01 19:25:01.937844253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:13 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:10 MemTotal:16752779264 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/amitk/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/amitk/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/amitk/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/amitk/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/amitk/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/amitk/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/amitk/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/amitk/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/amitk/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/amitk/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/amitk/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/amitk/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:}}
I0501 12:25:01.964548 43227 out.go:177] ✨ Using the docker driver based on user configuration
I0501 12:25:01.967466 43227 start.go:295] selected driver: docker
I0501 12:25:01.967470 43227 start.go:870] validating driver "docker" against
I0501 12:25:01.967480 43227 start.go:881] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0501 12:25:01.967625 43227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0501 12:25:02.078496 43227 info.go:266] docker info: {ID:4b9929f9-48f8-484b-ba14-5b6e93dadc56 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:79 SystemTime:2024-05-01 19:25:02.061719003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:13 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:10 MemTotal:16752779264 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/amitk/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/amitk/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/amitk/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/amitk/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/amitk/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/amitk/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/amitk/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/amitk/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/amitk/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/amitk/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/amitk/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/amitk/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:}}
I0501 12:25:02.078622 43227 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0501 12:25:02.086205 43227 start_flags.go:386] Using suggested 8100MB memory alloc based on sys=32768MB, container=15976MB
I0501 12:25:02.086300 43227 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
I0501 12:25:02.091180 43227 out.go:177] 📌 Using Docker Desktop driver with root privileges
I0501 12:25:02.094175 43227 cni.go:84] Creating CNI manager for ""
I0501 12:25:02.094191 43227 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0501 12:25:02.094195 43227 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0501 12:25:02.094198 43227 start_flags.go:319] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0501 12:25:02.103095 43227 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0501 12:25:02.107157 43227 cache.go:120] Beginning downloading kic base image for docker with docker
I0501 12:25:02.112190 43227 out.go:177] 🚜 Pulling base image ...
I0501 12:25:02.120103 43227 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 in local docker daemon
I0501 12:25:02.120106 43227 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0501 12:25:02.120138 43227 preload.go:148] Found local preload: /Users/amitk/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-arm64.tar.lz4
I0501 12:25:02.120144 43227 cache.go:57] Caching tarball of preloaded images
I0501 12:25:02.120263 43227 preload.go:174] Found /Users/amitk/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0501 12:25:02.120280 43227 cache.go:60] Finished verifying existence of preloaded tar for v1.26.3 on docker
I0501 12:25:02.120982 43227 profile.go:148] Saving config to /Users/amitk/.minikube/profiles/minikube/config.json ...
I0501 12:25:02.121010 43227 lock.go:35] WriteFile acquiring /Users/amitk/.minikube/profiles/minikube/config.json: {Name:mkd187eb1e4d4d5190e0e6fa076d3a60d4f80826 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0501 12:25:02.184354 43227 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 to local cache
I0501 12:25:02.184506 43227 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 in local cache directory
I0501 12:25:02.184527 43227 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 in local cache directory, skipping pull
I0501 12:25:02.184528 43227 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 exists in cache, skipping pull
I0501 12:25:02.184537 43227 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 as a tarball
I0501 12:25:02.184539 43227 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 from local cache
I0501 12:25:04.188579 43227 cache.go:163] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 from cached tarball
I0501 12:25:04.188623 43227 cache.go:193] Successfully downloaded all kic artifacts
I0501 12:25:04.188663 43227 start.go:364] acquiring machines lock for minikube: {Name:mkb2cf209b96c291f7cf613d78a0b466cfba4809 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0501 12:25:04.188806 43227 start.go:368] acquired machines lock for "minikube" in 135.75µs
I0501 12:25:04.188826 43227 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0501 12:25:04.188894 43227 start.go:125] createHost starting for "" (driver="docker")
I0501 12:25:04.194215 43227 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=8100MB) ...
I0501 12:25:04.194410 43227 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0501 12:25:04.194430 43227 client.go:168] LocalClient.Create starting
I0501 12:25:04.194620 43227 main.go:141] libmachine: Reading certificate data from /Users/amitk/.minikube/certs/ca.pem
I0501 12:25:04.194855 43227 main.go:141] libmachine: Decoding PEM data...
I0501 12:25:04.194869 43227 main.go:141] libmachine: Parsing certificate...
I0501 12:25:04.194979 43227 main.go:141] libmachine: Reading certificate data from /Users/amitk/.minikube/certs/cert.pem
I0501 12:25:04.195182 43227 main.go:141] libmachine: Decoding PEM data...
I0501 12:25:04.195199 43227 main.go:141] libmachine: Parsing certificate...
I0501 12:25:04.197626 43227 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0501 12:25:04.246170 43227 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0501 12:25:04.246251 43227 network_create.go:281] running [docker network inspect minikube] to gather additional debugging logs...
I0501 12:25:04.246260 43227 cli_runner.go:164] Run: docker network inspect minikube
W0501 12:25:04.292854 43227 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0501 12:25:04.292870 43227 network_create.go:284] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error response from daemon: network minikube not found
I0501 12:25:04.292885 43227 network_create.go:286] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error response from daemon: network minikube not found

** /stderr **
I0501 12:25:04.292950 43227 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0501 12:25:04.336914 43227 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x14000e9e6e0}
I0501 12:25:04.336945 43227 network_create.go:123] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
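The failed `docker network inspect minikube` above is the expected first-run probe: minikube checks for an existing network and only creates one when the inspect exits non-zero. A minimal sketch of that probe-then-create pattern (the `ensure_network` helper is hypothetical, not minikube's actual code, which additionally picks a free subnet and sets labels and MTU options as seen in the log):

```shell
# Hypothetical helper mirroring the probe/fallback visible in the log above:
# inspect the network first; create it only if the inspect fails.
ensure_network() {
  net="$1"
  if docker network inspect "$net" >/dev/null 2>&1; then
    echo "network $net already exists"
  else
    docker network create --driver=bridge "$net" >/dev/null && echo "network $net created"
  fi
}
```

Calling `ensure_network minikube` twice is then idempotent: the second call hits the inspect branch and makes no changes.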
I0501 12:25:04.337023 43227 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0501 12:25:04.395038 43227 network_create.go:107] docker network minikube 192.168.49.0/24 created
I0501 12:25:04.395061 43227 kic.go:117] calculated static IP "192.168.49.2" for the "minikube" container
I0501 12:25:04.395172 43227 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0501 12:25:04.439447 43227 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0501 12:25:04.482439 43227 oci.go:103] Successfully created a docker volume minikube
I0501 12:25:04.482551 43227 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 -d /var/lib
I0501 12:25:04.973333 43227 oci.go:107] Successfully prepared a docker volume minikube
I0501 12:25:04.973365 43227 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0501 12:25:04.973377 43227 kic.go:190] Starting extracting preloaded images to volume ...
I0501 12:25:04.973530 43227 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/amitk/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 -I lz4 -xf /preloaded.tar -C /extractDir
I0501 12:25:06.402176 43227 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/amitk/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 -I lz4 -xf /preloaded.tar -C /extractDir: (1.428576167s)
I0501 12:25:06.402205 43227 kic.go:199] duration metric: took 1.428834 seconds to extract preloaded images to volume
I0501 12:25:06.402295 43227 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0501 12:25:06.502087 43227 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=8100mb --memory-swap=8100mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106
I0501 12:25:06.705293 43227 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0501 12:25:06.754039 43227 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0501 12:25:06.802722 43227 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0501 12:25:06.902644 43227 oci.go:144] the created container "minikube" has a running status.
I0501 12:25:06.902675 43227 kic.go:221] Creating ssh key for kic: /Users/amitk/.minikube/machines/minikube/id_rsa...
I0501 12:25:07.037973 43227 kic_runner.go:191] docker (temp): /Users/amitk/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0501 12:25:07.098806 43227 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0501 12:25:07.147493 43227 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0501 12:25:07.147506 43227 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0501 12:25:07.235843 43227 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0501 12:25:07.282622 43227 machine.go:88] provisioning docker machine ...
I0501 12:25:07.282658 43227 ubuntu.go:169] provisioning hostname "minikube" I0501 12:25:07.282765 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0501 12:25:07.332013 43227 main.go:141] libmachine: Using SSH client type: native I0501 12:25:07.332353 43227 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x10071f560] 0x100721f40 [] 0s} 127.0.0.1 54559 } I0501 12:25:07.332361 43227 main.go:141] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0501 12:25:07.333359 43227 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF I0501 12:25:10.736805 43227 main.go:141] libmachine: SSH cmd err, output: : minikube I0501 12:25:10.736920 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0501 12:25:10.794165 43227 main.go:141] libmachine: Using SSH client type: native I0501 12:25:10.794516 43227 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x10071f560] 0x100721f40 [] 0s} 127.0.0.1 54559 } I0501 12:25:10.794523 43227 main.go:141] libmachine: About to run SSH command: if ! 
grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0501 12:25:11.076180 43227 main.go:141] libmachine: SSH cmd err, output: : I0501 12:25:11.076208 43227 ubuntu.go:175] set auth options {CertDir:/Users/amitk/.minikube CaCertPath:/Users/amitk/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/amitk/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/amitk/.minikube/machines/server.pem ServerKeyPath:/Users/amitk/.minikube/machines/server-key.pem ClientKeyPath:/Users/amitk/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/amitk/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/amitk/.minikube} I0501 12:25:11.076245 43227 ubuntu.go:177] setting up certificates I0501 12:25:11.076259 43227 provision.go:83] configureAuth start I0501 12:25:11.076432 43227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0501 12:25:11.127592 43227 provision.go:138] copyHostCerts I0501 12:25:11.127743 43227 exec_runner.go:144] found /Users/amitk/.minikube/ca.pem, removing ... I0501 12:25:11.127747 43227 exec_runner.go:207] rm: /Users/amitk/.minikube/ca.pem I0501 12:25:11.128253 43227 exec_runner.go:151] cp: /Users/amitk/.minikube/certs/ca.pem --> /Users/amitk/.minikube/ca.pem (1074 bytes) I0501 12:25:11.129623 43227 exec_runner.go:144] found /Users/amitk/.minikube/cert.pem, removing ... I0501 12:25:11.129626 43227 exec_runner.go:207] rm: /Users/amitk/.minikube/cert.pem I0501 12:25:11.129923 43227 exec_runner.go:151] cp: /Users/amitk/.minikube/certs/cert.pem --> /Users/amitk/.minikube/cert.pem (1119 bytes) I0501 12:25:11.130602 43227 exec_runner.go:144] found /Users/amitk/.minikube/key.pem, removing ... 
I0501 12:25:11.130604 43227 exec_runner.go:207] rm: /Users/amitk/.minikube/key.pem
I0501 12:25:11.130845 43227 exec_runner.go:151] cp: /Users/amitk/.minikube/certs/key.pem --> /Users/amitk/.minikube/key.pem (1675 bytes)
I0501 12:25:11.131176 43227 provision.go:112] generating server cert: /Users/amitk/.minikube/machines/server.pem ca-key=/Users/amitk/.minikube/certs/ca.pem private-key=/Users/amitk/.minikube/certs/ca-key.pem org=amitk.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0501 12:25:11.215354 43227 provision.go:172] copyRemoteCerts
I0501 12:25:11.215420 43227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0501 12:25:11.215453 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 12:25:11.266661 43227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54559 SSHKeyPath:/Users/amitk/.minikube/machines/minikube/id_rsa Username:docker}
I0501 12:25:11.511969 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0501 12:25:11.707108 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0501 12:25:11.909254 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0501 12:25:12.110024 43227 provision.go:86] duration metric: configureAuth took 1.028880416s
I0501 12:25:12.110054 43227 ubuntu.go:193] setting minikube options for container-runtime
I0501 12:25:12.110798 43227 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0501 12:25:12.112542 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 12:25:12.165416 43227 main.go:141] libmachine: Using SSH client type: native
I0501 12:25:12.165817 43227 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x10071f560] 0x100721f40 [] 0s} 127.0.0.1 54559 }
I0501 12:25:12.165825 43227 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0501 12:25:12.450681 43227 main.go:141] libmachine: SSH cmd err, output: : overlay
I0501 12:25:12.450695 43227 ubuntu.go:71] root file system type: overlay
I0501 12:25:12.450878 43227 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0501 12:25:12.451138 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 12:25:12.509200 43227 main.go:141] libmachine: Using SSH client type: native
I0501 12:25:12.509545 43227 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x10071f560] 0x100721f40 [] 0s} 127.0.0.1 54559 }
I0501 12:25:12.509584 43227 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0501 12:25:12.881029 43227 main.go:141] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0501 12:25:12.881287 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 12:25:12.940611 43227 main.go:141] libmachine: Using SSH client type: native
I0501 12:25:12.941013 43227 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x10071f560] 0x100721f40 [] 0s} 127.0.0.1 54559 }
I0501 12:25:12.941022 43227 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0501 12:25:16.449626 43227 main.go:141] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2023-03-27 16:16:18.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2024-05-01 19:25:12.872996008 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
 Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
 
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0501 12:25:16.449693 43227 machine.go:91] provisioned docker machine in 9.167097583s
I0501 12:25:16.449702 43227 client.go:171] LocalClient.Create took 12.255328875s
I0501 12:25:16.449730 43227 start.go:167] duration metric: libmachine.API.Create for "minikube" took 12.255377541s
I0501 12:25:16.449737 43227 start.go:300] post-start starting for "minikube" (driver="docker")
I0501 12:25:16.449742 43227 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0501 12:25:16.449919 43227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0501 12:25:16.449996 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 12:25:16.503555 43227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54559 SSHKeyPath:/Users/amitk/.minikube/machines/minikube/id_rsa Username:docker}
I0501 12:25:16.733973 43227 ssh_runner.go:195] Run: cat /etc/os-release
I0501 12:25:16.766906 43227 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0501 12:25:16.766941 43227 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0501 12:25:16.766993 43227 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0501 12:25:16.767000 43227 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0501 12:25:16.767010 43227 filesync.go:126] Scanning /Users/amitk/.minikube/addons for local assets ...
I0501 12:25:16.767382 43227 filesync.go:126] Scanning /Users/amitk/.minikube/files for local assets ...
I0501 12:25:16.767623 43227 start.go:303] post-start completed in 317.882333ms
I0501 12:25:16.768816 43227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0501 12:25:16.813249 43227 profile.go:148] Saving config to /Users/amitk/.minikube/profiles/minikube/config.json ...
I0501 12:25:16.814175 43227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0501 12:25:16.814231 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 12:25:16.863559 43227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54559 SSHKeyPath:/Users/amitk/.minikube/machines/minikube/id_rsa Username:docker}
I0501 12:25:17.055117 43227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0501 12:25:17.105684 43227 start.go:128] duration metric: createHost completed in 12.916838s
I0501 12:25:17.105707 43227 start.go:83] releasing machines lock for "minikube", held for 12.916957208s
I0501 12:25:17.106168 43227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0501 12:25:17.160548 43227 ssh_runner.go:195] Run: cat /version.json
I0501 12:25:17.160612 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 12:25:17.160744 43227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0501 12:25:17.160849 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 12:25:17.207395 43227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54559 SSHKeyPath:/Users/amitk/.minikube/machines/minikube/id_rsa Username:docker}
I0501 12:25:17.210132 43227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54559 SSHKeyPath:/Users/amitk/.minikube/machines/minikube/id_rsa Username:docker}
I0501 12:25:17.383085 43227 ssh_runner.go:195] Run: systemctl --version
I0501 12:25:17.530217 43227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0501 12:25:17.582314 43227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0501 12:25:17.834009 43227 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0501 12:25:17.834125 43227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0501 12:25:17.986173 43227 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0501 12:25:17.986199 43227 start.go:481] detecting cgroup driver to use...
I0501 12:25:17.986219 43227 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0501 12:25:17.986461 43227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0501 12:25:18.136769 43227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0501 12:25:18.234759 43227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0501 12:25:18.333933 43227 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0501 12:25:18.334150 43227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0501 12:25:18.430891 43227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0501 12:25:18.523389 43227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0501 12:25:18.614855 43227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0501 12:25:18.707783 43227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0501 12:25:18.793222 43227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0501 12:25:18.888101 43227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0501 12:25:18.962070 43227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0501 12:25:19.037345 43227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0501 12:25:19.351721 43227 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0501 12:25:19.663235 43227 start.go:481] detecting cgroup driver to use...
I0501 12:25:19.663257 43227 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0501 12:25:19.663445 43227 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0501 12:25:19.768248 43227 cruntime.go:276] skipping containerd shutdown because we are bound to it
I0501 12:25:19.768581 43227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0501 12:25:19.868475 43227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0501 12:25:20.042375 43227 ssh_runner.go:195] Run: which cri-dockerd
I0501 12:25:20.083403 43227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0501 12:25:20.162228 43227 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0501 12:25:20.310071 43227 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0501 12:25:20.755251 43227 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0501 12:25:21.191498 43227 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
I0501 12:25:21.191521 43227 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0501 12:25:21.337366 43227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0501 12:25:21.767829 43227 ssh_runner.go:195] Run: sudo systemctl restart docker
I0501 12:25:23.644392 43227 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.876556875s)
I0501 12:25:23.644489 43227 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0501 12:25:24.031473 43227 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0501 12:25:24.348955 43227 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0501 12:25:24.742788 43227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0501 12:25:25.098405 43227 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0501 12:25:25.202211 43227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0501 12:25:25.542539 43227 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0501 12:25:26.018540 43227 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0501 12:25:26.018768 43227 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0501 12:25:26.060301 43227 start.go:549] Will wait 60s for crictl version
I0501 12:25:26.060482 43227 ssh_runner.go:195] Run: which crictl
I0501 12:25:26.096866 43227 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0501 12:25:26.340922 43227 start.go:565] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  23.0.2
RuntimeApiVersion:  v1alpha2
I0501 12:25:26.341285 43227 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0501 12:25:26.522125 43227 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0501 12:25:26.714501 43227 out.go:204] 🐳  Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
I0501 12:25:26.714701 43227 cli_runner.go:164] Run: docker exec -t minikube dig +short host.docker.internal
I0501 12:25:26.848881 43227 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0501 12:25:26.848992 43227 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0501 12:25:26.886764 43227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0501 12:25:27.006126 43227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0501 12:25:27.064918 43227 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0501 12:25:27.064983 43227 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0501 12:25:27.198219 43227 docker.go:639] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0501 12:25:27.198246 43227 docker.go:569] Images already preloaded, skipping extraction
I0501 12:25:27.198325 43227 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0501 12:25:27.324384 43227 docker.go:639] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0501 12:25:27.324406 43227 cache_images.go:84] Images are preloaded, skipping loading
I0501 12:25:27.324596 43227 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0501 12:25:27.503393 43227 cni.go:84] Creating CNI manager for ""
I0501 12:25:27.503407 43227 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0501 12:25:27.503422 43227 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0501 12:25:27.503447 43227 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0501 12:25:27.503635 43227 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0501 12:25:27.503760 43227 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0501 12:25:27.503921 43227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
I0501 12:25:27.580895 43227 binaries.go:44] Found k8s binaries, skipping transfer
I0501 12:25:27.581228 43227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0501 12:25:27.656375 43227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
I0501 12:25:27.795910 43227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0501 12:25:27.938122 43227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
I0501 12:25:28.078584 43227 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0501 12:25:28.117222 43227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0501 12:25:28.232672 43227 certs.go:56] Setting up /Users/amitk/.minikube/profiles/minikube for IP: 192.168.49.2
I0501 12:25:28.232682 43227 certs.go:186] acquiring lock for shared ca certs: {Name:mk42216b6c1284fcc293435f999bdfb57c15083d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0501 12:25:28.234255 43227 certs.go:195] skipping minikubeCA CA generation: /Users/amitk/.minikube/ca.key
I0501 12:25:28.234480 43227 certs.go:195] skipping proxyClientCA CA generation: /Users/amitk/.minikube/proxy-client-ca.key
I0501 12:25:28.234518 43227 certs.go:315] generating minikube-user signed cert: /Users/amitk/.minikube/profiles/minikube/client.key
I0501 12:25:28.234525 43227 crypto.go:68] Generating cert /Users/amitk/.minikube/profiles/minikube/client.crt with IP's: []
I0501 12:25:28.358502 43227 crypto.go:156] Writing cert to /Users/amitk/.minikube/profiles/minikube/client.crt ...
I0501 12:25:28.358509 43227 lock.go:35] WriteFile acquiring /Users/amitk/.minikube/profiles/minikube/client.crt: {Name:mk4708e3a8eb0198ad138b9d0e45c3b87e4180e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0501 12:25:28.358877 43227 crypto.go:164] Writing key to /Users/amitk/.minikube/profiles/minikube/client.key ...
I0501 12:25:28.358879 43227 lock.go:35] WriteFile acquiring /Users/amitk/.minikube/profiles/minikube/client.key: {Name:mkab2a585f9943846a65b1ae074eefc21975e79d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0501 12:25:28.359041 43227 certs.go:315] generating minikube signed cert: /Users/amitk/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0501 12:25:28.359048 43227 crypto.go:68] Generating cert /Users/amitk/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0501 12:25:28.508935 43227 crypto.go:156] Writing cert to /Users/amitk/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0501 12:25:28.508943 43227 lock.go:35] WriteFile acquiring /Users/amitk/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk7d2ec5cdd67f7f10f9096c9c0b352972bc81b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0501 12:25:28.509312 43227 crypto.go:164] Writing key to /Users/amitk/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0501 12:25:28.509314 43227 lock.go:35] WriteFile acquiring /Users/amitk/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk50d93507b90daf5a0b9bb9c8abe2354cfdc48b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0501 12:25:28.509476 43227 certs.go:333] copying /Users/amitk/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/amitk/.minikube/profiles/minikube/apiserver.crt
I0501 12:25:28.509724 43227 certs.go:337] copying /Users/amitk/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/amitk/.minikube/profiles/minikube/apiserver.key
I0501 12:25:28.509869 43227 certs.go:315] generating aggregator signed cert: /Users/amitk/.minikube/profiles/minikube/proxy-client.key
I0501 12:25:28.509880 43227 crypto.go:68] Generating cert /Users/amitk/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0501 12:25:28.660253 43227 crypto.go:156] Writing cert to /Users/amitk/.minikube/profiles/minikube/proxy-client.crt ...
I0501 12:25:28.660259 43227 lock.go:35] WriteFile acquiring /Users/amitk/.minikube/profiles/minikube/proxy-client.crt: {Name:mk5fa6bc30f2308b8941cf8a39b727c57183b199 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0501 12:25:28.660540 43227 crypto.go:164] Writing key to /Users/amitk/.minikube/profiles/minikube/proxy-client.key ...
I0501 12:25:28.660542 43227 lock.go:35] WriteFile acquiring /Users/amitk/.minikube/profiles/minikube/proxy-client.key: {Name:mk6d170a18efda209722c92fb570391ec774eb4c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0501 12:25:28.660881 43227 certs.go:401] found cert: /Users/amitk/.minikube/certs/Users/amitk/.minikube/certs/ca-key.pem (1675 bytes)
I0501 12:25:28.660919 43227 certs.go:401] found cert: /Users/amitk/.minikube/certs/Users/amitk/.minikube/certs/ca.pem (1074 bytes)
I0501 12:25:28.660952 43227 certs.go:401] found cert: /Users/amitk/.minikube/certs/Users/amitk/.minikube/certs/cert.pem (1119 bytes)
I0501 12:25:28.660983 43227 certs.go:401] found cert: /Users/amitk/.minikube/certs/Users/amitk/.minikube/certs/key.pem (1675 bytes)
I0501 12:25:28.661311 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0501 12:25:28.908190 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0501 12:25:29.103027 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0501 12:25:29.297144 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0501 12:25:29.493778 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0501 12:25:29.689677 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0501 12:25:29.884605 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0501 12:25:30.078886 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0501 12:25:30.275069 43227 ssh_runner.go:362] scp /Users/amitk/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0501 12:25:30.470703 43227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0501 12:25:30.611052 43227 ssh_runner.go:195] Run: openssl version
I0501 12:25:30.650194 43227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0501 12:25:30.738946 43227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0501 12:25:30.778493 43227 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 May 1 18:36 /usr/share/ca-certificates/minikubeCA.pem
I0501 12:25:30.778639 43227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0501 12:25:30.821093 43227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0501 12:25:30.910871 43227 kubeadm.go:401] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0501 12:25:30.911079 43227 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0501 12:25:31.038761 43227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0501 12:25:31.116276 43227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0501 12:25:31.194173 43227 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0501 12:25:31.194371 43227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0501 12:25:31.271496 43227 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0501 12:25:31.271574 43227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0501 12:25:31.382887 43227 kubeadm.go:322] W0501 19:25:31.380704 1479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0501 12:25:31.652232 43227 kubeadm.go:322] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0501 12:25:31.733801 43227 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0501 12:25:31.734074 43227 kubeadm.go:322] error execution phase preflight: [preflight] Some fatal errors occurred:
I0501 12:25:31.734236 43227 kubeadm.go:322] [ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH
I0501 12:25:31.734383 43227 kubeadm.go:322] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I0501 12:25:31.734469 43227 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0501 12:25:31.737340 43227 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
I0501 12:25:31.737392 43227 kubeadm.go:322] [preflight] Running pre-flight checks
W0501 12:25:31.737516 43227 out.go:239] 💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.26.3
[preflight] Running pre-flight checks

stderr:
W0501 19:25:31.380704 1479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
I0501 12:25:31.738176 43227 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I0501 12:25:47.567630 43227 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (15.829503584s)
I0501 12:25:47.567832 43227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0501 12:25:47.656748 43227 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0501 12:25:47.657008 43227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0501 12:25:47.734955 43227 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0501 12:25:47.735030 43227 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0501 12:25:47.871140 43227 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
I0501 12:25:47.871200 43227 kubeadm.go:322] [preflight] Running pre-flight checks
I0501 12:25:48.181612 43227 kubeadm.go:322] W0501 19:25:47.840842 3473 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0501 12:25:48.181898 43227 kubeadm.go:322] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0501 12:25:48.182230 43227 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0501 12:25:48.182444 43227 kubeadm.go:322] error execution phase preflight: [preflight] Some fatal errors occurred:
I0501 12:25:48.182819 43227 kubeadm.go:322] [ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH
I0501 12:25:48.183092 43227 kubeadm.go:322] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I0501 12:25:48.183201 43227 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0501 12:25:48.183251 43227 kubeadm.go:403] StartCluster complete in 17.272475542s
I0501 12:25:48.183286 43227 cri.go:52] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0501 12:25:48.183482 43227 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0501 12:25:48.378644 43227 cri.go:87] found id: ""
I0501 12:25:48.378659 43227 logs.go:277] 0 containers: []
W0501 12:25:48.378667 43227 logs.go:279] No container was found matching "kube-apiserver"
I0501 12:25:48.378673 43227 cri.go:52] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0501 12:25:48.378852 43227 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0501 12:25:48.568236 43227 cri.go:87] found id: ""
I0501 12:25:48.568256 43227 logs.go:277] 0 containers: []
W0501 12:25:48.568269 43227 logs.go:279] No container was found matching "etcd"
I0501 12:25:48.568278 43227 cri.go:52] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0501 12:25:48.568680 43227 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0501 12:25:48.755203 43227 cri.go:87] found id: ""
I0501 12:25:48.755222 43227 logs.go:277] 0 containers: []
W0501 12:25:48.755235 43227 logs.go:279] No container was found matching "coredns"
I0501 12:25:48.755244 43227 cri.go:52] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0501 12:25:48.755671 43227 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0501 12:25:48.942389 43227 cri.go:87] found id: ""
I0501 12:25:48.942409 43227 logs.go:277] 0 containers: []
W0501 12:25:48.942423 43227 logs.go:279] No container was found matching "kube-scheduler"
I0501 12:25:48.942432 43227 cri.go:52] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0501 12:25:48.942689 43227 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0501 12:25:49.127886 43227 cri.go:87] found id: ""
I0501 12:25:49.127901 43227 logs.go:277] 0 containers: []
W0501 12:25:49.127909 43227 logs.go:279] No container was found matching "kube-proxy"
I0501 12:25:49.127915 43227 cri.go:52] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0501 12:25:49.128101 43227 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0501 12:25:49.314275 43227 cri.go:87] found id: ""
I0501 12:25:49.314294 43227 logs.go:277] 0 containers: []
W0501 12:25:49.314307 43227 logs.go:279] No container was found matching "kube-controller-manager"
I0501 12:25:49.314316 43227 cri.go:52] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I0501 12:25:49.314596 43227 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0501 12:25:49.506499 43227 cri.go:87] found id: ""
I0501 12:25:49.506514 43227 logs.go:277] 0 containers: []
W0501 12:25:49.506523 43227 logs.go:279] No container was found matching "kindnet"
I0501 12:25:49.506533 43227 logs.go:123] Gathering logs for dmesg ...
I0501 12:25:49.506545 43227 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0501 12:25:49.611307 43227 logs.go:123] Gathering logs for describe nodes ...
I0501 12:25:49.611595 43227 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0501 12:25:49.731682 43227 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
E0501 19:25:49.718404 3625 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
E0501 19:25:49.718636 3625 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
E0501 19:25:49.719723 3625 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
E0501 19:25:49.721756 3625 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
E0501 19:25:49.723825 3625 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E0501 19:25:49.718404 3625 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
E0501 19:25:49.718636 3625 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
E0501 19:25:49.719723 3625 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
E0501 19:25:49.721756 3625 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
E0501 19:25:49.723825 3625 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0501 12:25:49.731736 43227 logs.go:123] Gathering logs for Docker ...
I0501 12:25:49.731751 43227 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0501 12:25:49.865473 43227 logs.go:123] Gathering logs for container status ...
I0501 12:25:49.865482 43227 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0501 12:25:50.093636 43227 logs.go:123] Gathering logs for kubelet ...
I0501 12:25:50.093661 43227 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0501 12:25:50.203184 43227 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.26.3
[preflight] Running pre-flight checks

stderr:
W0501 19:25:47.840842 3473 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0501 12:25:50.203202 43227 out.go:239]
W0501 12:25:50.203410 43227 out.go:239] 💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.26.3
[preflight] Running pre-flight checks

stderr:
W0501 19:25:47.840842 3473 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0501 12:25:50.203849 43227 out.go:239]
W0501 12:25:50.205830 43227 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                          │
│    😿  If the above advice does not help, please let us know:                            │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                          │
│                                                                                          │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.   │
│                                                                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
I0501 12:25:50.222074 43227 out.go:177]
W0501 12:25:50.229097 43227 out.go:239] ❌  Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.26.3
[preflight] Running pre-flight checks

stderr:
W0501 19:25:47.840842 3473 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0501 12:25:50.229658 43227 out.go:239]
W0501 12:25:50.231563 43227 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                          │
│    😿  If the above advice does not help, please let us know:                            │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                          │
│                                                                                          │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.   │
│                                                                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
I0501 12:25:50.246937 43227 out.go:177]

*
* ==> Docker <==
*
-- Logs begin at Wed 2024-05-01 19:25:07 UTC, end at Wed 2024-05-01 19:34:15 UTC. --
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"format\\\"\". Proceed without further sandbox information."
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Failed to delete corrupt checkpoint for sandbox format\": invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"format\\\"\". Proceed without further sandbox information."
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Failed to delete corrupt checkpoint for sandbox format\": invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"format\\\"\". Proceed without further sandbox information."
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information."
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information."
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information."
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." 
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." 
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\""
May 01 19:25:46 minikube cri-dockerd[1186]: time="2024-05-01T19:25:46Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\""
May 01 19:25:47 minikube cri-dockerd[1186]: time="2024-05-01T19:25:47Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\""
May 01 19:25:47 minikube cri-dockerd[1186]: time="2024-05-01T19:25:47Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\""
May 01 19:25:47 minikube cri-dockerd[1186]: time="2024-05-01T19:25:47Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\""
May 01 19:25:47 minikube cri-dockerd[1186]: time="2024-05-01T19:25:47Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\""
May 01 19:25:47 minikube cri-dockerd[1186]: time="2024-05-01T19:25:47Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\""

* 
* ==> container status <==
* 
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID

* 
* ==> describe nodes <==
* 

* 
* ==> dmesg <==
* 
[May 1 18:21] cacheinfo: Unable to detect cache hierarchy for CPU 0
[  +0.205227] netlink: 'init': attribute type 4 has an invalid length.
[  +0.013285] fakeowner: loading out-of-tree module taints kernel.
[  +1.230048] netlink: 'init': attribute type 22 has an invalid length.
[May 1 18:23] systemd[1053]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set

* 
* ==> kernel <==
* 
19:34:16 up 1:12, 0 users, load average: 0.66, 0.60, 0.80
Linux minikube 6.6.22-linuxkit #1 SMP Fri Mar 29 12:21:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"

* 
* ==> kubelet <==
* 
-- Logs begin at Wed 2024-05-01 19:25:07 UTC, end at Wed 2024-05-01 19:34:16 UTC. --
-- No entries --