* 
* ==> Audit <==
* |---------|------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command |                Args                | Profile  |  User   | Version |     Start Time      |      End Time       |
|---------|------------------------------------|----------|---------|---------|---------------------|---------------------|
| start   |                                    | minikube | vagrant | v1.32.0 | 12 Apr 24 01:55 UTC | 12 Apr 24 01:56 UTC |
| addons  | enable ingress                     | minikube | vagrant | v1.32.0 | 12 Apr 24 04:46 UTC |                     |
| stop    |                                    | minikube | vagrant | v1.32.0 | 12 Apr 24 05:42 UTC | 12 Apr 24 05:42 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 13 Apr 24 03:29 UTC | 13 Apr 24 03:31 UTC |
| addons  | enable ingress                     | minikube | vagrant | v1.32.0 | 13 Apr 24 03:33 UTC |                     |
| addons  | enable ingress                     | minikube | vagrant | v1.32.0 | 13 Apr 24 03:50 UTC |                     |
| delete  | --all                              | minikube | vagrant | v1.32.0 | 15 Apr 24 02:25 UTC |                     |
| delete  | --all                              | minikube | vagrant | v1.32.0 | 15 Apr 24 05:39 UTC |                     |
| delete  | --all                              | minikube | vagrant | v1.32.0 | 15 Apr 24 06:40 UTC | 15 Apr 24 06:40 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 15 Apr 24 06:41 UTC | 15 Apr 24 06:42 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 15 Apr 24 06:52 UTC | 15 Apr 24 06:58 UTC |
| addons  | enable ingress                     | minikube | vagrant | v1.32.0 | 15 Apr 24 07:27 UTC | 15 Apr 24 07:30 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 17 Apr 24 03:13 UTC | 17 Apr 24 03:24 UTC |
| stop    |                                    | minikube | vagrant | v1.32.0 | 17 Apr 24 03:53 UTC | 17 Apr 24 03:57 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 17 Apr 24 03:57 UTC | 17 Apr 24 04:04 UTC |
| ip      |                                    | minikube | vagrant | v1.32.0 | 17 Apr 24 04:05 UTC | 17 Apr 24 04:05 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 17 Apr 24 19:05 UTC |                     |
| start   |                                    | minikube | vagrant | v1.32.0 | 17 Apr 24 19:30 UTC | 17 Apr 24 19:32 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 17 Apr 24 21:31 UTC | 17 Apr 24 21:51 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 19 Apr 24 03:53 UTC | 19 Apr 24 03:59 UTC |
| stop    |                                    | minikube | vagrant | v1.32.0 | 19 Apr 24 04:00 UTC | 19 Apr 24 04:01 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 20 Apr 24 01:58 UTC | 20 Apr 24 02:00 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 20 Apr 24 03:09 UTC |                     |
| start   | --v=7                              | minikube | vagrant | v1.32.0 | 20 Apr 24 03:29 UTC |                     |
| delete  |                                    | minikube | vagrant | v1.32.0 | 20 Apr 24 03:42 UTC |                     |
| delete  |                                    | minikube | vagrant | v1.32.0 | 20 Apr 24 15:18 UTC | 20 Apr 24 15:19 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 20 Apr 24 15:19 UTC | 20 Apr 24 15:24 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 20 Apr 24 15:42 UTC | 20 Apr 24 16:00 UTC |
| ip      |                                    | minikube | vagrant | v1.32.0 | 20 Apr 24 16:08 UTC | 20 Apr 24 16:09 UTC |
| ip      |                                    | minikube | vagrant | v1.32.0 | 20 Apr 24 16:10 UTC | 20 Apr 24 16:10 UTC |
| service | webapp-service                     | minikube | vagrant | v1.32.0 | 20 Apr 24 16:15 UTC |                     |
| service | list                               | minikube | vagrant | v1.32.0 | 20 Apr 24 16:15 UTC | 20 Apr 24 16:15 UTC |
| service | list                               | minikube | vagrant | v1.32.0 | 20 Apr 24 16:16 UTC | 20 Apr 24 16:16 UTC |
| ip      |                                    | minikube | vagrant | v1.32.0 | 20 Apr 24 18:22 UTC | 20 Apr 24 18:22 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 22 Apr 24 22:55 UTC | 22 Apr 24 22:57 UTC |
| ip      |                                    | minikube | vagrant | v1.32.0 | 22 Apr 24 22:58 UTC | 22 Apr 24 22:58 UTC |
| stop    |                                    | minikube | vagrant | v1.32.0 | 22 Apr 24 23:06 UTC | 22 Apr 24 23:07 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 22 Apr 24 23:14 UTC | 22 Apr 24 23:16 UTC |
| service | prometheus-server-ext              | minikube | vagrant | v1.32.0 | 22 Apr 24 23:16 UTC | 22 Apr 24 23:16 UTC |
| start   | --driver=hyperv                    | minikube | vagrant | v1.32.0 | 22 Apr 24 23:26 UTC |                     |
| delete  |                                    | minikube | vagrant | v1.32.0 | 22 Apr 24 23:27 UTC | 22 Apr 24 23:27 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 22 Apr 24 23:27 UTC |                     |
| delete  |                                    | minikube | vagrant | v1.32.0 | 22 Apr 24 23:28 UTC | 22 Apr 24 23:28 UTC |
| start   | --driver=hyperv                    | minikube | vagrant | v1.32.0 | 22 Apr 24 23:28 UTC |                     |
| start   | --vm-driver=hyperkit               | minikube | vagrant | v1.32.0 | 22 Apr 24 23:30 UTC |                     |
| start   | --driver docker                    | minikube | vagrant | v1.32.0 | 22 Apr 24 23:31 UTC | 22 Apr 24 23:32 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 26 Apr 24 01:01 UTC | 26 Apr 24 01:07 UTC |
| delete  |                                    | minikube | vagrant | v1.32.0 | 26 Apr 24 01:08 UTC | 26 Apr 24 01:08 UTC |
| start   | --vm-driver=hyperkit               | minikube | vagrant | v1.32.0 | 26 Apr 24 01:09 UTC |                     |
| start   |                                    | minikube | vagrant | v1.32.0 | 26 Apr 24 02:08 UTC | 26 Apr 24 02:13 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 26 Apr 24 02:24 UTC |                     |
| stop    |                                    | minikube | vagrant | v1.32.0 | 26 Apr 24 02:26 UTC | 26 Apr 24 02:27 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 27 Apr 24 01:55 UTC | 27 Apr 24 02:00 UTC |
| delete  | pods                               | minikube | vagrant | v1.32.0 | 27 Apr 24 02:01 UTC |                     |
|         | sample-python-app-5c4ff9d694-d5xch |          |         |         |                     |                     |
|         | sample-python-app-5c4ff9d694-xbwg8 |          |         |         |                     |                     |
| delete  |                                    | minikube | vagrant | v1.32.0 | 27 Apr 24 02:02 UTC | 27 Apr 24 02:02 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 27 Apr 24 02:02 UTC | 27 Apr 24 02:07 UTC |
| stop    |                                    | minikube | vagrant | v1.32.0 | 27 Apr 24 02:19 UTC | 27 Apr 24 02:21 UTC |
| start   |                                    | minikube | vagrant | v1.32.0 | 27 Apr 24 02:22 UTC | 27 Apr 24 02:24 UTC |
| ip      |                                    | minikube | vagrant | v1.32.0 | 27 Apr 24 02:40 UTC | 27 Apr 24 02:40 UTC |
| service | grafana-ext --url                  | minikube | vagrant | v1.32.0 | 27 Apr 24 02:50 UTC |                     |
|---------|------------------------------------|----------|---------|---------|---------------------|---------------------|
* 
* ==> Last Start <==
* Log file created at: 2024/04/27 02:22:05
Running on machine: web01
Binary: Built with gc go1.21.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0427 02:22:05.717464 19836 out.go:296] Setting OutFile to fd 1 ...
I0427 02:22:05.718733 19836 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I0427 02:22:05.718746 19836 out.go:309] Setting ErrFile to fd 2...
I0427 02:22:05.718759 19836 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I0427 02:22:05.719234 19836 root.go:338] Updating PATH: /home/vagrant/.minikube/bin
I0427 02:22:05.723557 19836 out.go:303] Setting JSON to false
I0427 02:22:05.727865 19836 start.go:128] hostinfo: {"hostname":"web01","uptime":2254,"bootTime":1714182271,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"4.15.0-213-generic","kernelArch":"x86_64","virtualizationSystem":"vbox","virtualizationRole":"guest","hostId":"681bb735-e924-42b8-81ff-8a2456728914"}
I0427 02:22:05.728253 19836 start.go:138] virtualization: vbox guest
I0427 02:22:05.843106 19836 out.go:177] * minikube v1.32.0 on Ubuntu 18.04 (vbox/amd64)
I0427 02:22:05.852319 19836 notify.go:220] Checking for updates...
I0427 02:22:05.875446 19836 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0427 02:22:05.875650 19836 driver.go:378] Setting default libvirt URI to qemu:///system
I0427 02:22:06.374319 19836 docker.go:122] docker version: linux-20.10.21:
I0427 02:22:06.374465 19836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0427 02:22:07.273041 19836 info.go:266] docker info: {ID:ZHR2:6FMG:NABX:2VSX:WFGJ:ZVYL:5NP3:SLJ2:HR52:TLWI:LSD6:WEBB Containers:8 ContainersRunning:0 ContainersPaused:0 ContainersStopped:8 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:34 SystemTime:2024-04-27 02:22:06.978344598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-213-generic OperatingSystem:Ubuntu 18.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6440013824 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:web01 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0427 02:22:07.273536 19836 docker.go:295] overlay module found
I0427 02:22:07.288017 19836 out.go:177] * Using the docker driver based on existing profile
I0427 02:22:07.298767 19836 start.go:298] selected driver: docker
I0427 02:22:07.298776 19836 start.go:902] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/vagrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0427 02:22:07.298867 19836 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0427 02:22:07.299221 19836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0427 02:22:07.546458 19836 info.go:266] docker info: {ID:ZHR2:6FMG:NABX:2VSX:WFGJ:ZVYL:5NP3:SLJ2:HR52:TLWI:LSD6:WEBB Containers:8 ContainersRunning:0 ContainersPaused:0 ContainersStopped:8 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:34 SystemTime:2024-04-27 02:22:07.445663633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-213-generic OperatingSystem:Ubuntu 18.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6440013824 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:web01 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0427 02:22:07.547914 19836 cni.go:84] Creating CNI manager for ""
I0427 02:22:07.547953 19836 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0427 02:22:07.548044 19836 start_flags.go:323] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/vagrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0427 02:22:07.551934 19836 out.go:177] * Starting control plane node minikube in cluster minikube
I0427 02:22:07.562858 19836 cache.go:121] Beginning downloading kic base image for docker with docker
I0427 02:22:07.588167 19836 out.go:177] * Pulling base image ...
I0427 02:22:07.597886 19836 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0427 02:22:07.597945 19836 preload.go:148] Found local preload: /home/vagrant/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
I0427 02:22:07.598272 19836 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
I0427 02:22:07.601027 19836 cache.go:56] Caching tarball of preloaded images
I0427 02:22:07.601315 19836 preload.go:174] Found /home/vagrant/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0427 02:22:07.601336 19836 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on docker
I0427 02:22:07.601537 19836 profile.go:148] Saving config to /home/vagrant/.minikube/profiles/minikube/config.json ...
I0427 02:22:07.767641 19836 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
I0427 02:22:07.767667 19836 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
I0427 02:22:07.767691 19836 cache.go:194] Successfully downloaded all kic artifacts
I0427 02:22:07.767744 19836 start.go:365] acquiring machines lock for minikube: {Name:mk50794f3b668552bcb175548a808224fc99ceb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0427 02:22:07.792061 19836 start.go:369] acquired machines lock for "minikube" in 7.416848ms
I0427 02:22:07.792096 19836 start.go:96] Skipping create...Using existing machine configuration
I0427 02:22:07.792106 19836 fix.go:54] fixHost starting: 
I0427 02:22:07.792582 19836 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0427 02:22:07.930081 19836 fix.go:102] recreateIfNeeded on minikube: state=Stopped err=
W0427 02:22:07.930114 19836 fix.go:128] unexpected machine state, will restart: 
I0427 02:22:07.949095 19836 out.go:177] * Restarting existing docker container for "minikube" ...
I0427 02:22:07.971345 19836 cli_runner.go:164] Run: docker start minikube
I0427 02:22:11.439832 19836 cli_runner.go:217] Completed: docker start minikube: (3.460567615s)
I0427 02:22:11.440007 19836 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0427 02:22:12.298788 19836 kic.go:430] container "minikube" state is running.
I0427 02:22:12.300709 19836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0427 02:22:12.940897 19836 profile.go:148] Saving config to /home/vagrant/.minikube/profiles/minikube/config.json ...
I0427 02:22:12.941404 19836 machine.go:88] provisioning docker machine ...
I0427 02:22:12.941432 19836 ubuntu.go:169] provisioning hostname "minikube"
I0427 02:22:12.941526 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:22:13.611844 19836 main.go:141] libmachine: Using SSH client type: native
I0427 02:22:13.614094 19836 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x808a40] 0x80b720 [] 0s} 127.0.0.1 49167 }
I0427 02:22:13.614119 19836 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0427 02:22:13.628571 19836 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40628->127.0.0.1:49167: read: connection reset by peer
I0427 02:22:18.202063 19836 main.go:141] libmachine: SSH cmd err, output: : minikube
I0427 02:22:18.232979 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:22:18.684403 19836 main.go:141] libmachine: Using SSH client type: native
I0427 02:22:18.685219 19836 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x808a40] 0x80b720 [] 0s} 127.0.0.1 49167 }
I0427 02:22:18.685255 19836 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0427 02:22:20.260955 19836 main.go:141] libmachine: SSH cmd err, output: : 
I0427 02:22:20.260979 19836 ubuntu.go:175] set auth options {CertDir:/home/vagrant/.minikube CaCertPath:/home/vagrant/.minikube/certs/ca.pem CaPrivateKeyPath:/home/vagrant/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/vagrant/.minikube/machines/server.pem ServerKeyPath:/home/vagrant/.minikube/machines/server-key.pem ClientKeyPath:/home/vagrant/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/vagrant/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/vagrant/.minikube}
I0427 02:22:20.277929 19836 ubuntu.go:177] setting up certificates
I0427 02:22:20.277949 19836 provision.go:83] configureAuth start
I0427 02:22:20.278047 19836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0427 02:22:21.404591 19836 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube: (1.126469454s)
I0427 02:22:21.404647 19836 provision.go:138] copyHostCerts
I0427 02:22:21.404721 19836 exec_runner.go:144] found /home/vagrant/.minikube/ca.pem, removing ...
I0427 02:22:21.404733 19836 exec_runner.go:203] rm: /home/vagrant/.minikube/ca.pem
I0427 02:22:21.404826 19836 exec_runner.go:151] cp: /home/vagrant/.minikube/certs/ca.pem --> /home/vagrant/.minikube/ca.pem (1078 bytes)
I0427 02:22:21.405483 19836 exec_runner.go:144] found /home/vagrant/.minikube/cert.pem, removing ...
I0427 02:22:21.405496 19836 exec_runner.go:203] rm: /home/vagrant/.minikube/cert.pem
I0427 02:22:21.405560 19836 exec_runner.go:151] cp: /home/vagrant/.minikube/certs/cert.pem --> /home/vagrant/.minikube/cert.pem (1123 bytes)
I0427 02:22:21.405715 19836 exec_runner.go:144] found /home/vagrant/.minikube/key.pem, removing ...
I0427 02:22:21.405725 19836 exec_runner.go:203] rm: /home/vagrant/.minikube/key.pem
I0427 02:22:21.405820 19836 exec_runner.go:151] cp: /home/vagrant/.minikube/certs/key.pem --> /home/vagrant/.minikube/key.pem (1675 bytes)
I0427 02:22:21.405957 19836 provision.go:112] generating server cert: /home/vagrant/.minikube/machines/server.pem ca-key=/home/vagrant/.minikube/certs/ca.pem private-key=/home/vagrant/.minikube/certs/ca-key.pem org=vagrant.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0427 02:22:22.332555 19836 provision.go:172] copyRemoteCerts
I0427 02:22:22.332646 19836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0427 02:22:22.332715 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:22:22.696449 19836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:docker}
I0427 02:22:23.248065 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0427 02:22:23.422943 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0427 02:22:23.646483 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0427 02:22:23.909448 19836 provision.go:86] duration metric: configureAuth took 3.631473788s
I0427 02:22:23.909481 19836 ubuntu.go:193] setting minikube options for container-runtime
I0427 02:22:23.909830 19836 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0427 02:22:23.909926 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:22:24.842652 19836 main.go:141] libmachine: Using SSH client type: native
I0427 02:22:24.843687 19836 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x808a40] 0x80b720 [] 0s} 127.0.0.1 49167 }
I0427 02:22:24.843700 19836 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0427 02:22:25.548321 19836 main.go:141] libmachine: SSH cmd err, output: : overlay
I0427 02:22:25.548343 19836 ubuntu.go:71] root file system type: overlay
I0427 02:22:25.549216 19836 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0427 02:22:25.549345 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:22:26.662474 19836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: (1.113051539s)
I0427 02:22:26.662694 19836 main.go:141] libmachine: Using SSH client type: native
I0427 02:22:26.663355 19836 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x808a40] 0x80b720 [] 0s} 127.0.0.1 49167 }
I0427 02:22:26.663506 19836 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0427 02:22:27.236571 19836 main.go:141] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0427 02:22:27.236702 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:22:27.457431 19836 main.go:141] libmachine: Using SSH client type: native
I0427 02:22:27.458245 19836 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x808a40] 0x80b720 [] 0s} 127.0.0.1 49167 }
I0427 02:22:27.458281 19836 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0427 02:22:27.822387 19836 main.go:141] libmachine: SSH cmd err, output: : 
I0427 02:22:27.822414 19836 machine.go:91] provisioned docker machine in 14.880994338s
I0427 02:22:27.822431 19836 start.go:300] post-start starting for "minikube" (driver="docker")
I0427 02:22:27.822449 19836 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0427 02:22:27.822550 19836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0427 02:22:27.823126 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:22:28.096675 19836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:docker}
I0427 02:22:28.284348 19836 ssh_runner.go:195] Run: cat /etc/os-release
I0427 02:22:28.303894 19836 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0427 02:22:28.303928 19836 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0427 02:22:28.303941 19836 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0427 02:22:28.303950 19836 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0427 02:22:28.303962 19836 filesync.go:126] Scanning /home/vagrant/.minikube/addons for local assets ...
I0427 02:22:28.304123 19836 filesync.go:126] Scanning /home/vagrant/.minikube/files for local assets ...
I0427 02:22:28.304169 19836 start.go:303] post-start completed in 481.726122ms
I0427 02:22:28.304245 19836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0427 02:22:28.304328 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:22:28.469003 19836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:docker}
I0427 02:22:28.711670 19836 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0427 02:22:28.754911 19836 fix.go:56] fixHost completed within 20.962798052s
I0427 02:22:28.754938 19836 start.go:83] releasing machines lock for "minikube", held for 20.962854137s
I0427 02:22:28.755605 19836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0427 02:22:29.015958 19836 ssh_runner.go:195] Run: cat /version.json
I0427 02:22:29.016039 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:22:29.016201 19836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0427 02:22:29.016276 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:22:29.923007 19836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:docker}
I0427 02:22:29.923224 19836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:docker}
I0427 02:22:30.180305 19836 ssh_runner.go:235] Completed: cat /version.json: (1.164303903s)
I0427 02:22:30.180946 19836 ssh_runner.go:195] Run: systemctl --version
I0427 02:22:30.706200 19836 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (1.689947335s)
I0427 02:22:30.706597 19836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0427 02:22:30.725703 19836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0427 02:22:30.791869 19836 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0427 02:22:30.791972 19836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0427 02:22:30.827080 19836 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0427 02:22:30.827109 19836 start.go:472] detecting cgroup driver to use...
I0427 02:22:30.827163 19836 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0427 02:22:30.855194 19836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0427 02:22:30.935870 19836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0427 02:22:30.967059 19836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0427 02:22:30.998750 19836 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0427 02:22:30.998832 19836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0427 02:22:31.043355 19836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0427 02:22:31.076559 19836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0427 02:22:31.120247 19836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0427 02:22:31.157749 19836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0427 02:22:31.193723 19836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0427 02:22:31.225711 19836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0427 02:22:31.293990 19836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0427 02:22:31.323266 19836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0427 02:22:31.580775 19836 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0427 02:22:32.528090 19836 start.go:472] detecting cgroup driver to use...
I0427 02:22:32.528159 19836 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0427 02:22:32.528258 19836 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0427 02:22:32.726567 19836 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0427 02:22:32.726678 19836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0427 02:22:32.982628 19836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0427 02:22:33.197972 19836 ssh_runner.go:195] Run: which cri-dockerd
I0427 02:22:33.320878 19836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0427 02:22:33.463592 19836 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0427 02:22:33.673472 19836 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0427 02:22:34.223860 19836 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0427 02:22:35.790570 19836 ssh_runner.go:235] Completed: sudo systemctl enable docker.socket: (1.566661993s)
I0427 02:22:35.790600 19836 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I0427 02:22:35.790814 19836 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0427 02:22:35.922081 19836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0427 02:22:36.612184 19836 ssh_runner.go:195] Run: sudo systemctl restart docker
I0427 02:22:41.248458 19836 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.636220868s)
I0427 02:22:41.248562 19836 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0427 02:22:41.439061 19836 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0427 02:22:41.658585 19836 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0427 02:22:41.921990 19836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0427 02:22:42.139115 19836 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0427 02:22:42.202749 19836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0427 02:22:42.462649 19836 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0427 02:22:44.095293 19836 ssh_runner.go:235] Completed: sudo systemctl restart cri-docker: (1.632597819s)
I0427 02:22:44.095324 19836 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0427 02:22:44.095408 19836 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0427 02:22:44.137975 19836 start.go:540] Will wait 60s for crictl version
I0427 02:22:44.138037 19836 ssh_runner.go:195] Run: which crictl
I0427 02:22:44.218318 19836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0427 02:22:44.874977 19836 start.go:556] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.7
RuntimeApiVersion: v1
I0427 02:22:44.875074 19836 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0427 02:22:45.111831 19836 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0427 02:22:45.403955 19836 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
I0427 02:22:45.404490 19836 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0427 02:22:45.684394 19836 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0427 02:22:45.705402 19836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0427 02:22:45.791024 19836 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
I0427 02:22:45.791144 19836 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0427 02:22:45.955700 19836 docker.go:671] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0427 02:22:45.955740 19836 docker.go:601] Images already preloaded, skipping extraction
I0427 02:22:45.955812 19836 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0427 02:22:46.194458 19836 docker.go:671] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0427 02:22:46.194484 19836 cache_images.go:84] Images are preloaded, skipping loading
I0427 02:22:46.194597 19836 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0427 02:22:47.060909 19836 cni.go:84] Creating CNI manager for ""
I0427 02:22:47.060938 19836 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0427 02:22:47.060977 19836 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0427 02:22:47.061013 19836 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0427 02:22:47.061297 19836 kubeadm.go:181] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0427 02:22:47.061529 19836 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0427 02:22:47.061636 19836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
I0427 02:22:47.138086 19836 binaries.go:44] Found k8s binaries, skipping transfer
I0427 02:22:47.138972 19836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0427 02:22:47.170748 19836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
I0427 02:22:47.237730 19836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0427 02:22:47.322497 19836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2091 bytes)
I0427 02:22:47.396749 19836 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0427 02:22:47.443963 19836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0427 02:22:47.525724 19836 certs.go:56] Setting up /home/vagrant/.minikube/profiles/minikube for IP: 192.168.49.2
I0427 02:22:47.525772 19836 certs.go:190] acquiring lock for shared ca certs: {Name:mk99734a69f246b009342ee30e5dd25cb3da1093 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0427 02:22:47.526203 19836 certs.go:199] skipping minikubeCA CA generation: /home/vagrant/.minikube/ca.key
I0427 02:22:47.526295 19836 certs.go:199] skipping proxyClientCA CA generation: /home/vagrant/.minikube/proxy-client-ca.key
I0427 02:22:47.526445 19836 certs.go:315] skipping minikube-user signed cert generation: /home/vagrant/.minikube/profiles/minikube/client.key
I0427 02:22:47.526595 19836 certs.go:315] skipping minikube signed cert generation: /home/vagrant/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0427 02:22:47.526678 19836 certs.go:315] skipping aggregator signed cert generation: /home/vagrant/.minikube/profiles/minikube/proxy-client.key
I0427 02:22:47.527127 19836 certs.go:437] found cert: /home/vagrant/.minikube/certs/home/vagrant/.minikube/certs/ca-key.pem (1675 bytes)
I0427 02:22:47.527198 19836 certs.go:437] found cert: /home/vagrant/.minikube/certs/home/vagrant/.minikube/certs/ca.pem (1078 bytes)
I0427 02:22:47.527264 19836 certs.go:437] found cert: /home/vagrant/.minikube/certs/home/vagrant/.minikube/certs/cert.pem (1123 bytes)
I0427 02:22:47.527622 19836 certs.go:437] found cert: /home/vagrant/.minikube/certs/home/vagrant/.minikube/certs/key.pem (1675 bytes)
I0427 02:22:47.535011 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0427 02:22:47.643803 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0427 02:22:47.722097 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0427 02:22:47.962704 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0427 02:22:48.067317 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0427 02:22:48.240330 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0427 02:22:48.358199 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0427 02:22:48.453495 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0427 02:22:48.632494 19836 ssh_runner.go:362] scp /home/vagrant/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0427 02:22:48.754816 19836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0427 02:22:48.961906 19836 ssh_runner.go:195] Run: openssl version
I0427 02:22:49.077685 19836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0427 02:22:49.232203 19836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0427 02:22:49.384195 19836 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Mar 25 22:34 /usr/share/ca-certificates/minikubeCA.pem
I0427 02:22:49.384294 19836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0427 02:22:49.450636 19836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0427 02:22:49.564613 19836 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0427 02:22:49.596446 19836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0427 02:22:49.629313 19836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0427 02:22:49.666040 19836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0427 02:22:49.702645 19836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0427 02:22:49.724313 19836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0427 02:22:49.743411 19836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0427 02:22:49.764607 19836 kubeadm.go:404] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/vagrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0427 02:22:49.764807 19836 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0427 02:22:49.994893 19836 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0427 02:22:50.048000 19836 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I0427 02:22:50.048018 19836 kubeadm.go:636] restartCluster start
I0427 02:22:50.048110 19836 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0427 02:22:50.141058 19836 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:

I0427 02:22:50.142045 19836 kubeconfig.go:135] verify returned: extract IP: "minikube" does not appear in /home/vagrant/.kube/config
I0427 02:22:50.142450 19836 kubeconfig.go:146] "minikube" context is missing from /home/vagrant/.kube/config - will repair!
I0427 02:22:50.145770 19836 lock.go:35] WriteFile acquiring /home/vagrant/.kube/config: {Name:mk584b224ce915a9a9ad34e6e788268489afc021 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0427 02:22:50.156897 19836 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0427 02:22:50.203349 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:50.203730 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:50.244749 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:50.255835 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:50.255977 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:50.338155 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:50.839580 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:50.839905 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:50.883811 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:51.338641 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:51.338781 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:51.365556 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:51.841864 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:51.841977 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:51.882070 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:52.339782 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:52.339926 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:52.374513 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:52.838823 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:52.838931 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:52.884094 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:53.339054 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:53.339191 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:53.369605 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:53.839846 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:53.839984 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:53.899776 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:54.338626 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:54.338731 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:54.372969 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:54.838542 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:54.838939 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:54.866444 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:55.338541 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:55.338628 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:55.433819 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:55.839546 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:55.839774 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:55.875832 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:56.344184 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:56.344313 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:56.413883 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:56.838394 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:56.838514 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:56.873130 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:57.339734 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:57.339845 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:57.368840 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:57.838365 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:57.838582 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:57.881065 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:58.338457 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:58.338578 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:58.370883 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:58.839319 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:58.839617 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:58.882770 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:59.338841 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:59.338942 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:59.372649 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:22:59.838341 19836 api_server.go:166] Checking apiserver status ...
I0427 02:22:59.838591 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0427 02:22:59.868369 19836 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:

I0427 02:23:00.204874 19836 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
I0427 02:23:00.204908 19836 kubeadm.go:1128] stopping kube-system containers ...
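
[Note: the Checking/Run pairs above are minikube polling roughly twice a second for a kube-apiserver process. In pgrep, -f matches the full command line, -x requires the pattern to match it exactly, and -n keeps only the newest match, so "Process exited with status 1" simply means no matching process exists yet; after ~10s of misses minikube concludes "needs reconfigure" and takes the restart path. The same probe by hand, assuming minikube ssh works:

  minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
    || echo "kube-apiserver process not running yet"
]
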
I0427 02:23:00.205032 19836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0427 02:23:00.556942 19836 docker.go:469] Stopping containers: [41caa1a81f66 55620d370cac b1e971b66fb2 c07f5aedabbf b954d7913f2d d25c9a7433ac 3ad0f2ae7196 116834c2dbca 2bca960dd49c 031a421d02e5 5e979886135b 632bfebdfad9 b78e506e5fcb 9a332c5b1e30 ce8ffd913ace]
I0427 02:23:00.557058 19836 ssh_runner.go:195] Run: docker stop 41caa1a81f66 55620d370cac b1e971b66fb2 c07f5aedabbf b954d7913f2d d25c9a7433ac 3ad0f2ae7196 116834c2dbca 2bca960dd49c 031a421d02e5 5e979886135b 632bfebdfad9 b78e506e5fcb 9a332c5b1e30 ce8ffd913ace
I0427 02:23:01.103274 19836 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0427 02:23:01.150269 19836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0427 02:23:01.198184 19836 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Apr 27 02:04 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Apr 27 02:04 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1971 Apr 27 02:06 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5600 Apr 27 02:04 /etc/kubernetes/scheduler.conf

I0427 02:23:01.198292 19836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0427 02:23:01.241968 19836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0427 02:23:01.312550 19836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0427 02:23:01.337023 19836 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:

stderr:

I0427 02:23:01.337106 19836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0427 02:23:01.362358 19836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0427 02:23:01.391237 19836 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:

stderr:

I0427 02:23:01.391316 19836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0427 02:23:01.446081 19836 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0427 02:23:01.482311 19836 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0427 02:23:01.482338 19836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0427 02:23:02.016321 19836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0427 02:23:04.523935 19836 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.507563376s)
I0427 02:23:04.523984 19836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0427 02:23:05.623993 19836 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (1.099960362s)
I0427 02:23:05.624036 19836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0427 02:23:08.476356 19836 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml": (2.852275706s)
I0427 02:23:08.476396 19836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0427 02:23:10.273438 19836 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml": (1.797003263s)
I0427 02:23:10.273488 19836 api_server.go:52] waiting for apiserver process to appear ...
I0427 02:23:10.273575 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:10.517600 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:11.235099 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:11.750026 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:12.223782 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:12.729688 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:13.234619 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:13.731550 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:14.244345 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:14.728572 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:15.224246 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:15.739920 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:16.230650 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:16.729319 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:17.223737 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:17.724395 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:18.227814 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:18.743246 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:19.243488 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:19.733486 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:20.233173 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:20.724213 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:21.229307 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:21.724725 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:22.226493 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:22.736310 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:23.228330 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:23.723793 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:24.223074 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:24.723788 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:25.249479 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:25.723218 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:26.299652 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:26.725996 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:27.224603 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:27.723286 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:28.323946 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:28.730102 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:29.246724 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:29.841693 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:30.226957 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:31.011604 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:23:31.992566 19836 api_server.go:72] duration metric: took 21.719069635s to wait for apiserver process to appear ...
I0427 02:23:31.992592 19836 api_server.go:88] waiting for apiserver healthz status ...
I0427 02:23:31.992620 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:32.021637 19836 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
I0427 02:23:32.021675 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:32.022557 19836 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
I0427 02:23:32.527809 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:32.536209 19836 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
I0427 02:23:33.035605 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:33.036218 19836 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
I0427 02:23:33.523782 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:33.525066 19836 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
I0427 02:23:34.023084 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
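
[Note: rather than a full "kubeadm init", the restart path above replays individual init phases against the generated config. A sketch of the equivalent sequence run inside the node, with PATH pinned to the cached v1.28.3 binaries exactly as the log shows:

  sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
]
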
I0427 02:23:34.025337 19836 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
I0427 02:23:34.523930 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:39.525620 19836 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0427 02:23:39.525676 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:44.526667 19836 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0427 02:23:44.526722 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:49.529408 19836 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0427 02:23:49.529452 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:54.531031 19836 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0427 02:23:54.532149 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:58.101253 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0427 02:23:58.101299 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0427 02:23:58.101327 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
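
[Note: the probe sequence above is the usual startup progression: "connection refused" while the apiserver container is still coming up, client timeouts while it binds, a 403 because minikube's probe is anonymous and RBAC has not bootstrapped yet, and then 500s that enumerate which startup hooks are still failing. A hedged sketch of the same checks by hand (the ?verbose form is, to my understanding, what produces the per-check listing even once the endpoint is healthy):

  curl -ks https://192.168.49.2:8443/healthz?verbose   # anonymous; may return 403 early on
  kubectl get --raw='/healthz?verbose'                 # authenticated view of the same checks
]
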
I0427 02:23:58.278833 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:23:58.278875 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:23:58.524083 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:58.675076 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:23:58.675112 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:23:59.023676 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:59.195318 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:23:59.195376 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:23:59.523117 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:23:59.697329 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:23:59.697388 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:24:00.024997 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:00.107189 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:24:00.107230 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:24:00.542089 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:00.674469 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:24:00.674506 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:24:01.025161 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:01.311920 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:24:01.311963 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:24:01.525220 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:01.923502 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:24:01.923542 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:24:02.024381 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:02.192989 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:24:02.193028 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:24:02.544904 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:02.792394 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:24:02.792435 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:24:03.043623 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:03.699930 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0427 02:24:03.699959 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0427 02:24:03.699980 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:04.083925 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [... verbose check output identical to the 02:24:03 responses above (every check ok except [-]poststarthook/rbac/bootstrap-roles failed: reason withheld); elided here and on the polls below ...]
W0427 02:24:04.086419 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:04.086459 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:04.304651 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:04.304686 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:04.589754 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:04.778427 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:04.778467 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:05.024449 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:05.291804 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:05.291859 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:05.556166 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:05.813124 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:05.813163 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:06.029267 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:06.176496 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:06.176535 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:06.531567 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:06.927028 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:06.927072 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:07.023940 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:07.176747 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:07.176789 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:07.523769 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:07.711097 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:07.711135 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:08.032488 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:08.388806 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:08.388838 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:08.523882 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:08.775879 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:08.775914 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:09.039357 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:09.293885 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:09.293924 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:09.522849 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:09.611191 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:09.611234 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:10.023620 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:10.116225 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:10.116270 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:10.523171 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:10.585001 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:10.585039 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:11.023893 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:11.300802 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:11.300841 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [...]
I0427 02:24:11.528923 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:11.611144 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [...]
W0427 02:24:11.611179 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed I0427 02:24:12.031544 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0427 02:24:12.173481 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed W0427 02:24:12.173514 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed I0427 02:24:12.523919 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0427 02:24:12.583916 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed W0427 02:24:12.583950 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed I0427 02:24:13.032786 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0427 02:24:13.195260 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed W0427 02:24:13.195295 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed I0427 02:24:13.523155 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0427 02:24:13.699607 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed W0427 02:24:13.699648 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed I0427 02:24:14.023967 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0427 02:24:14.149766 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed W0427 02:24:14.149809 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed I0427 02:24:14.523772 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0427 02:24:14.685333 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed W0427 02:24:14.685358 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed I0427 02:24:15.023664 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0427 02:24:15.289573 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed W0427 02:24:15.289608 19836 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
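The failing check above reports "reason withheld" because the apiserver only shows verbose failure detail to authorized callers. While such a wait loop is spinning, the live per-check report can be pulled with an authenticated request (a sketch, assuming the kubeconfig minikube wrote to /home/vagrant/.kube/config is the active one):

# Verbose health report, one line per check, via kubectl's authenticated raw client
kubectl get --raw='/healthz?verbose'

# Each check is also exposed as its own endpoint, e.g. the one failing here
kubectl get --raw='/healthz/poststarthook/rbac/bootstrap-roles'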
I0427 02:24:15.558148 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:15.773876 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 200: ok
I0427 02:24:15.907253 19836 api_server.go:141] control plane version: v1.28.3
I0427 02:24:15.907299 19836 api_server.go:131] duration metric: took 43.914693957s to wait for apiserver health ...
I0427 02:24:15.907316 19836 cni.go:84] Creating CNI manager for ""
I0427 02:24:15.907346 19836 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0427 02:24:15.934567 19836 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0427 02:24:15.942796 19836 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0427 02:24:16.210672 19836 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0427 02:24:16.721074 19836 system_pods.go:43] waiting for kube-system pods to appear ...
I0427 02:24:17.035121 19836 system_pods.go:59] 7 kube-system pods found
I0427 02:24:17.035159 19836 system_pods.go:61] "coredns-5dd5756b68-swq5c" [b5551d7f-7040-42a6-8dcd-2ca91d12b367] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0427 02:24:17.035170 19836 system_pods.go:61] "etcd-minikube" [4c0d0f9d-1212-4ab7-8f4c-9ee39e025294] Running
I0427 02:24:17.035186 19836 system_pods.go:61] "kube-apiserver-minikube" [6a838324-58d4-42b1-bdfb-91711d971c44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0427 02:24:17.035199 19836 system_pods.go:61] "kube-controller-manager-minikube" [e8b2975c-fa8a-452b-86a6-23be82a30c2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0427 02:24:17.035211 19836 system_pods.go:61] "kube-proxy-z5fkd" [124f7a28-7df6-4b11-a0ab-b92008d9c32e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0427 02:24:17.035221 19836 system_pods.go:61] "kube-scheduler-minikube" [24e05ad7-ffce-481a-aa2e-1f33e16844be] Running
I0427 02:24:17.035235 19836 system_pods.go:61] "storage-provisioner" [4eca0b32-4f34-4a7b-839a-2643f3c8dafe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0427 02:24:17.035246 19836 system_pods.go:74] duration metric: took 314.151082ms to wait for pod list to return data ...
I0427 02:24:17.036066 19836 node_conditions.go:102] verifying NodePressure condition ...
I0427 02:24:17.125504 19836 node_conditions.go:122] node storage ephemeral capacity is 40581564Ki
I0427 02:24:17.125535 19836 node_conditions.go:123] node cpu capacity is 6
I0427 02:24:17.125556 19836 node_conditions.go:105] duration metric: took 89.476953ms to run NodePressure ...
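The 457-byte 1-k8s.conflist pushed above is the bridge CNI configuration minikube generates for the docker driver. A minimal sketch of such a conflist follows; the field values are illustrative assumptions, not the exact bytes minikube wrote, though the 10.244.0.0/24 subnet matches the PodCIDR reported later in this log:

# Illustrative bridge CNI config of the kind scp'd to /etc/cni/net.d above (run inside the node)
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "forceAddress": false,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF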
I0427 02:24:17.125590 19836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0427 02:24:24.683778 19836 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (7.558146343s)
I0427 02:24:24.683825 19836 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0427 02:24:24.795929 19836 ops.go:34] apiserver oom_adj: -16
I0427 02:24:24.795952 19836 kubeadm.go:640] restartCluster took 1m34.747923431s
I0427 02:24:24.795966 19836 kubeadm.go:406] StartCluster complete in 1m35.031374757s
I0427 02:24:24.795998 19836 settings.go:142] acquiring lock: {Name:mkb2c3059065ecb0ccdda4c7ff3af85f8f0082c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0427 02:24:24.796213 19836 settings.go:150] Updating kubeconfig: /home/vagrant/.kube/config
I0427 02:24:24.797521 19836 lock.go:35] WriteFile acquiring /home/vagrant/.kube/config: {Name:mk584b224ce915a9a9ad34e6e788268489afc021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0427 02:24:24.823193 19836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0427 02:24:24.827073 19836 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I0427 02:24:24.831847 19836 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
I0427 02:24:24.836544 19836 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0427 02:24:24.836576 19836 addons.go:231] Setting addon storage-provisioner=true in "minikube"
W0427 02:24:24.836589 19836 addons.go:240] addon storage-provisioner should already be in state true
I0427 02:24:24.836660 19836 host.go:66] Checking if "minikube" exists ...
I0427 02:24:24.836956 19836 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0427 02:24:24.836980 19836 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0427 02:24:24.837380 19836 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0427 02:24:24.837490 19836 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0427 02:24:24.918548 19836 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0427 02:24:24.918588 19836 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0427 02:24:25.011601 19836 out.go:177] * Verifying Kubernetes components...
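The kubeadm run above replays only the addon phase: on a cluster restart it re-applies the two built-in addons, CoreDNS and kube-proxy, from the saved cluster config. The sub-phases can be invoked individually the same way (a sketch reusing the exact paths from the log):

# Re-apply a single built-in addon instead of all of them
sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
  kubeadm init phase addon coredns --config /var/tmp/minikube/kubeadm.yaml
sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
  kubeadm init phase addon kube-proxy --config /var/tmp/minikube/kubeadm.yaml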
I0427 02:24:25.132298 19836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0427 02:24:25.634512 19836 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0427 02:24:25.788031 19836 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0427 02:24:25.788053 19836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0427 02:24:25.804372 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:24:25.911926 19836 cli_runner.go:217] Completed: docker container inspect minikube --format={{.State.Status}}: (1.074494231s)
I0427 02:24:25.913333 19836 addons.go:231] Setting addon default-storageclass=true in "minikube"
W0427 02:24:25.913348 19836 addons.go:240] addon default-storageclass should already be in state true
I0427 02:24:25.913395 19836 host.go:66] Checking if "minikube" exists ...
I0427 02:24:25.913940 19836 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0427 02:24:26.602359 19836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:docker}
I0427 02:24:26.613845 19836 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0427 02:24:26.613870 19836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0427 02:24:26.614003 19836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0427 02:24:27.197531 19836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/vagrant/.minikube/machines/minikube/id_rsa Username:docker}
I0427 02:24:27.693618 19836 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (2.87039399s)
I0427 02:24:27.693698 19836 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0427 02:24:27.693722 19836 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.561404451s)
I0427 02:24:27.693757 19836 api_server.go:52] waiting for apiserver process to appear ...
I0427 02:24:27.693813 19836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0427 02:24:27.817270 19836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0427 02:24:27.903505 19836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0427 02:24:27.907667 19836 api_server.go:72] duration metric: took 2.989043374s to wait for apiserver process to appear ...
I0427 02:24:27.907688 19836 api_server.go:88] waiting for apiserver healthz status ...
I0427 02:24:27.907712 19836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0427 02:24:28.001181 19836 api_server.go:279] https://192.168.49.2:8443/healthz returned 200: ok
I0427 02:24:28.011652 19836 api_server.go:141] control plane version: v1.28.3
I0427 02:24:28.011675 19836 api_server.go:131] duration metric: took 103.975285ms to wait for apiserver health ...
I0427 02:24:28.011688 19836 system_pods.go:43] waiting for kube-system pods to appear ...
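The two kubectl apply runs above are how minikube's own addons land in the cluster: manifests are scp'd to /etc/kubernetes/addons/ inside the node and applied with the node-local kubeconfig. From the host, the result can be checked afterwards (assuming the default "minikube" profile):

# Addon states as minikube tracks them
minikube addons list

# The objects those two manifests create (minikube's default StorageClass is typically named "standard")
kubectl -n kube-system get pod storage-provisioner
kubectl get storageclass standard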
I0427 02:24:28.112192 19836 system_pods.go:59] 7 kube-system pods found
I0427 02:24:28.112217 19836 system_pods.go:61] "coredns-5dd5756b68-swq5c" [b5551d7f-7040-42a6-8dcd-2ca91d12b367] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0427 02:24:28.112226 19836 system_pods.go:61] "etcd-minikube" [4c0d0f9d-1212-4ab7-8f4c-9ee39e025294] Running
I0427 02:24:28.112233 19836 system_pods.go:61] "kube-apiserver-minikube" [6a838324-58d4-42b1-bdfb-91711d971c44] Running
I0427 02:24:28.112241 19836 system_pods.go:61] "kube-controller-manager-minikube" [e8b2975c-fa8a-452b-86a6-23be82a30c2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0427 02:24:28.112247 19836 system_pods.go:61] "kube-proxy-z5fkd" [124f7a28-7df6-4b11-a0ab-b92008d9c32e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0427 02:24:28.112253 19836 system_pods.go:61] "kube-scheduler-minikube" [24e05ad7-ffce-481a-aa2e-1f33e16844be] Running
I0427 02:24:28.112259 19836 system_pods.go:61] "storage-provisioner" [4eca0b32-4f34-4a7b-839a-2643f3c8dafe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0427 02:24:28.112267 19836 system_pods.go:74] duration metric: took 100.57043ms to wait for pod list to return data ...
I0427 02:24:28.112278 19836 kubeadm.go:581] duration metric: took 3.193664133s to wait for : map[apiserver:true system_pods:true] ...
I0427 02:24:28.112294 19836 node_conditions.go:102] verifying NodePressure condition ...
I0427 02:24:28.300214 19836 node_conditions.go:122] node storage ephemeral capacity is 40581564Ki
I0427 02:24:28.300236 19836 node_conditions.go:123] node cpu capacity is 6
I0427 02:24:28.300252 19836 node_conditions.go:105] duration metric: took 187.951548ms to run NodePressure ...
I0427 02:24:28.300272 19836 start.go:228] waiting for startup goroutines ...
I0427 02:24:41.707224 19836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.88990959s)
I0427 02:24:41.707300 19836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.803773659s)
I0427 02:24:41.809981 19836 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0427 02:24:41.820765 19836 addons.go:502] enable addons completed in 16.988949928s: enabled=[storage-provisioner default-storageclass]
I0427 02:24:41.820886 19836 start.go:233] waiting for cluster config update ...
I0427 02:24:41.820901 19836 start.go:242] writing updated cluster config ...
I0427 02:24:41.821370 19836 ssh_runner.go:195] Run: rm -f paused
I0427 02:24:43.467966 19836 start.go:600] kubectl: 1.18.2-0-g52c56ce, cluster: 1.28.3 (minor skew: 10)
I0427 02:24:43.471562 19836 out.go:177]
W0427 02:24:43.554133 19836 out.go:239] ! /usr/local/bin/kubectl is version 1.18.2-0-g52c56ce, which may have incompatibilities with Kubernetes 1.28.3.
I0427 02:24:43.570720 19836 out.go:177]   - Want kubectl v1.28.3? Try 'minikube kubectl -- get pods -A'
I0427 02:24:43.584198 19836 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
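The warning above is about client/server version skew: kubectl is only supported within one minor version of the apiserver, and 1.18 against 1.28 is a skew of ten. The log's own suggestion sidesteps upgrading the host binary by using the version-matched kubectl that minikube downloads alongside the cluster:

# Run the bundled, version-matched kubectl
minikube kubectl -- get pods -A

# Or alias it for the current shell session
alias kubectl='minikube kubectl --'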
*
* ==> Docker <==
*
Apr 27 02:42:44 minikube cri-dockerd[1121]: time="2024-04-27T02:42:44Z" level=info msg="Pulling image registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.12.0: 33e068de2649: Extracting [==================================================>] 122B/122B"
[... layer download/extraction progress for kube-state-metrics:v2.12.0, logged every ~10s until 02:43:34 ...]
Apr 27 02:43:36 minikube cri-dockerd[1121]: time="2024-04-27T02:43:36Z" level=info msg="Stop pulling image registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.12.0: Status: Downloaded newer image for registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.12.0"
[... layer progress for quay.io/prometheus/pushgateway:v1.8.0, logged every ~10s from 02:43:53 to 02:44:43 ...]
Apr 27 02:44:06 minikube cri-dockerd[1121]: time="2024-04-27T02:44:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d9f60fd09f1db077a21b134db4a0a0c94878593d3643e575f7656f3b4c61314c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Apr 27 02:44:45 minikube cri-dockerd[1121]: time="2024-04-27T02:44:45Z" level=info msg="Stop pulling image quay.io/prometheus/pushgateway:v1.8.0: Status: Downloaded newer image for quay.io/prometheus/pushgateway:v1.8.0"
[... layer progress for quay.io/prometheus-operator/prometheus-config-reloader:v0.72.0, logged every ~10s from 02:45:01 to 02:45:50 ...]
Apr 27 02:45:52 minikube cri-dockerd[1121]: time="2024-04-27T02:45:52Z" level=info msg="Stop pulling image quay.io/prometheus-operator/prometheus-config-reloader:v0.72.0: Status: Downloaded newer image for quay.io/prometheus-operator/prometheus-config-reloader:v0.72.0"
[... layer progress for quay.io/prometheus/alertmanager:v0.27.0, logged every ~10s from 02:46:11 to 02:47:41 ...]
Apr 27 02:47:43 minikube cri-dockerd[1121]: time="2024-04-27T02:47:43Z" level=info msg="Stop pulling image quay.io/prometheus/alertmanager:v0.27.0: Status: Downloaded newer image for quay.io/prometheus/alertmanager:v0.27.0"
[... layer progress for docker.io/grafana/grafana:10.4.1, logged every ~10s from 02:47:59 and still in progress when this log was captured ...]
Apr 27 02:52:19 minikube cri-dockerd[1121]: time="2024-04-27T02:52:19Z" level=info msg="Pulling image docker.io/grafana/grafana:10.4.1: 3e634c55be4d: Extracting [=============================================>     ]  52.92MB/57.54MB"

*
* ==> container status <==
*
CONTAINER ID   IMAGE                                                    COMMAND                  CREATED          STATUS         PORTS   NAMES
939b1d74f914   quay.io/prometheus/alertmanager                          "/bin/alertmanager -…"   4 minutes ago    Up 4 minutes           k8s_alertmanager_prometheus-alertmanager-0_default_de496f5c-f904-448e-aa42-8ae7d081cbbb_0
ddb274371a7f   quay.io/prometheus-operator/prometheus-config-reloader   "/bin/prometheus-con…"   6 minutes ago    Up 6 minutes           k8s_prometheus-server-configmap-reload_prometheus-server-579dc9cfdf-jz9x9_default_a130932b-2418-494b-ba11-5b972d76228f_0
3ce001a64641   quay.io/prometheus/pushgateway                           "/bin/pushgateway"       7 minutes ago    Up 7 minutes           k8s_pushgateway_prometheus-prometheus-pushgateway-568fbf799-qlhwx_default_c6a62de5-75f7-4b1b-be1b-e977115e841c_0
d9f60fd09f1d   registry.k8s.io/pause:3.9                                "/pause"                 8 minutes ago    Up 8 minutes           k8s_POD_grafana-6f756986c7-z2c7z_default_843c25de-bed7-4a4a-a384-f931d760a636_0
16c8ce6f310f   registry.k8s.io/kube-state-metrics/kube-state-metrics   "/kube-state-metrics…"   8 minutes ago    Up 8 minutes                  k8s_kube-state-metrics_prometheus-kube-state-metrics-6b7d7b9bd9-shbt6_default_27904037-4b0a-4d90-bbbf-328c72d6e434_0
306b1382b5da   quay.io/prometheus/node-exporter                        "/bin/node_exporter …"   10 minutes ago   Up 10 minutes                 k8s_node-exporter_prometheus-prometheus-node-exporter-rmfcg_default_e0460706-bb9a-4bc0-8180-402a06213d99_0
b31e7b34784e   registry.k8s.io/pause:3.9                               "/pause"                 13 minutes ago   Up 13 minutes                 k8s_POD_prometheus-alertmanager-0_default_de496f5c-f904-448e-aa42-8ae7d081cbbb_0
efb2e8b27b6a   registry.k8s.io/pause:3.9                               "/pause"                 13 minutes ago   Up 13 minutes                 k8s_POD_prometheus-prometheus-pushgateway-568fbf799-qlhwx_default_c6a62de5-75f7-4b1b-be1b-e977115e841c_0
9721815b550d   registry.k8s.io/pause:3.9                               "/pause"                 13 minutes ago   Up 13 minutes                 k8s_POD_prometheus-server-579dc9cfdf-jz9x9_default_a130932b-2418-494b-ba11-5b972d76228f_0
655bbd5fc6ae   registry.k8s.io/pause:3.9                               "/pause"                 13 minutes ago   Up 13 minutes                 k8s_POD_prometheus-kube-state-metrics-6b7d7b9bd9-shbt6_default_27904037-4b0a-4d90-bbbf-328c72d6e434_0
a1c7eaff7192   registry.k8s.io/pause:3.9                               "/pause"                 13 minutes ago   Up 13 minutes                 k8s_POD_prometheus-prometheus-node-exporter-rmfcg_default_e0460706-bb9a-4bc0-8180-402a06213d99_0
1a86c17b1419   6e38f40d628d                                            "/storage-provisioner"   25 minutes ago   Up 25 minutes                 k8s_storage-provisioner_storage-provisioner_kube-system_4eca0b32-4f34-4a7b-839a-2643f3c8dafe_2
c73337051432   6e38f40d628d                                            "/storage-provisioner"   27 minutes ago   Exited (1) 27 minutes ago     k8s_storage-provisioner_storage-provisioner_kube-system_4eca0b32-4f34-4a7b-839a-2643f3c8dafe_1
bfa0a403be0e   ead0a4a53df8                                            "/coredns -conf /etc…"   27 minutes ago   Up 27 minutes                 k8s_coredns_coredns-5dd5756b68-swq5c_kube-system_b5551d7f-7040-42a6-8dcd-2ca91d12b367_1
161a3e514a15   bfc896cf80fb                                            "/usr/local/bin/kube…"   27 minutes ago   Up 27 minutes                 k8s_kube-proxy_kube-proxy-z5fkd_kube-system_124f7a28-7df6-4b11-a0ab-b92008d9c32e_1
f6536c32b376   10baa1ca1706                                            "kube-controller-man…"   28 minutes ago   Up 28 minutes                 k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_7da72fc2e2cfb27aacf6cffd1c72da00_3
c071b4c1cdd4   registry.k8s.io/pause:3.9                               "/pause"                 28 minutes ago   Up 28 minutes                 k8s_POD_coredns-5dd5756b68-swq5c_kube-system_b5551d7f-7040-42a6-8dcd-2ca91d12b367_1
7480f707f81c   registry.k8s.io/pause:3.9                               "/pause"                 28 minutes ago   Up 28 minutes                 k8s_POD_kube-proxy-z5fkd_kube-system_124f7a28-7df6-4b11-a0ab-b92008d9c32e_1
23cb699e417e   registry.k8s.io/pause:3.9                               "/pause"                 28 minutes ago   Up 28 minutes                 k8s_POD_storage-provisioner_kube-system_4eca0b32-4f34-4a7b-839a-2643f3c8dafe_1
a7c30ea775a8   6d1b4fd1b182                                            "kube-scheduler --au…"   29 minutes ago   Up 28 minutes                 k8s_kube-scheduler_kube-scheduler-minikube_kube-system_75ac196d3709dde303d8a81c035c2c28_1
32e924113141   537434729123                                            "kube-apiserver --ad…"   29 minutes ago   Up 28 minutes                 k8s_kube-apiserver_kube-apiserver-minikube_kube-system_55b4bbe24dac3803a7379f9ae169d6ba_1
5f82d367b76b   10baa1ca1706                                            "kube-controller-man…"   29 minutes ago   Exited (1) 28 minutes ago     k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_7da72fc2e2cfb27aacf6cffd1c72da00_2
1bbe5b199f8c   73deb9a3f702                                            "etcd --advertise-cl…"   29 minutes ago   Up 28 minutes                 k8s_etcd_etcd-minikube_kube-system_9aac5b5c8815def09a2ef9e37b89da55_1
4eb47b630404   registry.k8s.io/pause:3.9                               "/pause"                 29 minutes ago   Up 29 minutes                 k8s_POD_kube-scheduler-minikube_kube-system_75ac196d3709dde303d8a81c035c2c28_1
ec8738ba5235   registry.k8s.io/pause:3.9                               "/pause"                 29 minutes ago   Up 29 minutes                 k8s_POD_kube-controller-manager-minikube_kube-system_7da72fc2e2cfb27aacf6cffd1c72da00_1
d785be2a982a   registry.k8s.io/pause:3.9                               "/pause"                 29 minutes ago   Up 29 minutes                 k8s_POD_kube-apiserver-minikube_kube-system_55b4bbe24dac3803a7379f9ae169d6ba_1
d029c63979c7   registry.k8s.io/pause:3.9                               "/pause"                 29 minutes ago   Up 29 minutes                 k8s_POD_etcd-minikube_kube-system_9aac5b5c8815def09a2ef9e37b89da55_1
b1e971b66fb2   ead0a4a53df8                                            "/coredns -conf /etc…"   45 minutes ago   Exited (0) 30 minutes ago     k8s_coredns_coredns-5dd5756b68-swq5c_kube-system_b5551d7f-7040-42a6-8dcd-2ca91d12b367_0
c07f5aedabbf   bfc896cf80fb                                            "/usr/local/bin/kube…"   45 minutes ago   Exited (2) 30 minutes ago     k8s_kube-proxy_kube-proxy-z5fkd_kube-system_124f7a28-7df6-4b11-a0ab-b92008d9c32e_0
b954d7913f2d   registry.k8s.io/pause:3.9                               "/pause"                 45 minutes ago   Exited (0) 30 minutes ago     k8s_POD_coredns-5dd5756b68-swq5c_kube-system_b5551d7f-7040-42a6-8dcd-2ca91d12b367_0
d25c9a7433ac   registry.k8s.io/pause:3.9                               "/pause"                 45 minutes ago   Exited (0) 30 minutes ago     k8s_POD_kube-proxy-z5fkd_kube-system_124f7a28-7df6-4b11-a0ab-b92008d9c32e_0
116834c2dbca   6d1b4fd1b182                                            "kube-scheduler --au…"   47 minutes ago   Exited (1) 30 minutes ago     k8s_kube-scheduler_kube-scheduler-minikube_kube-system_75ac196d3709dde303d8a81c035c2c28_0
031a421d02e5   537434729123                                            "kube-apiserver --ad…"   47 minutes ago   Exited (255) 30 minutes ago   k8s_kube-apiserver_kube-apiserver-minikube_kube-system_55b4bbe24dac3803a7379f9ae169d6ba_0
5e979886135b   73deb9a3f702                                            "etcd --advertise-cl…"   47 minutes ago   Exited (0) 30 minutes ago     k8s_etcd_etcd-minikube_kube-system_9aac5b5c8815def09a2ef9e37b89da55_0
632bfebdfad9   registry.k8s.io/pause:3.9                               "/pause"                 47 minutes ago   Exited (0) 30 minutes ago     k8s_POD_kube-scheduler-minikube_kube-system_75ac196d3709dde303d8a81c035c2c28_0
9a332c5b1e30   registry.k8s.io/pause:3.9                               "/pause"                 47 minutes ago   Exited (0) 30 minutes ago     k8s_POD_kube-apiserver-minikube_kube-system_55b4bbe24dac3803a7379f9ae169d6ba_0
ce8ffd913ace   registry.k8s.io/pause:3.9                               "/pause"                 47 minutes ago   Exited (0) 30 minutes ago     k8s_POD_etcd-minikube_kube-system_9aac5b5c8815def09a2ef9e37b89da55_0
time="2024-04-27T02:52:23Z" level=fatal msg="unable to determine runtime API version: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.209983528s [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s * * ==> coredns [bfa0a403be0e] <== * [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server .:53 [INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86 CoreDNS-1.10.1 linux/amd64, go1.20, 055b2c3 [INFO] 127.0.0.1:36633 - 20880 "HINFO IN 5015380512031166642.6655701442997652719. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.197403684s * * ==> describe nodes <== * Name: minikube Roles: control-plane Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d minikube.k8s.io/name=minikube minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2024_04_27T02_06_19_0700 minikube.k8s.io/version=v1.32.0 node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Sat, 27 Apr 2024 02:05:33 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Sat, 27 Apr 2024 02:52:30 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Sat, 27 Apr 2024 02:48:04 +0000 Sat, 27 Apr 2024 02:24:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Sat, 27 Apr 2024 02:48:04 +0000 Sat, 27 Apr 2024 02:24:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Sat, 27 Apr 2024 02:48:04 +0000 Sat, 27 Apr 2024 02:24:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Sat, 27 Apr 2024 02:48:04 +0000 Sat, 27 Apr 2024 02:24:05 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: minikube Capacity: cpu: 6 ephemeral-storage: 40581564Ki hugepages-2Mi: 0 memory: 6289076Ki pods: 110 Allocatable: cpu: 6 ephemeral-storage: 40581564Ki hugepages-2Mi: 0 memory: 6289076Ki pods: 110 System Info: Machine ID: 191cc330d1bf46cf9408ae7283522698 System UUID: ff1817b2-8882-4273-a6be-96d9dc0ca632 Boot ID: b01d58cd-08da-4022-b2fa-87864d36dfae Kernel Version: 4.15.0-213-generic OS Image: Ubuntu 22.04.3 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://24.0.7 Kubelet Version: v1.28.3 Kube-Proxy Version: v1.28.3 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (13 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default grafana-6f756986c7-z2c7z 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 8m49s default prometheus-alertmanager-0 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 14m default prometheus-kube-state-metrics-6b7d7b9bd9-shbt6 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 14m default prometheus-prometheus-node-exporter-rmfcg 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 14m default prometheus-prometheus-pushgateway-568fbf799-qlhwx 0 
(0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 14m default prometheus-server-579dc9cfdf-jz9x9 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 14m kube-system coredns-5dd5756b68-swq5c 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (1%!)(MISSING) 170Mi (2%!)(MISSING) 45m kube-system etcd-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (1%!)(MISSING) 0 (0%!)(MISSING) 46m kube-system kube-apiserver-minikube 250m (4%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 46m kube-system kube-controller-manager-minikube 200m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 46m kube-system kube-proxy-z5fkd 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 45m kube-system kube-scheduler-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 46m kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 45m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (12%!)(MISSING) 0 (0%!)(MISSING) memory 170Mi (2%!)(MISSING) 170Mi (2%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 27m kube-proxy Normal Starting 45m kube-proxy Normal NodeAllocatableEnforced 47m kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 47m (x8 over 47m) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 47m (x8 over 47m) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 47m (x7 over 47m) kubelet Node minikube status is now: NodeHasSufficientPID Normal Starting 46m kubelet Starting kubelet. Normal NodeHasSufficientPID 46m kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientMemory 46m kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 46m kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeNotReady 46m kubelet Node minikube status is now: NodeNotReady Normal NodeAllocatableEnforced 46m kubelet Updated Node Allocatable limit across pods Normal NodeReady 46m kubelet Node minikube status is now: NodeReady Normal RegisteredNode 46m node-controller Node minikube event: Registered Node minikube in Controller Normal NodeNotReady 30m node-controller Node minikube status is now: NodeNotReady Normal Starting 29m kubelet Starting kubelet. 
  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  RegisteredNode           27m                node-controller  Node minikube event: Registered Node minikube in Controller
*
* ==> dmesg <==
*
[ +0.000001] tick_sched_timer+0x39/0x80
[ +0.000000] __hrtimer_run_queues+0xdf/0x230
[ +0.000048] hrtimer_interrupt+0x97/0x180
[ +0.000001] smp_apic_timer_interrupt+0x6f/0x140
[ +0.000001] apic_timer_interrupt+0x90/0xa0
[ +0.000000]
[ +0.000001] RIP: 0033:0x405d91
[ +0.000000] RSP: 002b:000000c0004e0670 EFLAGS: 00010246 ORIG_RAX: ffffffffffffff11
[ +0.000002] RAX: 000000c000274120 RBX: 000000c0004e0740 RCX: 0000000000000000
[ +0.000001] RDX: 000000c00007e090 RSI: 0000000000000000 RDI: 0000000000873985
[ +0.000001] RBP: 000000c0004e06e8 R08: 0000000000000005 R09: 0000000000000005
[ +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ +0.000001] R13: 0000000000000000 R14: 000000c0006651e0 R15: 0000000000000001
[ +0.000000] Code: 41 bd 01 00 00 00 41 be 00 01 00 00 3c 02 0f 94 c0 0f b6 c0 48 89 45 c8 41 c6 44 24 44 00 ba 00 80 00 00 c6 43 01 01 eb 0b f3 90 <83> ea 01 0f 84 08 01 00 00 0f b6 03 84 c0 75 ee 44 89 f0 f0 66
[ +0.004810] NMI backtrace for cpu 0 skipped: idling at native_halt+0x11/0x20
[ +0.000591] NMI backtrace for cpu 1
[ +0.000001] CPU: 1 PID: 10870 Comm: containerd-shim Tainted: G W 4.15.0-213-generic #224-Ubuntu
[ +0.000001] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
[ +0.000000] RIP: 0033:0x43b4e9
[ +0.000001] RSP: 002b:000000c000051f18 EFLAGS: 00010202
[ +0.000001] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ +0.000001] RDX: 000000c000003380 RSI: 000000000000002c RDI: 0000000000000000
[ +0.000001] RBP: 000000c000051f58 R08: 00007fffed514000 R09: 00000365b908af2e
[ +0.000001] R10: 000000c000051ea0 R11: 0000000000000001 R12: 000000c000051eb0
[ +0.000001] R13: 000080c000146000 R14: 000000c000003380 R15: 00007efe8f6ce1a0
[ +0.000000] FS: 000000c000044c90(0000) GS:ffff9b0fab640000(0000) knlGS:0000000000000000
[ +0.000001] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ +0.000001] CR2: 00007fb6f8052118 CR3: 00000001772cc000 CR4: 00000000000006e0
[ +0.001837] NMI backtrace for cpu 3 skipped: idling at native_halt+0x11/0x20
[ +0.000003] NMI backtrace for cpu 4
[ +0.000004] CPU: 4 PID: 12799 Comm: cri-dockerd Tainted: G W 4.15.0-213-generic #224-Ubuntu
[ +0.000002] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
[ +0.000000] Call Trace:
[ +0.000002]
[ +0.000005] dump_stack+0x6d/0x8b
[ +0.000003] nmi_cpu_backtrace+0x94/0xa0
[ +0.000004] ? lapic_can_unplug_cpu+0xb0/0xb0
[ +0.000002] nmi_trigger_cpumask_backtrace+0xe7/0x130
[ +0.000003] arch_trigger_cpumask_backtrace+0x19/0x20
[ +0.000003] rcu_dump_cpu_stacks+0xa3/0xd5
[ +0.000003] rcu_check_callbacks+0x6cd/0x8e0
[ +0.000002] ? account_user_time+0x9e/0xb0
[ +0.000003] ? tick_sched_do_timer+0x50/0x50
[ +0.000002] update_process_times+0x2f/0x60
[ +0.000003] tick_sched_handle+0x26/0x70
[ +0.000002] ? tick_sched_do_timer+0x50/0x50
[ +0.000002] tick_sched_timer+0x39/0x80
[ +0.000003] __hrtimer_run_queues+0xdf/0x230
[ +0.000002] hrtimer_interrupt+0x97/0x180
[ +0.000003] smp_apic_timer_interrupt+0x6f/0x140
[ +0.000002] apic_timer_interrupt+0x90/0xa0
[ +0.000002]
[ +0.000002] RIP: 0033:0x405d91
[ +0.000001] RSP: 002b:000000c0004e0670 EFLAGS: 00010246 ORIG_RAX: ffffffffffffff11
[ +0.000002] RAX: 000000c000274120 RBX: 000000c0004e0740 RCX: 0000000000000000
[ +0.000001] RDX: 000000c00007e090 RSI: 0000000000000000 RDI: 0000000000873985
[ +0.000001] RBP: 000000c0004e06e8 R08: 0000000000000005 R09: 0000000000000005
[ +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ +0.000001] R13: 0000000000000000 R14: 000000c0006651e0 R15: 0000000000000001
[Apr27 02:21] systemd-journald[6922]: File /run/log/journal/4c1d6df72686438286491321e7c04372/system.journal corrupted or uncleanly shut down, renaming and replacing.
*
* ==> etcd [1bbe5b199f8c] <==
*
{"level":"info","ts":"2024-04-27T02:49:00.896853Z","caller":"traceutil/trace.go:171","msg":"trace[1887312031] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:2136; }","duration":"110.192822ms","start":"2024-04-27T02:49:00.786638Z","end":"2024-04-27T02:49:00.896831Z","steps":["trace[1887312031] 'range keys from in-memory index tree' (duration: 103.530718ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:49:12.578449Z","caller":"traceutil/trace.go:171","msg":"trace[814779864] transaction","detail":"{read_only:false; response_revision:2145; number_of_response:1; }","duration":"105.317338ms","start":"2024-04-27T02:49:12.47306Z","end":"2024-04-27T02:49:12.578377Z","steps":["trace[814779864] 'process raft request' (duration: 105.101516ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:49:12.996281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.244939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-27T02:49:12.996365Z","caller":"traceutil/trace.go:171","msg":"trace[1726137183] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2145; }","duration":"207.336867ms","start":"2024-04-27T02:49:12.789008Z","end":"2024-04-27T02:49:12.996345Z","steps":["trace[1726137183] 'range keys from in-memory index tree' (duration: 207.142505ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:49:15.886584Z","caller":"traceutil/trace.go:171","msg":"trace[1315147642] linearizableReadLoop","detail":"{readStateIndex:2577; appliedIndex:2576; }","duration":"100.854047ms","start":"2024-04-27T02:49:15.785669Z","end":"2024-04-27T02:49:15.886523Z","steps":["trace[1315147642] 'read index received' (duration: 13.321036ms)","trace[1315147642] 'applied index is now lower than readState.Index' (duration: 87.530767ms)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:49:15.998947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.270195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-27T02:49:15.999234Z","caller":"traceutil/trace.go:171","msg":"trace[1647639339] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:2147; }","duration":"213.561172ms","start":"2024-04-27T02:49:15.785643Z","end":"2024-04-27T02:49:15.999204Z","steps":["trace[1647639339] 'agreement among raft nodes before linearized reading' (duration: 101.02794ms)","trace[1647639339] 'count revisions from in-memory index tree' (duration: 112.207165ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:49:20.275691Z","caller":"traceutil/trace.go:171","msg":"trace[195580906] transaction","detail":"{read_only:false; response_revision:2150; number_of_response:1; }","duration":"100.652065ms","start":"2024-04-27T02:49:20.175007Z","end":"2024-04-27T02:49:20.275659Z","steps":["trace[195580906] 'process raft request' (duration: 24.83744ms)","trace[195580906] 'compare' (duration: 75.673047ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:49:23.399038Z","caller":"traceutil/trace.go:171","msg":"trace[1998589287] linearizableReadLoop","detail":"{readStateIndex:2584; appliedIndex:2584; }","duration":"123.93355ms","start":"2024-04-27T02:49:23.275054Z","end":"2024-04-27T02:49:23.398988Z","steps":["trace[1998589287] 'read index received' (duration: 123.922855ms)","trace[1998589287] 'applied index is now lower than readState.Index' (duration: 6.399µs)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:49:23.399257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.210157ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-27T02:49:23.399293Z","caller":"traceutil/trace.go:171","msg":"trace[1040075769] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:2153; }","duration":"124.267658ms","start":"2024-04-27T02:49:23.275013Z","end":"2024-04-27T02:49:23.399281Z","steps":["trace[1040075769] 'agreement among raft nodes before linearized reading' (duration: 124.099091ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:49:23.403572Z","caller":"traceutil/trace.go:171","msg":"trace[1367111764] transaction","detail":"{read_only:false; response_revision:2153; number_of_response:1; }","duration":"126.279779ms","start":"2024-04-27T02:49:23.272529Z","end":"2024-04-27T02:49:23.398808Z","steps":["trace[1367111764] 'process raft request' (duration: 126.134756ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:49:30.398204Z","caller":"traceutil/trace.go:171","msg":"trace[571988935] transaction","detail":"{read_only:false; response_revision:2157; number_of_response:1; }","duration":"102.359597ms","start":"2024-04-27T02:49:30.295797Z","end":"2024-04-27T02:49:30.398157Z","steps":["trace[571988935] 'process raft request' (duration: 102.106824ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:49:30.772065Z","caller":"traceutil/trace.go:171","msg":"trace[1687375293] transaction","detail":"{read_only:false; response_revision:2158; number_of_response:1; }","duration":"119.30541ms","start":"2024-04-27T02:49:30.652735Z","end":"2024-04-27T02:49:30.77204Z","steps":["trace[1687375293] 'process raft request' (duration: 119.071209ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:49:37.973137Z","caller":"traceutil/trace.go:171","msg":"trace[1300169907] transaction","detail":"{read_only:false; response_revision:2163; number_of_response:1; }","duration":"169.281207ms","start":"2024-04-27T02:49:37.80383Z","end":"2024-04-27T02:49:37.973111Z","steps":["trace[1300169907] 'process raft request' (duration: 169.1261ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:49:43.903713Z","caller":"traceutil/trace.go:171","msg":"trace[2095484301] transaction","detail":"{read_only:false; response_revision:2167; number_of_response:1; }","duration":"203.428912ms","start":"2024-04-27T02:49:43.700244Z","end":"2024-04-27T02:49:43.903673Z","steps":["trace[2095484301] 'process raft request' (duration: 187.87469ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:49:43.971916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.486468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-27T02:49:43.97201Z","caller":"traceutil/trace.go:171","msg":"trace[1367182002] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:2167; }","duration":"161.593848ms","start":"2024-04-27T02:49:43.810387Z","end":"2024-04-27T02:49:43.971981Z","steps":["trace[1367182002] 'agreement among raft nodes before linearized reading' (duration: 93.803119ms)","trace[1367182002] 'count revisions from in-memory index tree' (duration: 67.652215ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:49:50.274533Z","caller":"traceutil/trace.go:171","msg":"trace[1838011973] transaction","detail":"{read_only:false; response_revision:2171; number_of_response:1; }","duration":"194.740775ms","start":"2024-04-27T02:49:50.079762Z","end":"2024-04-27T02:49:50.274503Z","steps":["trace[1838011973] 'process raft request' (duration: 194.498926ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:49:50.275134Z","caller":"traceutil/trace.go:171","msg":"trace[676616952] linearizableReadLoop","detail":"{readStateIndex:2607; appliedIndex:2607; }","duration":"179.881906ms","start":"2024-04-27T02:49:50.095236Z","end":"2024-04-27T02:49:50.275118Z","steps":["trace[676616952] 'read index received' (duration: 179.874416ms)","trace[676616952] 'applied index is now lower than readState.Index' (duration: 6.008µs)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:49:50.275297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.031235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:133"}
{"level":"info","ts":"2024-04-27T02:49:50.275353Z","caller":"traceutil/trace.go:171","msg":"trace[976174167] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:2171; }","duration":"180.151403ms","start":"2024-04-27T02:49:50.095187Z","end":"2024-04-27T02:49:50.275338Z","steps":["trace[976174167] 'agreement among raft nodes before linearized reading' (duration: 179.989246ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:49:54.912252Z","caller":"traceutil/trace.go:171","msg":"trace[1547027049] transaction","detail":"{read_only:false; response_revision:2175; number_of_response:1; }","duration":"128.041334ms","start":"2024-04-27T02:49:54.784175Z","end":"2024-04-27T02:49:54.912216Z","steps":["trace[1547027049] 'process raft request' (duration: 127.84566ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:49:57.481854Z","caller":"traceutil/trace.go:171","msg":"trace[2066918923] transaction","detail":"{read_only:false; response_revision:2177; number_of_response:1; }","duration":"100.802215ms","start":"2024-04-27T02:49:57.381022Z","end":"2024-04-27T02:49:57.481824Z","steps":["trace[2066918923] 'process raft request' (duration: 100.566975ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:49:57.682784Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.07241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-27T02:49:57.682881Z","caller":"traceutil/trace.go:171","msg":"trace[1562525389] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2177; }","duration":"109.184597ms","start":"2024-04-27T02:49:57.573677Z","end":"2024-04-27T02:49:57.682862Z","steps":["trace[1562525389] 'range keys from in-memory index tree' (duration: 108.938032ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:50:04.973322Z","caller":"traceutil/trace.go:171","msg":"trace[2085668499] transaction","detail":"{read_only:false; response_revision:2182; number_of_response:1; }","duration":"177.208017ms","start":"2024-04-27T02:50:04.79609Z","end":"2024-04-27T02:50:04.973298Z","steps":["trace[2085668499] 'process raft request' (duration: 176.732411ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:50:06.681642Z","caller":"traceutil/trace.go:171","msg":"trace[1935983774] transaction","detail":"{read_only:false; response_revision:2183; number_of_response:1; }","duration":"109.116639ms","start":"2024-04-27T02:50:06.572495Z","end":"2024-04-27T02:50:06.681612Z","steps":["trace[1935983774] 'process raft request' (duration: 108.826994ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:50:09.072513Z","caller":"traceutil/trace.go:171","msg":"trace[959626314] transaction","detail":"{read_only:false; response_revision:2185; number_of_response:1; }","duration":"184.563119ms","start":"2024-04-27T02:50:08.887915Z","end":"2024-04-27T02:50:09.072478Z","steps":["trace[959626314] 'process raft request' (duration: 184.167705ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:50:15.405669Z","caller":"traceutil/trace.go:171","msg":"trace[311987226] transaction","detail":"{read_only:false; response_revision:2189; number_of_response:1; }","duration":"101.190428ms","start":"2024-04-27T02:50:15.304448Z","end":"2024-04-27T02:50:15.405638Z","steps":["trace[311987226] 'process raft request' (duration: 71.291448ms)","trace[311987226] 'compare' (duration: 11.461887ms)","trace[311987226] 'store kv pair into bolt db' {req_type:put; key:/registry/leases/kube-node-lease/minikube; req_size:518; } (duration: 18.33534ms)"],"step_count":3}
{"level":"info","ts":"2024-04-27T02:50:20.275632Z","caller":"traceutil/trace.go:171","msg":"trace[247372317] transaction","detail":"{read_only:false; response_revision:2193; number_of_response:1; }","duration":"156.442745ms","start":"2024-04-27T02:50:20.119164Z","end":"2024-04-27T02:50:20.275607Z","steps":["trace[247372317] 'process raft request' (duration: 156.269593ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:50:25.375192Z","caller":"traceutil/trace.go:171","msg":"trace[168461762] transaction","detail":"{read_only:false; response_revision:2196; number_of_response:1; }","duration":"102.587756ms","start":"2024-04-27T02:50:25.272557Z","end":"2024-04-27T02:50:25.375144Z","steps":["trace[168461762] 'process raft request' (duration: 21.500807ms)","trace[168461762] 'store kv pair into bolt db' {req_type:put; key:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; req_size:1090; } (duration: 80.574613ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:50:25.777705Z","caller":"traceutil/trace.go:171","msg":"trace[1314641627] transaction","detail":"{read_only:false; response_revision:2197; number_of_response:1; }","duration":"183.879283ms","start":"2024-04-27T02:50:25.593802Z","end":"2024-04-27T02:50:25.777681Z","steps":["trace[1314641627] 'process raft request' (duration: 183.751874ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:50:29.972178Z","caller":"traceutil/trace.go:171","msg":"trace[1221030070] transaction","detail":"{read_only:false; response_revision:2200; number_of_response:1; }","duration":"189.366478ms","start":"2024-04-27T02:50:29.782784Z","end":"2024-04-27T02:50:29.972151Z","steps":["trace[1221030070] 'process raft request' (duration: 189.207536ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:50:30.099066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.276896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/prometheus-server-ext\" ","response":"range_response_count:1 size:955"}
{"level":"info","ts":"2024-04-27T02:50:30.099186Z","caller":"traceutil/trace.go:171","msg":"trace[1955719531] range","detail":"{range_begin:/registry/services/endpoints/default/prometheus-server-ext; range_end:; response_count:1; response_revision:2200; }","duration":"113.39308ms","start":"2024-04-27T02:50:29.985748Z","end":"2024-04-27T02:50:30.099141Z","steps":["trace[1955719531] 'range keys from in-memory index tree' (duration: 104.962563ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:50:37.586666Z","caller":"traceutil/trace.go:171","msg":"trace[243038456] transaction","detail":"{read_only:false; response_revision:2206; number_of_response:1; }","duration":"104.788547ms","start":"2024-04-27T02:50:37.481846Z","end":"2024-04-27T02:50:37.586634Z","steps":["trace[243038456] 'process raft request' (duration: 14.32695ms)","trace[243038456] 'compare' (duration: 81.999499ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:51:10.793479Z","caller":"traceutil/trace.go:171","msg":"trace[1128764527] transaction","detail":"{read_only:false; response_revision:2230; number_of_response:1; }","duration":"185.453946ms","start":"2024-04-27T02:51:10.607999Z","end":"2024-04-27T02:51:10.793453Z","steps":["trace[1128764527] 'process raft request' (duration: 164.820709ms)","trace[1128764527] 'compare' (duration: 20.516834ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:51:15.772707Z","caller":"traceutil/trace.go:171","msg":"trace[1736848564] transaction","detail":"{read_only:false; response_revision:2233; number_of_response:1; }","duration":"170.07206ms","start":"2024-04-27T02:51:15.602604Z","end":"2024-04-27T02:51:15.772676Z","steps":["trace[1736848564] 'process raft request' (duration: 169.581253ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:51:18.098272Z","caller":"traceutil/trace.go:171","msg":"trace[898959525] transaction","detail":"{read_only:false; response_revision:2235; number_of_response:1; }","duration":"107.953769ms","start":"2024-04-27T02:51:17.990279Z","end":"2024-04-27T02:51:18.098233Z","steps":["trace[898959525] 'process raft request' (duration: 92.919831ms)","trace[898959525] 'store kv pair into bolt db' {req_type:put; key:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; req_size:1090; } (duration: 10.416359ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:51:20.621235Z","caller":"traceutil/trace.go:171","msg":"trace[768812386] linearizableReadLoop","detail":"{readStateIndex:2693; appliedIndex:2692; }","duration":"227.532542ms","start":"2024-04-27T02:51:20.39367Z","end":"2024-04-27T02:51:20.621202Z","steps":["trace[768812386] 'read index received' (duration: 132.029651ms)","trace[768812386] 'applied index is now lower than readState.Index' (duration: 95.501318ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:51:20.621465Z","caller":"traceutil/trace.go:171","msg":"trace[1840884187] transaction","detail":"{read_only:false; response_revision:2238; number_of_response:1; }","duration":"243.631211ms","start":"2024-04-27T02:51:20.377807Z","end":"2024-04-27T02:51:20.621438Z","steps":["trace[1840884187] 'process raft request' (duration: 147.972381ms)","trace[1840884187] 'compare' (duration: 95.248114ms)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:51:20.62154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.883593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
{"level":"info","ts":"2024-04-27T02:51:20.621585Z","caller":"traceutil/trace.go:171","msg":"trace[478451717] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:2238; }","duration":"227.948004ms","start":"2024-04-27T02:51:20.393625Z","end":"2024-04-27T02:51:20.621573Z","steps":["trace[478451717] 'agreement among raft nodes before linearized reading' (duration: 227.833433ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:51:27.373217Z","caller":"traceutil/trace.go:171","msg":"trace[548822564] transaction","detail":"{read_only:false; response_revision:2241; number_of_response:1; }","duration":"100.567766ms","start":"2024-04-27T02:51:27.272622Z","end":"2024-04-27T02:51:27.37319Z","steps":["trace[548822564] 'process raft request' (duration: 100.397237ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:51:31.072445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.166856ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:133"}
{"level":"warn","ts":"2024-04-27T02:51:31.072583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.944155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-27T02:51:31.072646Z","caller":"traceutil/trace.go:171","msg":"trace[826007426] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2245; }","duration":"170.005722ms","start":"2024-04-27T02:51:30.902622Z","end":"2024-04-27T02:51:31.072627Z","steps":["trace[826007426] 'range keys from in-memory index tree' (duration: 169.843145ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:51:31.073112Z","caller":"traceutil/trace.go:171","msg":"trace[466744621] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:2245; }","duration":"100.893351ms","start":"2024-04-27T02:51:30.972199Z","end":"2024-04-27T02:51:31.073093Z","steps":["trace[466744621] 'range keys from in-memory index tree' (duration: 99.967688ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:51:50.69662Z","caller":"traceutil/trace.go:171","msg":"trace[1715166073] transaction","detail":"{read_only:false; response_revision:2259; number_of_response:1; }","duration":"116.091699ms","start":"2024-04-27T02:51:50.580499Z","end":"2024-04-27T02:51:50.696591Z","steps":["trace[1715166073] 'process raft request' (duration: 93.976935ms)","trace[1715166073] 'compare' (duration: 21.934488ms)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:51:50.999321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.733952ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
{"level":"info","ts":"2024-04-27T02:51:50.999395Z","caller":"traceutil/trace.go:171","msg":"trace[1179488414] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:2260; }","duration":"125.822637ms","start":"2024-04-27T02:51:50.873554Z","end":"2024-04-27T02:51:50.999377Z","steps":["trace[1179488414] 'range keys from in-memory index tree' (duration: 125.566368ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:52:00.602944Z","caller":"traceutil/trace.go:171","msg":"trace[923068927] transaction","detail":"{read_only:false; response_revision:2266; number_of_response:1; }","duration":"106.995457ms","start":"2024-04-27T02:52:00.495929Z","end":"2024-04-27T02:52:00.602925Z","steps":["trace[923068927] 'process raft request' (duration: 83.636411ms)","trace[923068927] 'compare' (duration: 23.171496ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:52:11.072612Z","caller":"traceutil/trace.go:171","msg":"trace[1913963940] transaction","detail":"{read_only:false; response_revision:2275; number_of_response:1; }","duration":"162.822634ms","start":"2024-04-27T02:52:10.909756Z","end":"2024-04-27T02:52:11.072579Z","steps":["trace[1913963940] 'process raft request' (duration: 162.571813ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:52:21.590073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.371864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2024-04-27T02:52:21.590159Z","caller":"traceutil/trace.go:171","msg":"trace[510848482] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:2282; }","duration":"116.473187ms","start":"2024-04-27T02:52:21.473666Z","end":"2024-04-27T02:52:21.590139Z","steps":["trace[510848482] 'agreement among raft nodes before linearized reading' (duration: 24.982435ms)","trace[510848482] 'count revisions from in-memory index tree' (duration: 91.368469ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:52:26.401981Z","caller":"traceutil/trace.go:171","msg":"trace[1249268856] linearizableReadLoop","detail":"{readStateIndex:2753; appliedIndex:2752; }","duration":"126.768076ms","start":"2024-04-27T02:52:26.275185Z","end":"2024-04-27T02:52:26.401953Z","steps":["trace[1249268856] 'read index received' (duration: 104.522505ms)","trace[1249268856] 'applied index is now lower than readState.Index' (duration: 22.243346ms)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:52:26.402108Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.931046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-27T02:52:26.402151Z","caller":"traceutil/trace.go:171","msg":"trace[243959368] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2285; }","duration":"127.000533ms","start":"2024-04-27T02:52:26.275141Z","end":"2024-04-27T02:52:26.402142Z","steps":["trace[243959368] 'agreement among raft nodes before linearized reading' (duration: 126.891231ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:52:26.402373Z","caller":"traceutil/trace.go:171","msg":"trace[1522710604] transaction","detail":"{read_only:false; response_revision:2285; number_of_response:1; }","duration":"229.716891ms","start":"2024-04-27T02:52:26.172597Z","end":"2024-04-27T02:52:26.402314Z","steps":["trace[1522710604] 'process raft request' (duration: 207.189727ms)","trace[1522710604] 'compare' (duration: 21.980676ms)"],"step_count":2}
*
* ==> etcd [5e979886135b] <==
*
{"level":"warn","ts":"2024-04-27T02:21:38.494869Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"799.354476ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2024-04-27T02:21:38.494903Z","caller":"traceutil/trace.go:171","msg":"trace[1961182520] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:734; }","duration":"799.392476ms","start":"2024-04-27T02:21:37.695502Z","end":"2024-04-27T02:21:38.494894Z","steps":["trace[1961182520] 'agreement among raft nodes before linearized reading' (duration: 799.317035ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:21:38.494931Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:37.695493Z","time spent":"799.430468ms","remote":"127.0.0.1:34310","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":13,"response size":31,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true "}
{"level":"warn","ts":"2024-04-27T02:21:38.495145Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"799.945473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-27T02:21:38.495177Z","caller":"traceutil/trace.go:171","msg":"trace[479618815] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:0; response_revision:734; }","duration":"799.972728ms","start":"2024-04-27T02:21:37.695191Z","end":"2024-04-27T02:21:38.495164Z","steps":["trace[479618815] 'agreement among raft nodes before linearized reading' (duration: 799.927188ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:21:38.495202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:37.69518Z","time spent":"800.016924ms","remote":"127.0.0.1:43488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":29,"request content":"key:\"/registry/masterleases/192.168.49.2\" "}
{"level":"warn","ts":"2024-04-27T02:21:38.497211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"804.768008ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-27T02:21:38.497267Z","caller":"traceutil/trace.go:171","msg":"trace[2090089365] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:734; }","duration":"806.858304ms","start":"2024-04-27T02:21:37.690394Z","end":"2024-04-27T02:21:38.497252Z","steps":["trace[2090089365] 'agreement among raft nodes before linearized reading' (duration: 804.749566ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:21:38.575555Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"885.286494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-04-27T02:21:38.575603Z","caller":"traceutil/trace.go:171","msg":"trace[1787672699] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:734; }","duration":"885.34359ms","start":"2024-04-27T02:21:37.690247Z","end":"2024-04-27T02:21:38.57559Z","steps":["trace[1787672699] 'agreement among raft nodes before linearized reading' (duration: 885.195715ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:21:38.575636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:37.690227Z","time spent":"885.400778ms","remote":"127.0.0.1:43596","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":29,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
{"level":"warn","ts":"2024-04-27T02:21:38.497316Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:37.690382Z","time spent":"806.916583ms","remote":"127.0.0.1:43602","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true "}
{"level":"warn","ts":"2024-04-27T02:21:38.587652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"901.230397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
{"level":"info","ts":"2024-04-27T02:21:38.587705Z","caller":"traceutil/trace.go:171","msg":"trace[648173476] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:734; }","duration":"901.282513ms","start":"2024-04-27T02:21:37.686405Z","end":"2024-04-27T02:21:38.587688Z","steps":["trace[648173476] 'agreement among raft nodes before linearized reading' (duration: 715.299669ms)","trace[648173476] 'range keys from in-memory index tree' (duration: 185.892567ms)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:21:38.587746Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:37.686387Z","time spent":"901.348571ms","remote":"127.0.0.1:43518","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":142,"request content":"key:\"/registry/ranges/servicenodeports\" "}
{"level":"warn","ts":"2024-04-27T02:21:38.591181Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:37.692554Z","time spent":"898.603085ms","remote":"127.0.0.1:34268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":31,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true "}
{"level":"warn","ts":"2024-04-27T02:21:38.796302Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.101623464s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2024-04-27T02:21:38.796379Z","caller":"traceutil/trace.go:171","msg":"trace[1835115062] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:734; }","duration":"1.101715946s","start":"2024-04-27T02:21:37.694643Z","end":"2024-04-27T02:21:38.796359Z","steps":["trace[1835115062] 'agreement among raft nodes before linearized reading' (duration: 881.265057ms)","trace[1835115062] 'count revisions from in-memory index tree' (duration: 220.331452ms)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:21:38.79643Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:37.694632Z","time spent":"1.101782686s","remote":"127.0.0.1:43632","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":31,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true "}
{"level":"info","ts":"2024-04-27T02:21:39.708894Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":595}
{"level":"info","ts":"2024-04-27T02:21:39.719127Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":595,"took":"9.738641ms","hash":2687879114}
{"level":"info","ts":"2024-04-27T02:21:39.719173Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2687879114,"revision":595,"compact-revision":-1}
{"level":"warn","ts":"2024-04-27T02:21:39.820162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.380899ms","expected-duration":"100ms","prefix":"","request":"header: compaction: ","response":"size:5"}
{"level":"info","ts":"2024-04-27T02:21:39.820307Z","caller":"traceutil/trace.go:171","msg":"trace[57246011] linearizableReadLoop","detail":"{readStateIndex:830; appliedIndex:829; }","duration":"130.439022ms","start":"2024-04-27T02:21:39.689843Z","end":"2024-04-27T02:21:39.820282Z","steps":["trace[57246011] 'read index received' (duration: 54.651µs)","trace[57246011] 'applied index is now lower than readState.Index' (duration: 130.381986ms)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:21:39.979238Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.408793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
{"level":"info","ts":"2024-04-27T02:21:39.979331Z","caller":"traceutil/trace.go:171","msg":"trace[433288265] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:734; }","duration":"289.517825ms","start":"2024-04-27T02:21:39.689791Z","end":"2024-04-27T02:21:39.979309Z","steps":["trace[433288265] 'agreement among raft nodes before linearized reading' (duration: 188.897713ms)","trace[433288265] 'range keys from in-memory index tree' (duration: 100.432301ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:21:39.97988Z","caller":"traceutil/trace.go:171","msg":"trace[2133557282] compact","detail":"{revision:595; response_revision:734; }","duration":"494.838509ms","start":"2024-04-27T02:21:39.485026Z","end":"2024-04-27T02:21:39.979865Z","steps":["trace[2133557282] 'process raft request' (duration: 87.859525ms)","trace[2133557282] 'check and update compact revision' (duration: 114.915863ms)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:21:39.979923Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:39.485014Z","time spent":"494.904547ms","remote":"127.0.0.1:43474","response type":"/etcdserverpb.KV/Compact","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
{"level":"warn","ts":"2024-04-27T02:21:39.980221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.878943ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:16"}
{"level":"info","ts":"2024-04-27T02:21:39.980286Z","caller":"traceutil/trace.go:171","msg":"trace[1283557649] linearizableReadLoop","detail":"{readStateIndex:831; appliedIndex:830; }","duration":"100.900138ms","start":"2024-04-27T02:21:39.87937Z","end":"2024-04-27T02:21:39.980271Z","steps":["trace[1283557649] 'read index received' (duration: 5.512795ms)","trace[1283557649] 'applied index is now lower than readState.Index' (duration: 95.385759ms)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:21:39.980818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.05372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" ","response":"range_response_count:2 size:1908"}
{"level":"info","ts":"2024-04-27T02:21:39.980865Z","caller":"traceutil/trace.go:171","msg":"trace[159834205] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:735; }","duration":"281.104923ms","start":"2024-04-27T02:21:39.699748Z","end":"2024-04-27T02:21:39.980853Z","steps":["trace[159834205] 'agreement among raft nodes before linearized reading' (duration: 280.562763ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:21:39.981195Z","caller":"traceutil/trace.go:171","msg":"trace[1056075126] transaction","detail":"{read_only:false; response_revision:735; number_of_response:1; }","duration":"281.254321ms","start":"2024-04-27T02:21:39.699925Z","end":"2024-04-27T02:21:39.981179Z","steps":["trace[1056075126] 'process raft request' (duration: 120.345247ms)","trace[1056075126] 'compare' (duration: 159.314148ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:21:40.181811Z","caller":"traceutil/trace.go:171","msg":"trace[815409303] linearizableReadLoop","detail":"{readStateIndex:834; appliedIndex:833; }","duration":"176.073569ms","start":"2024-04-27T02:21:40.00565Z","end":"2024-04-27T02:21:40.181724Z","steps":["trace[815409303] 'read index received' (duration: 121.52413ms)","trace[815409303] 'applied index is now lower than readState.Index' (duration: 54.547444ms)"],"step_count":2}
{"level":"warn","ts":"2024-04-27T02:21:40.181948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.556509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
{"level":"info","ts":"2024-04-27T02:21:40.181982Z","caller":"traceutil/trace.go:171","msg":"trace[2071095087] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:738; }","duration":"180.601967ms","start":"2024-04-27T02:21:40.001369Z","end":"2024-04-27T02:21:40.181971Z","steps":["trace[2071095087] 'agreement among raft nodes before linearized reading' (duration: 180.504805ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:21:40.411892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.953859ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure:<>>","response":"size:16"}
{"level":"info","ts":"2024-04-27T02:21:40.41252Z","caller":"traceutil/trace.go:171","msg":"trace[410182353] transaction","detail":"{read_only:false; response_revision:740; number_of_response:1; }","duration":"219.333948ms","start":"2024-04-27T02:21:40.193134Z","end":"2024-04-27T02:21:40.412467Z","steps":["trace[410182353] 'process raft request' (duration: 89.71621ms)","trace[410182353] 'compare' (duration: 128.409062ms)"],"step_count":2}
{"level":"info","ts":"2024-04-27T02:21:40.7737Z","caller":"traceutil/trace.go:171","msg":"trace[1012186305] transaction","detail":"{read_only:false; response_revision:741; number_of_response:1; }","duration":"283.650409ms","start":"2024-04-27T02:21:40.490027Z","end":"2024-04-27T02:21:40.773678Z","steps":["trace[1012186305] 'process raft request' (duration: 283.47544ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:21:40.87823Z","caller":"traceutil/trace.go:171","msg":"trace[540945646] transaction","detail":"{read_only:false; response_revision:743; number_of_response:1; }","duration":"288.886831ms","start":"2024-04-27T02:21:40.589319Z","end":"2024-04-27T02:21:40.878206Z","steps":["trace[540945646] 'process raft request' (duration: 288.764477ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:21:40.878593Z","caller":"traceutil/trace.go:171","msg":"trace[1153377623] transaction","detail":"{read_only:false; response_revision:742; number_of_response:1; }","duration":"294.815112ms","start":"2024-04-27T02:21:40.583762Z","end":"2024-04-27T02:21:40.878577Z","steps":["trace[1153377623] 'process raft request' (duration: 288.436317ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:21:41.293094Z","caller":"traceutil/trace.go:171","msg":"trace[1287159832] transaction","detail":"{read_only:false; response_revision:747; number_of_response:1; }","duration":"148.16198ms","start":"2024-04-27T02:21:41.144907Z","end":"2024-04-27T02:21:41.293069Z","steps":["trace[1287159832] 'process raft request' (duration: 148.083312ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:21:41.293559Z","caller":"traceutil/trace.go:171","msg":"trace[1206087360] transaction","detail":"{read_only:false; response_revision:745; number_of_response:1; }","duration":"180.366521ms","start":"2024-04-27T02:21:41.113181Z","end":"2024-04-27T02:21:41.293547Z","steps":["trace[1206087360] 'process raft request' (duration: 179.547897ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:21:41.293913Z","caller":"traceutil/trace.go:171","msg":"trace[200914754] transaction","detail":"{read_only:false; response_revision:746; number_of_response:1; }","duration":"170.72687ms","start":"2024-04-27T02:21:41.123173Z","end":"2024-04-27T02:21:41.2939Z","steps":["trace[200914754] 'process raft request' (duration: 169.747511ms)"],"step_count":1}
{"level":"info","ts":"2024-04-27T02:21:41.780379Z","caller":"traceutil/trace.go:171","msg":"trace[95570726] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"470.048719ms","start":"2024-04-27T02:21:41.310306Z","end":"2024-04-27T02:21:41.780355Z","steps":["trace[95570726] 'process raft request' (duration: 469.765079ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:21:41.780613Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:41.310277Z","time spent":"470.248177ms","remote":"127.0.0.1:34326","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4087,"response count":0,"response size":40,"request content":"compare: success:> failure: >"}
{"level":"info","ts":"2024-04-27T02:21:41.908497Z","caller":"traceutil/trace.go:171","msg":"trace[681039030] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"318.974122ms","start":"2024-04-27T02:21:41.5895Z","end":"2024-04-27T02:21:41.908474Z","steps":["trace[681039030] 'process raft request' (duration: 318.882352ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:21:41.908646Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:41.589461Z","time spent":"319.100425ms","remote":"127.0.0.1:43638","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7346,"response count":0,"response size":40,"request content":"compare: success:> failure: >"}
{"level":"info","ts":"2024-04-27T02:21:41.908873Z","caller":"traceutil/trace.go:171","msg":"trace[1449749279] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"319.946054ms","start":"2024-04-27T02:21:41.588913Z","end":"2024-04-27T02:21:41.908859Z","steps":["trace[1449749279] 'process raft request' (duration: 319.280307ms)"],"step_count":1}
{"level":"warn","ts":"2024-04-27T02:21:41.909017Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-27T02:21:41.588884Z","time spent":"320.021314ms","remote":"127.0.0.1:43534","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":642,"response count":0,"response size":40,"request content":"compare: success:> failure:<>"}
{"level":"info","ts":"2024-04-27T02:21:42.001378Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2024-04-27T02:21:42.001541Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"minikube","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"warn","ts":"2024-04-27T02:21:42.001649Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"} {"level":"warn","ts":"2024-04-27T02:21:42.001779Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"} {"level":"warn","ts":"2024-04-27T02:21:42.081723Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"} {"level":"warn","ts":"2024-04-27T02:21:42.081784Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"} {"level":"info","ts":"2024-04-27T02:21:42.081936Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"} {"level":"info","ts":"2024-04-27T02:21:42.182076Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"} {"level":"info","ts":"2024-04-27T02:21:42.18319Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"} {"level":"info","ts":"2024-04-27T02:21:42.191902Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"minikube","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]} * * ==> kernel <== * 02:52:38 up 1:08, 0 users, load average: 7.17, 7.64, 7.22 Linux minikube 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 22.04.3 LTS" * * ==> kube-apiserver [031a421d02e5] <== * "BalancerAttributes": null, "Type": 0, "Metadata": null }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" W0427 02:21:52.114672 1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to { "Addr": "127.0.0.1:2379", "ServerName": "127.0.0.1", "Attributes": null, "BalancerAttributes": null, "Type": 0, "Metadata": null }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" W0427 02:21:52.114899 1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to { "Addr": "127.0.0.1:2379", "ServerName": "127.0.0.1", "Attributes": null, "BalancerAttributes": null, "Type": 0, "Metadata": null }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" W0427 02:21:52.155979 1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to { "Addr": "127.0.0.1:2379", "ServerName": "127.0.0.1", "Attributes": null, "BalancerAttributes": null, "Type": 0, "Metadata": null }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" W0427 02:21:52.181567 1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to { "Addr": "127.0.0.1:2379", "ServerName": "127.0.0.1", "Attributes": null, "BalancerAttributes": null, "Type": 0, "Metadata": null }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" W0427 02:21:52.211143 1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to { "Addr": "127.0.0.1:2379", "ServerName": "127.0.0.1", "Attributes": null, "BalancerAttributes": null, "Type": 0, "Metadata": null }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" W0427 02:21:52.211107 1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to { "Addr": "127.0.0.1:2379", "ServerName": "127.0.0.1", "Attributes": null, "BalancerAttributes": null, "Type": 0, "Metadata": null }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" W0427 02:21:52.319048 1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to { "Addr": "127.0.0.1:2379", "ServerName": "127.0.0.1", "Attributes": null, "BalancerAttributes": null, "Type": 0, "Metadata": null }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" * * ==> kube-apiserver [32e924113141] <== * Trace[1077493716]: [681.39911ms] [681.39911ms] END I0427 02:43:47.477260 1 trace.go:236] Trace[1629219556]: "Get" accept:application/json, */*,audit-id:6ae8fe63-5712-4543-9205-1e9ad759d3d2,client:192.168.49.2,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (27-Apr-2024 02:43:46.705) (total time: 771ms): Trace[1629219556]: ---"About to write a response" 771ms (02:43:47.476) Trace[1629219556]: [771.556893ms] [771.556893ms] END I0427 02:43:47.478122 1 trace.go:236] Trace[1272330958]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:9c53f8f3-5f50-4897-af72-057e0134f9d7,client:192.168.49.2,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/default/deployments/grafana/status,user-agent:kube-controller-manager/v1.28.3 (linux/amd64) kubernetes/a8a1abc/system:serviceaccount:kube-system:deployment-controller,verb:PUT (27-Apr-2024 02:43:46.888) (total time: 589ms): Trace[1272330958]: ["GuaranteedUpdate etcd3" audit-id:9c53f8f3-5f50-4897-af72-057e0134f9d7,key:/deployments/default/grafana,type:*apps.Deployment,resource:deployments.apps 589ms (02:43:46.888) Trace[1272330958]: ---"About to Encode" 305ms (02:43:47.194) Trace[1272330958]: ---"Txn call completed" 282ms (02:43:47.477)] Trace[1272330958]: [589.918769ms] [589.918769ms] END E0427 02:43:50.674617 1 queueset.go:489] "Overflow" err="queueset::currentR overflow" QS="workload-low" when="2024-04-27 02:43:50.674571004" prevR="0.24387055ss" incrR="184467440737.09539546ss" currentR="0.24374985ss" E0427 02:43:52.077562 1 queueset.go:489] "Overflow" err="queueset::currentR overflow" QS="workload-low" when="2024-04-27 02:43:52.077476812" prevR="0.46623680ss" incrR="184467440737.09550324ss" currentR="0.46622388ss" E0427 02:43:52.077848 1 queueset.go:489] "Overflow" err="queueset::currentR overflow" QS="workload-low" when="2024-04-27 02:43:52.077801661" prevR="0.46658327ss" incrR="184467440737.09548160ss" currentR="0.46654871ss" I0427 02:45:03.076986 1 trace.go:236] Trace[294705040]: "Get" accept:application/json, 
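Note: every warning in this section is the API server's gRPC client retrying etcd at 127.0.0.1:2379 after etcd closed its listeners at 02:21:42 (see the etcd section above), so these are a symptom of the shutdown rather than an independent failure. A sketch that counts the failures per channel, using the klog header format documented at the top of this log ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) and assuming the section is saved to a hypothetical apiserver-031a.log:

    import re
    from collections import Counter

    # klog header, per the "Log line format" note at the top of this log.
    KLOG = re.compile(r"^[IWEF]\d{4} \d{2}:\d{2}:\d{2}\.\d{6}\s+\d+ \S+:\d+\] (.*)")
    CHANNEL = re.compile(r"\[Channel #(\d+)")

    failures = Counter()
    with open("apiserver-031a.log") as f:  # hypothetical file for this section
        for line in f:
            m = KLOG.match(line)
            if m and "failed to connect to" in m.group(1):
                ch = CHANNEL.search(m.group(1))
                if ch:
                    failures["Channel #" + ch.group(1)] += 1

    print(failures.most_common())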
* 
* ==> kube-apiserver [32e924113141] <==
* Trace[1077493716]: [681.39911ms] [681.39911ms] END
I0427 02:43:47.477260 1 trace.go:236] Trace[1629219556]: "Get" accept:application/json, */*,audit-id:6ae8fe63-5712-4543-9205-1e9ad759d3d2,client:192.168.49.2,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (27-Apr-2024 02:43:46.705) (total time: 771ms):
Trace[1629219556]: ---"About to write a response" 771ms (02:43:47.476)
Trace[1629219556]: [771.556893ms] [771.556893ms] END
I0427 02:43:47.478122 1 trace.go:236] Trace[1272330958]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:9c53f8f3-5f50-4897-af72-057e0134f9d7,client:192.168.49.2,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/default/deployments/grafana/status,user-agent:kube-controller-manager/v1.28.3 (linux/amd64) kubernetes/a8a1abc/system:serviceaccount:kube-system:deployment-controller,verb:PUT (27-Apr-2024 02:43:46.888) (total time: 589ms):
Trace[1272330958]: ["GuaranteedUpdate etcd3" audit-id:9c53f8f3-5f50-4897-af72-057e0134f9d7,key:/deployments/default/grafana,type:*apps.Deployment,resource:deployments.apps 589ms (02:43:46.888)
Trace[1272330958]: ---"About to Encode" 305ms (02:43:47.194)
Trace[1272330958]: ---"Txn call completed" 282ms (02:43:47.477)]
Trace[1272330958]: [589.918769ms] [589.918769ms] END
E0427 02:43:50.674617 1 queueset.go:489] "Overflow" err="queueset::currentR overflow" QS="workload-low" when="2024-04-27 02:43:50.674571004" prevR="0.24387055ss" incrR="184467440737.09539546ss" currentR="0.24374985ss"
E0427 02:43:52.077562 1 queueset.go:489] "Overflow" err="queueset::currentR overflow" QS="workload-low" when="2024-04-27 02:43:52.077476812" prevR="0.46623680ss" incrR="184467440737.09550324ss" currentR="0.46622388ss"
E0427 02:43:52.077848 1 queueset.go:489] "Overflow" err="queueset::currentR overflow" QS="workload-low" when="2024-04-27 02:43:52.077801661" prevR="0.46658327ss" incrR="184467440737.09548160ss" currentR="0.46654871ss"
I0427 02:45:03.076986 1 trace.go:236] Trace[294705040]: "Get" accept:application/json, */*,audit-id:feb8b9cf-e021-4782-bbf1-2d6163ce3322,client:192.168.49.2,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (27-Apr-2024 02:45:02.496) (total time: 580ms):
Trace[294705040]: ---"About to write a response" 580ms (02:45:03.076)
Trace[294705040]: [580.283008ms] [580.283008ms] END
I0427 02:46:26.076354 1 trace.go:236] Trace[601000991]: "GuaranteedUpdate etcd3" audit-id:,key:/ranges/serviceips,type:*core.RangeAllocation,resource:serviceipallocations (27-Apr-2024 02:46:25.079) (total time: 996ms):
Trace[601000991]: ---"initial value restored" 593ms (02:46:25.673)
Trace[601000991]: ---"Txn call completed" 400ms (02:46:26.074)
Trace[601000991]: [996.583577ms] [996.583577ms] END
I0427 02:46:26.718606 1 alloc.go:330] "allocated clusterIPs" service="default/grafana-ext" clusterIPs={"IPv4":"10.110.213.126"}
I0427 02:46:26.773239 1 trace.go:236] Trace[1552972739]: "Create" accept:application/json, */*,audit-id:2d61173d-a86e-472a-ab7f-774dd8292c46,client:192.168.49.1,protocol:HTTP/2.0,resource:services,scope:resource,url:/api/v1/namespaces/default/services,user-agent:kubectl/v4.2.0 (linux/amd64) kubernetes/592b165,verb:POST (27-Apr-2024 02:46:25.078) (total time: 1694ms):
Trace[1552972739]: ---"Writing http response done" 54ms (02:46:26.773)
Trace[1552972739]: [1.69491506s] [1.69491506s] END
I0427 02:47:57.693639 1 trace.go:236] Trace[1782801184]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:e5c09af2-534b-4080-9281-9bf7a577e3fc,client:192.168.49.2,protocol:HTTP/2.0,resource:statefulsets,scope:resource,url:/apis/apps/v1/namespaces/default/statefulsets/prometheus-alertmanager/status,user-agent:kube-controller-manager/v1.28.3 (linux/amd64) kubernetes/a8a1abc/system:serviceaccount:kube-system:statefulset-controller,verb:PUT (27-Apr-2024 02:47:57.176) (total time: 516ms):
Trace[1782801184]: ["GuaranteedUpdate etcd3" audit-id:e5c09af2-534b-4080-9281-9bf7a577e3fc,key:/statefulsets/default/prometheus-alertmanager,type:*apps.StatefulSet,resource:statefulsets.apps 516ms (02:47:57.177)
Trace[1782801184]: ---"About to Encode" 106ms (02:47:57.283)
Trace[1782801184]: ---"Txn call completed" 408ms (02:47:57.692)]
Trace[1782801184]: [516.667669ms] [516.667669ms] END
I0427 02:48:11.882568 1 trace.go:236] Trace[1957124797]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:f687baf2-f89d-44a1-ab8c-cad570d78305,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-eqt674mfxb4j56mrjjkoe7b7ii,user-agent:kube-apiserver/v1.28.3 (linux/amd64) kubernetes/a8a1abc,verb:PUT (27-Apr-2024 02:48:11.380) (total time: 502ms):
Trace[1957124797]: [502.390553ms] [502.390553ms] END
I0427 02:48:18.357058 1 trace.go:236] Trace[977005470]: "Update" accept:application/json, */*,audit-id:eaac06c5-49a4-41b9-ab29-4fc02bca5bfd,client:192.168.49.2,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (27-Apr-2024 02:48:17.817) (total time: 539ms):
Trace[977005470]: ["GuaranteedUpdate etcd3" audit-id:eaac06c5-49a4-41b9-ab29-4fc02bca5bfd,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 528ms (02:48:17.828)
Trace[977005470]: ---"About to Encode" 194ms (02:48:18.022)
Trace[977005470]: ---"Txn call completed" 333ms (02:48:18.356)]
Trace[977005470]: [539.128523ms] [539.128523ms] END
I0427 02:48:30.683954 1 trace.go:236] Trace[1398022472]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:50f4e3e9-8c0e-4f3b-8619-44df411d1cc3,client:192.168.49.2,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/default/deployments/prometheus-server/status,user-agent:kube-controller-manager/v1.28.3 (linux/amd64) kubernetes/a8a1abc/system:serviceaccount:kube-system:deployment-controller,verb:PUT (27-Apr-2024 02:48:30.182) (total time: 500ms):
Trace[1398022472]: ---"Writing http response done" 101ms (02:48:30.683)
Trace[1398022472]: [500.88239ms] [500.88239ms] END
I0427 02:48:50.911659 1 trace.go:236] Trace[1301493672]: "Update" accept:application/json, */*,audit-id:a0d9c01f-70ab-4146-86bb-1331f2451970,client:192.168.49.2,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (27-Apr-2024 02:48:49.033) (total time: 1877ms):
Trace[1301493672]: ["GuaranteedUpdate etcd3" audit-id:a0d9c01f-70ab-4146-86bb-1331f2451970,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 1877ms (02:48:49.034)
Trace[1301493672]: ---"Txn call completed" 1876ms (02:48:50.911)]
Trace[1301493672]: [1.877672769s] [1.877672769s] END
I0427 02:48:51.672874 1 trace.go:236] Trace[888397599]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.49.2,type:*v1.Endpoints,resource:apiServerIPInfo (27-Apr-2024 02:48:49.812) (total time: 1860ms):
Trace[888397599]: ---"initial value restored" 1099ms (02:48:50.912)
Trace[888397599]: ---"Transaction prepared" 280ms (02:48:51.192)
Trace[888397599]: ---"Txn call completed" 480ms (02:48:51.672)
Trace[888397599]: [1.860472962s] [1.860472962s] END
I0427 02:49:00.672472 1 trace.go:236] Trace[1947150765]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.49.2,type:*v1.Endpoints,resource:apiServerIPInfo (27-Apr-2024 02:49:00.073) (total time: 599ms):
Trace[1947150765]: ---"initial value restored" 408ms (02:49:00.481)
Trace[1947150765]: ---"Transaction prepared" 100ms (02:49:00.582)
Trace[1947150765]: ---"Txn call completed" 89ms (02:49:00.672)
Trace[1947150765]: [599.324355ms] [599.324355ms] END
I0427 02:51:10.794283 1 trace.go:236] Trace[1742659765]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.49.2,type:*v1.Endpoints,resource:apiServerIPInfo (27-Apr-2024 02:51:10.286) (total time: 507ms):
Trace[1742659765]: ---"Transaction prepared" 265ms (02:51:10.579)
Trace[1742659765]: ---"Txn call completed" 214ms (02:51:10.794)
Trace[1742659765]: [507.95349ms] [507.95349ms] END
I0427 02:51:30.872669 1 trace.go:236] Trace[1409213612]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.49.2,type:*v1.Endpoints,resource:apiServerIPInfo (27-Apr-2024 02:51:30.372) (total time: 500ms):
Trace[1409213612]: ---"Transaction prepared" 287ms (02:51:30.687)
Trace[1409213612]: ---"Txn call completed" 185ms (02:51:30.872)
Trace[1409213612]: [500.458615ms] [500.458615ms] END
* 
* ==> kube-controller-manager [5f82d367b76b] <==
* I0427 02:23:37.495210 1 serving.go:348] Generated self-signed cert in-memory
I0427 02:23:38.781828 1 controllermanager.go:189] "Starting" version="v1.28.3"
I0427 02:23:38.782437 1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0427 02:23:38.785845 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0427 02:23:38.786358 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0427 02:23:38.875108 1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
I0427 02:23:38.875311 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0427 02:23:58.894884 1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
* 
* ==> kube-controller-manager [f6536c32b376] <==
* I0427 02:24:57.676101 1 shared_informer.go:318] Caches are synced for GC
I0427 02:24:57.677919 1 shared_informer.go:318] Caches are synced for PV protection
I0427 02:24:57.678065 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="minikube"
I0427 02:24:57.678461 1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
I0427 02:24:57.681876 1 shared_informer.go:318] Caches are synced for cronjob
I0427 02:24:57.682272 1 event.go:307] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0427 02:24:57.697278 1 shared_informer.go:318] Caches are synced for stateful set
I0427 02:24:57.697333 1 shared_informer.go:318] Caches are synced for bootstrap_signer
I0427 02:24:57.773247 1 shared_informer.go:318] Caches are synced for resource quota
I0427 02:24:57.774073 1 shared_informer.go:318] Caches are synced for disruption
I0427 02:24:57.774267 1 shared_informer.go:318] Caches are synced for attach detach
I0427 02:24:57.774521 1 shared_informer.go:318] Caches are synced for endpoint_slice
I0427 02:24:57.775406 1 shared_informer.go:318] Caches are synced for crt configmap
I0427 02:24:57.775803 1 shared_informer.go:318] Caches are synced for persistent volume
I0427 02:24:57.777917 1 shared_informer.go:318] Caches are synced for resource quota
I0427 02:24:57.873529 1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
I0427 02:24:57.882482 1 shared_informer.go:311] Waiting for caches to sync for garbage collector
I0427 02:24:57.997020 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="424.104122ms"
I0427 02:24:58.072965 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="185.349µs"
I0427 02:24:58.178864 1 shared_informer.go:318] Caches are synced for garbage collector
I0427 02:24:58.184048 1 shared_informer.go:318] Caches are synced for garbage collector
I0427 02:24:58.184295 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
I0427 02:28:55.281562 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/sample-python-app-5c4ff9d694" duration="16.569µs"
I0427 02:38:22.808646 1 event.go:307] "Event occurred" object="default/prometheus-server" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
I0427 02:38:22.877697 1 event.go:307] "Event occurred" object="default/prometheus-server" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
I0427 02:38:25.976733 1 event.go:307] "Event occurred" object="default/prometheus-prometheus-node-exporter" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: prometheus-prometheus-node-exporter-rmfcg"
I0427 02:38:26.074478 1 event.go:307] "Event occurred" object="default/prometheus-kube-state-metrics" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set prometheus-kube-state-metrics-6b7d7b9bd9 to 1"
I0427 02:38:26.089007 1 event.go:307] "Event occurred" object="default/prometheus-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set prometheus-server-579dc9cfdf to 1"
I0427 02:38:26.089047 1 event.go:307] "Event occurred" object="default/prometheus-prometheus-pushgateway" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set prometheus-prometheus-pushgateway-568fbf799 to 1"
I0427 02:38:26.284989 1 event.go:307] "Event occurred" object="default/prometheus-kube-state-metrics-6b7d7b9bd9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: prometheus-kube-state-metrics-6b7d7b9bd9-shbt6"
I0427 02:38:26.491580 1 event.go:307] "Event occurred" object="default/prometheus-prometheus-pushgateway-568fbf799" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: prometheus-prometheus-pushgateway-568fbf799-qlhwx"
I0427 02:38:26.491621 1 event.go:307] "Event occurred" object="default/prometheus-server-579dc9cfdf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: prometheus-server-579dc9cfdf-jz9x9"
I0427 02:38:26.514250 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-kube-state-metrics-6b7d7b9bd9" duration="439.329863ms"
I0427 02:38:26.695583 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-prometheus-pushgateway-568fbf799" duration="606.142347ms"
I0427 02:38:28.373762 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-server-579dc9cfdf" duration="2.283722895s"
I0427 02:38:28.486842 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-kube-state-metrics-6b7d7b9bd9" duration="1.972387077s"
I0427 02:38:28.487031 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-kube-state-metrics-6b7d7b9bd9" duration="92.832µs"
I0427 02:38:28.792550 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-server-579dc9cfdf" duration="418.710134ms"
I0427 02:38:28.976300 1 event.go:307] "Event occurred" object="default/prometheus-alertmanager" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim storage-prometheus-alertmanager-0 Pod prometheus-alertmanager-0 in StatefulSet prometheus-alertmanager success"
I0427 02:38:29.406438 1 event.go:307] "Event occurred" object="default/storage-prometheus-alertmanager-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
I0427 02:38:29.781163 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-kube-state-metrics-6b7d7b9bd9" duration="1.389119ms"
I0427 02:38:29.783844 1 event.go:307] "Event occurred" object="default/prometheus-alertmanager" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod prometheus-alertmanager-0 in StatefulSet prometheus-alertmanager successful"
I0427 02:38:29.984289 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-prometheus-pushgateway-568fbf799" duration="3.288502647s"
I0427 02:38:29.984500 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-prometheus-pushgateway-568fbf799" duration="145.279µs"
I0427 02:38:30.880529 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-server-579dc9cfdf" duration="2.087890971s"
I0427 02:38:30.880741 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-server-579dc9cfdf" duration="139.825µs"
I0427 02:38:31.175552 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-server-579dc9cfdf" duration="132.937µs"
I0427 02:38:32.575054 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-prometheus-pushgateway-568fbf799" duration="127.178µs"
I0427 02:43:45.373440 1 event.go:307] "Event occurred" object="default/grafana" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grafana-6f756986c7 to 1"
I0427 02:43:47.188477 1 event.go:307] "Event occurred" object="default/grafana-6f756986c7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grafana-6f756986c7-z2c7z"
I0427 02:43:47.575848 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/grafana-6f756986c7" duration="2.2025907s"
I0427 02:43:47.974398 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/grafana-6f756986c7" duration="398.472325ms"
I0427 02:43:47.974937 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/grafana-6f756986c7" duration="155.322µs"
I0427 02:43:49.181433 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/grafana-6f756986c7" duration="140.216µs"
I0427 02:43:54.482360 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-kube-state-metrics-6b7d7b9bd9" duration="121.172µs"
I0427 02:43:55.599681 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-kube-state-metrics-6b7d7b9bd9" duration="395.090978ms"
I0427 02:43:55.600344 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-kube-state-metrics-6b7d7b9bd9" duration="112.74µs"
I0427 02:44:57.775597 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-prometheus-pushgateway-568fbf799" duration="192.679µs"
I0427 02:45:07.880364 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-prometheus-pushgateway-568fbf799" duration="580.013ms"
I0427 02:45:07.880548 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/prometheus-prometheus-pushgateway-568fbf799" duration="113.8µs"
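Note: the first controller-manager container above died waiting on the API server's rbac/bootstrap-roles hook; its replacement then syncs its caches and spends the rest of the section emitting routine events for the prometheus and grafana rollouts. A sketch that tallies those events by reason, assuming both sections are saved to a hypothetical controller-manager.log:

    import re
    from collections import Counter

    # Tally "Event occurred" lines by reason, e.g. reason="ScalingReplicaSet".
    REASON = re.compile(r'"Event occurred" .*?reason="([^"]+)"')

    reasons = Counter()
    with open("controller-manager.log") as f:  # hypothetical file for these sections
        for line in f:
            m = REASON.search(line)
            if m:
                reasons[m.group(1)] += 1

    print(reasons.most_common())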
* 
* ==> kube-proxy [161a3e514a15] <==
* I0427 02:24:38.190325 1 server_others.go:69] "Using iptables proxy"
I0427 02:24:38.492419 1 node.go:141] Successfully retrieved node IP: 192.168.49.2
I0427 02:24:39.594471 1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0427 02:24:39.673101 1 server_others.go:152] "Using iptables Proxier"
I0427 02:24:39.673172 1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0427 02:24:39.673189 1 server_others.go:438] "Defaulting to no-op detect-local"
I0427 02:24:39.673234 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0427 02:24:39.673562 1 server.go:846] "Version info" version="v1.28.3"
I0427 02:24:39.673598 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0427 02:24:39.703405 1 config.go:315] "Starting node config controller"
I0427 02:24:39.703428 1 shared_informer.go:311] Waiting for caches to sync for node config
I0427 02:24:39.703448 1 config.go:188] "Starting service config controller"
I0427 02:24:39.703466 1 shared_informer.go:311] Waiting for caches to sync for service config
I0427 02:24:39.703425 1 config.go:97] "Starting endpoint slice config controller"
I0427 02:24:39.703452 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0427 02:24:39.803854 1 shared_informer.go:318] Caches are synced for service config
I0427 02:24:39.805282 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0427 02:24:40.003789 1 shared_informer.go:318] Caches are synced for node config
* 
* ==> kube-proxy [c07f5aedabbf] <==
* I0427 02:06:59.205276 1 server_others.go:69] "Using iptables proxy"
I0427 02:06:59.408136 1 node.go:141] Successfully retrieved node IP: 192.168.49.2
I0427 02:07:00.998904 1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0427 02:07:01.280066 1 server_others.go:152] "Using iptables Proxier"
I0427 02:07:01.300001 1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0427 02:07:01.300024 1 server_others.go:438] "Defaulting to no-op detect-local"
I0427 02:07:01.306856 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0427 02:07:01.311846 1 server.go:846] "Version info" version="v1.28.3"
I0427 02:07:01.311884 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0427 02:07:01.396533 1 config.go:188] "Starting service config controller"
I0427 02:07:01.397250 1 shared_informer.go:311] Waiting for caches to sync for service config
I0427 02:07:01.398109 1 config.go:315] "Starting node config controller"
I0427 02:07:01.398128 1 shared_informer.go:311] Waiting for caches to sync for node config
I0427 02:07:01.398451 1 config.go:97] "Starting endpoint slice config controller"
I0427 02:07:01.398470 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0427 02:07:01.781925 1 shared_informer.go:318] Caches are synced for node config
I0427 02:07:01.782352 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0427 02:07:01.798253 1 shared_informer.go:318] Caches are synced for service config
I0427 02:21:39.096474 1 trace.go:236] Trace[489366960]: "iptables ChainExists" (27-Apr-2024 02:21:35.974) (total time: 3121ms):
Trace[489366960]: [3.121627762s] [3.121627762s] END
* 
* ==> kube-scheduler [116834c2dbca] <==
* E0427 02:05:41.289672 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0427 02:05:41.289821 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0427 02:05:41.316870 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0427 02:05:41.316913 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0427 02:05:41.289852 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0427 02:05:41.900821 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0427 02:05:41.974678 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0427 02:05:42.686993 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0427 02:05:42.691516 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0427 02:05:43.282595 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0427 02:05:43.282695 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0427 02:05:46.686175 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0427 02:05:46.686243 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0427 02:05:47.473153 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0427 02:05:47.473249 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0427 02:05:48.093569 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0427 02:05:48.093633 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0427 02:05:48.979099 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0427 02:05:48.979157 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0427 02:05:49.583813 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0427 02:05:49.583869 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0427 02:05:50.177329 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0427 02:05:50.177397 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0427 02:05:50.579771 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0427 02:05:50.582731 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0427 02:05:50.980978 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0427 02:05:50.997877 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0427 02:05:51.573961 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0427 02:05:51.574822 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0427 02:05:51.579988 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0427 02:05:51.580109 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0427 02:05:51.590864 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0427 02:05:51.590905 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0427 02:05:52.080575 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0427 02:05:52.080653 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0427 02:05:53.087209 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0427 02:05:53.088855 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0427 02:05:54.105684 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0427 02:05:54.178103 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0427 02:05:54.691145 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0427 02:05:54.691206 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0427 02:06:18.013080 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
W0427 02:21:35.973594 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.CSINode ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:35.973661 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:35.973980 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:35.982045 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:35.982227 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.ReplicaSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:35.982312 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:35.982456 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:35.982540 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:35.982625 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.ReplicationController ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:35.982697 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Namespace ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:36.390194 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.PodDisruptionBudget ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:36.393584 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:36.587513 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0427 02:21:36.610760 1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.StatefulSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0427 02:21:41.697717 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I0427 02:21:41.714653 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0427 02:21:41.716296 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0427 02:21:41.784480 1 run.go:74] "command failed" err="finished without leader elect"
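Note: the 02:05 errors in this section share one cause: the scheduler started before the API server finished RBAC bootstrapping (compare the controller-manager healthz output above, where [-]poststarthook/rbac/bootstrap-roles is the failing hook), so list/watch was forbidden until the roles existed; the 02:21:35 watch-stream errors then line up with the etcd shutdown. A sketch that summarizes which resources were forbidden, assuming the section is saved to a hypothetical scheduler.log:

    import re
    from collections import Counter

    # The startup errors all have the same shape; extract resource and group.
    FORBIDDEN = re.compile(r'cannot list resource "([^"]+)" in API group "([^"]*)"')

    hits = Counter()
    with open("scheduler.log") as f:  # hypothetical file for this section
        for line in f:
            m = FORBIDDEN.search(line)
            if m:
                hits[(m.group(2) or "core", m.group(1))] += 1

    for (group, resource), count in sorted(hits.items()):
        print(f"{group}/{resource}: {count}")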
W0427 02:23:47.681271 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0427 02:23:57.883033 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3" I0427 02:23:57.883205 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0427 02:23:57.894925 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259 I0427 02:23:57.895929 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0427 02:23:57.895963 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0427 02:23:57.895995 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0427 02:23:58.972471 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * Apr 27 02:45:42 minikube kubelet[1563]: E0427 02:45:42.491784 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.419s" Apr 27 02:45:53 minikube kubelet[1563]: I0427 02:45:53.189701 1563 kuberuntime_container_linux.go:167] "No swap cgroup controller present" swapBehavior="" pod="default/prometheus-server-579dc9cfdf-jz9x9" containerName="prometheus-server-configmap-reload" Apr 27 02:45:58 minikube kubelet[1563]: E0427 02:45:58.080451 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.103s" Apr 27 02:46:01 minikube kubelet[1563]: E0427 02:46:01.180044 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.202s" Apr 27 02:46:02 minikube kubelet[1563]: E0427 02:46:02.597397 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.405s" Apr 27 02:46:10 minikube kubelet[1563]: E0427 02:46:10.280377 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.202s" Apr 27 02:46:12 minikube kubelet[1563]: E0427 02:46:12.186792 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.204s" Apr 27 02:46:16 minikube kubelet[1563]: E0427 02:46:16.497428 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.522s" Apr 27 02:46:18 minikube kubelet[1563]: E0427 02:46:18.991570 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.012s" Apr 27 02:46:20 minikube kubelet[1563]: E0427 02:46:20.473443 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.396s" Apr 27 02:46:22 minikube kubelet[1563]: E0427 02:46:22.714715 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.734s" Apr 27 02:46:28 minikube kubelet[1563]: E0427 02:46:28.200885 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.213s" Apr 27 02:46:44 minikube kubelet[1563]: E0427 02:46:44.276249 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.294s" Apr 27 02:46:59 minikube kubelet[1563]: E0427 
02:46:59.983920 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.002s" Apr 27 02:47:02 minikube kubelet[1563]: E0427 02:47:02.573349 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.575s" Apr 27 02:47:04 minikube kubelet[1563]: E0427 02:47:04.878643 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.784s" Apr 27 02:47:22 minikube kubelet[1563]: E0427 02:47:22.291691 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.097s" Apr 27 02:47:41 minikube kubelet[1563]: E0427 02:47:41.300943 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.325s" Apr 27 02:47:43 minikube kubelet[1563]: I0427 02:47:43.694853 1563 kuberuntime_container_linux.go:167] "No swap cgroup controller present" swapBehavior="" pod="default/prometheus-alertmanager-0" containerName="alertmanager" Apr 27 02:47:54 minikube kubelet[1563]: E0427 02:47:54.687514 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.698s" Apr 27 02:47:56 minikube kubelet[1563]: E0427 02:47:56.485200 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.273s" Apr 27 02:47:57 minikube kubelet[1563]: I0427 02:47:57.076688 1563 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/prometheus-alertmanager-0" podStartSLOduration=47.161332174 podCreationTimestamp="2024-04-27 02:38:28 +0000 UTC" firstStartedPulling="2024-04-27 02:39:01.676875355 +0000 UTC m=+956.051561373" lastFinishedPulling="2024-04-27 02:47:43.592145952 +0000 UTC m=+1477.966832009" observedRunningTime="2024-04-27 02:47:55.874915482 +0000 UTC m=+1490.249601609" watchObservedRunningTime="2024-04-27 02:47:57.07660281 +0000 UTC m=+1491.451288827" Apr 27 02:47:59 minikube kubelet[1563]: E0427 02:47:59.495597 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.516s" Apr 27 02:48:01 minikube kubelet[1563]: E0427 02:48:01.684813 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.189s" Apr 27 02:48:04 minikube kubelet[1563]: E0427 02:48:04.177218 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.491s" Apr 27 02:48:05 minikube kubelet[1563]: E0427 02:48:05.495283 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.317s" Apr 27 02:48:10 minikube kubelet[1563]: E0427 02:48:10.696297 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.716s" Apr 27 02:48:12 minikube kubelet[1563]: E0427 02:48:12.693316 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.712s" Apr 27 02:48:18 minikube kubelet[1563]: E0427 02:48:18.660455 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.585s" Apr 27 02:48:21 minikube kubelet[1563]: E0427 02:48:21.991386 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" 
expected="1s" actual="1.004s" Apr 27 02:48:27 minikube kubelet[1563]: E0427 02:48:27.075429 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.998s" Apr 27 02:48:30 minikube kubelet[1563]: E0427 02:48:30.297629 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.283s" Apr 27 02:48:32 minikube kubelet[1563]: E0427 02:48:32.484649 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.204s" Apr 27 02:48:38 minikube kubelet[1563]: E0427 02:48:38.229908 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.254s" Apr 27 02:48:40 minikube kubelet[1563]: E0427 02:48:40.773869 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.794s" Apr 27 02:48:44 minikube kubelet[1563]: E0427 02:48:44.987317 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.996s" Apr 27 02:48:45 minikube kubelet[1563]: E0427 02:48:45.989802 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.002s" Apr 27 02:48:51 minikube kubelet[1563]: E0427 02:48:51.107824 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.113s" Apr 27 02:48:53 minikube kubelet[1563]: E0427 02:48:53.093804 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.98s" Apr 27 02:48:54 minikube kubelet[1563]: E0427 02:48:54.724967 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.553s" Apr 27 02:48:56 minikube kubelet[1563]: E0427 02:48:56.197620 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.206s" Apr 27 02:49:01 minikube kubelet[1563]: E0427 02:49:01.103001 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.127s" Apr 27 02:49:10 minikube kubelet[1563]: E0427 02:49:10.674578 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.691s" Apr 27 02:49:13 minikube kubelet[1563]: E0427 02:49:13.297206 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.312s" Apr 27 02:49:14 minikube kubelet[1563]: E0427 02:49:14.773488 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.476s" Apr 27 02:49:30 minikube kubelet[1563]: E0427 02:49:30.200268 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.224s" Apr 27 02:49:32 minikube kubelet[1563]: E0427 02:49:32.384554 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.39s" Apr 27 02:50:06 minikube kubelet[1563]: E0427 02:50:06.186752 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.191s" Apr 27 02:50:44 minikube kubelet[1563]: E0427 02:50:44.085590 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping 
took too long" expected="1s" actual="1.013s" Apr 27 02:51:00 minikube kubelet[1563]: E0427 02:51:00.103920 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.112s" Apr 27 02:51:08 minikube kubelet[1563]: E0427 02:51:08.083900 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.1s" Apr 27 02:51:16 minikube kubelet[1563]: E0427 02:51:16.473296 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.399s" Apr 27 02:51:35 minikube kubelet[1563]: E0427 02:51:35.082952 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.104s" Apr 27 02:51:36 minikube kubelet[1563]: E0427 02:51:36.295707 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.212s" Apr 27 02:51:51 minikube kubelet[1563]: E0427 02:51:51.092929 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.117s" Apr 27 02:52:08 minikube kubelet[1563]: E0427 02:52:08.510253 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.523s" Apr 27 02:52:27 minikube kubelet[1563]: E0427 02:52:27.501632 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.511s" Apr 27 02:52:41 minikube kubelet[1563]: E0427 02:52:41.582592 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.607s" Apr 27 02:52:44 minikube kubelet[1563]: E0427 02:52:44.286544 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.288s" Apr 27 02:52:48 minikube kubelet[1563]: E0427 02:52:48.291899 1563 kubelet.go:2477] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.295s" * * ==> storage-provisioner [1a86c17b1419] <== * I0427 02:26:55.182544 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0427 02:26:55.692879 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0427 02:26:55.692981 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0427 02:27:12.010325 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0427 02:27:12.022302 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"95e832b0-0a84-4fa6-bb29-fed961547076", APIVersion:"v1", ResourceVersion:"961", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_3060e2c4-8fc2-4921-898c-b5550510a81f became leader I0427 02:27:12.022569 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_3060e2c4-8fc2-4921-898c-b5550510a81f! I0427 02:27:12.985473 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_3060e2c4-8fc2-4921-898c-b5550510a81f! 
I0427 02:38:22.902436 1 controller.go:1332] provision "default/prometheus-server" class "standard": started
I0427 02:38:22.902572 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 97399377-ea8c-4c71-a43d-17d9d7ef361b 399 0 2024-04-27 02:06:56 +0000 UTC map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"} storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-04-27 02:06:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-66b504c4-911e-4217-ae4f-8a8826b7f655 &PersistentVolumeClaim{ObjectMeta:{prometheus-server default 66b504c4-911e-4217-ae4f-8a8826b7f655 1459 0 2024-04-27 02:38:21 +0000 UTC map[app.kubernetes.io/component:server app.kubernetes.io/instance:prometheus app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:prometheus app.kubernetes.io/part-of:prometheus app.kubernetes.io/version:v2.51.2 helm.sh/chart:prometheus-25.20.1] map[meta.helm.sh/release-name:prometheus meta.helm.sh/release-namespace:default volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{helm Update v1 2024-04-27 02:38:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:helm.sh/chart":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2024-04-27 02:38:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{8589934592 0} {} BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/prometheus-server
I0427 02:38:22.995931 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"prometheus-server", UID:"66b504c4-911e-4217-ae4f-8a8826b7f655", APIVersion:"v1", ResourceVersion:"1459", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/prometheus-server"
I0427 02:38:23.077677 1 controller.go:1439] provision "default/prometheus-server" class "standard": volume "pvc-66b504c4-911e-4217-ae4f-8a8826b7f655" provisioned
I0427 02:38:23.077793 1 controller.go:1456] provision "default/prometheus-server" class "standard": succeeded
I0427 02:38:23.094950 1 volume_store.go:212] Trying to save persistentvolume "pvc-66b504c4-911e-4217-ae4f-8a8826b7f655"
I0427 02:38:23.286359 1 volume_store.go:219] persistentvolume "pvc-66b504c4-911e-4217-ae4f-8a8826b7f655" saved
I0427 02:38:23.292889 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"prometheus-server", UID:"66b504c4-911e-4217-ae4f-8a8826b7f655", APIVersion:"v1", ResourceVersion:"1459", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-66b504c4-911e-4217-ae4f-8a8826b7f655
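*
* ==> Note: reading the storage_provisioner.go:61 dump <==
*
The wall of text at storage_provisioner.go:61 is Go's verbose print of two API objects: the default "standard" StorageClass (provisioner k8s.io/minikube-hostpath, ReclaimPolicy Delete, VolumeBindingMode Immediate) and the pending Helm-created claim; the quantity {{8589934592 0} {} BinarySI} is 8589934592 bytes, i.e. 8Gi. A minimal sketch reconstructing that claim with the API types of the client-go vintage this provisioner prints (Resources is still corev1.ResourceRequirements here); field values are taken from the dump above, everything else is illustrative:

    // pvc.go - reconstruct the prometheus-server claim from the dump above.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        sc := "standard" // minikube's default, hostpath-backed StorageClass
        pvc := &corev1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{Name: "prometheus-server", Namespace: "default"},
            Spec: corev1.PersistentVolumeClaimSpec{
                AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                StorageClassName: &sc,
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        // {{8589934592 0} {} BinarySI} in the dump == 8Gi
                        corev1.ResourceStorage: resource.MustParse("8Gi"),
                    },
                },
            },
        }
        q := pvc.Spec.Resources.Requests[corev1.ResourceStorage]
        fmt.Printf("%s/%s requests %s from class %q\n", pvc.Namespace, pvc.Name, q.String(), sc)
    }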
provision "default/prometheus-server" class "standard": volume "pvc-66b504c4-911e-4217-ae4f-8a8826b7f655" provisioned I0427 02:38:23.077793 1 controller.go:1456] provision "default/prometheus-server" class "standard": succeeded I0427 02:38:23.094950 1 volume_store.go:212] Trying to save persistentvolume "pvc-66b504c4-911e-4217-ae4f-8a8826b7f655" I0427 02:38:23.286359 1 volume_store.go:219] persistentvolume "pvc-66b504c4-911e-4217-ae4f-8a8826b7f655" saved I0427 02:38:23.292889 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"prometheus-server", UID:"66b504c4-911e-4217-ae4f-8a8826b7f655", APIVersion:"v1", ResourceVersion:"1459", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-66b504c4-911e-4217-ae4f-8a8826b7f655 I0427 02:38:29.786338 1 controller.go:1332] provision "default/storage-prometheus-alertmanager-0" class "standard": started I0427 02:38:29.786445 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 97399377-ea8c-4c71-a43d-17d9d7ef361b 399 0 2024-04-27 02:06:56 +0000 UTC map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"} storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-04-27 02:06:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-247f997b-bab2-48da-bb78-6888c6b63a94 &PersistentVolumeClaim{ObjectMeta:{storage-prometheus-alertmanager-0 default 247f997b-bab2-48da-bb78-6888c6b63a94 1544 0 2024-04-27 02:38:28 +0000 UTC map[app.kubernetes.io/instance:prometheus app.kubernetes.io/name:alertmanager] map[volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2024-04-27 02:38:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/name":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/storage-prometheus-alertmanager-0 I0427 02:38:29.873700 1 controller.go:1439] provision "default/storage-prometheus-alertmanager-0" 
class "standard": volume "pvc-247f997b-bab2-48da-bb78-6888c6b63a94" provisioned I0427 02:38:29.873748 1 controller.go:1456] provision "default/storage-prometheus-alertmanager-0" class "standard": succeeded I0427 02:38:29.873759 1 volume_store.go:212] Trying to save persistentvolume "pvc-247f997b-bab2-48da-bb78-6888c6b63a94" I0427 02:38:29.874436 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"storage-prometheus-alertmanager-0", UID:"247f997b-bab2-48da-bb78-6888c6b63a94", APIVersion:"v1", ResourceVersion:"1544", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/storage-prometheus-alertmanager-0" I0427 02:38:30.574152 1 volume_store.go:219] persistentvolume "pvc-247f997b-bab2-48da-bb78-6888c6b63a94" saved I0427 02:38:30.574641 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"storage-prometheus-alertmanager-0", UID:"247f997b-bab2-48da-bb78-6888c6b63a94", APIVersion:"v1", ResourceVersion:"1544", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-247f997b-bab2-48da-bb78-6888c6b63a94 * * ==> storage-provisioner [c73337051432] <== * I0427 02:24:42.201418 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0427 02:25:12.229615 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout