could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1" #2972

Closed
dxps opened this issue Oct 19, 2022 · 18 comments
Labels: kind/support (Categorizes issue or PR as a support question.)


dxps commented Oct 19, 2022

Dear KinD community,

I'd like to readdress this error (previously discussed here; thanks a lot, @BenTheElder, for the feedback there):

ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"

What happened:

  • Creating the first cluster works fine.
  • Creating a second cluster fails.

What you expected to happen:

The second cluster to be created successfully as well, just like the first one.


How to reproduce it (as minimally and precisely as possible):

Starting from a clean state (well, not perfectly clean, but with any existing kind clusters deleted), I create a first multi-node cluster using this config:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster

nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker

and the cluster is successfully created.
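
For reference, that first cluster is created with the usual command; a minimal sketch (the cluster name and config file name here are just illustrative):

# create the plain multi-node cluster from the config above
kind create cluster --name first --config first_cluster.yaml

# quick sanity check that all four nodes came up
kubectl get nodes --context kind-first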

Now, trying to create a second multi-node cluster with ingress support (as per this nice kind doc), based on this config:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster

nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker

the cluster creation fails:

❯ kind create cluster --name istioinaction --config istioinaction_cluster.yaml --retain; kind export logs --name=istioinaction
Creating cluster "istioinaction" ...
 ✓ Ensuring node image (kindest/node:v1.25.2) 🖼
 ✗ Preparing nodes 📦 📦 📦  
ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"
Exporting logs for cluster "istioinaction" to:
/tmp/297015354
ERROR: [command "docker exec --privileged istioinaction-worker2 sh -c 'tar --hard-dereference -C /var/log/ -chf - . || (r=$?; [ $r -eq 1 ] || exit $r)'" failed with error: exit status 1, [[command "docker exec --privileged istioinaction-worker2 cat /kind/version" failed with error: exit status 1, command "docker exec --privileged istioinaction-worker2 journalctl --no-pager" failed with error: exit status 1, command "docker exec --privileged istioinaction-worker2 journalctl --no-pager -u containerd.service" failed with error: exit status 1, command "docker exec --privileged istioinaction-worker2 journalctl --no-pager -u kubelet.service" failed with error: exit status 1, command "docker exec --privileged istioinaction-worker2 crictl images" failed with error: exit status 1], command "docker exec --privileged istioinaction-control-plane crictl images" failed with error: exit status 1, command "docker exec --privileged istioinaction-worker crictl images" failed with error: exit status 1]]
❯

The log export (to that /tmp/297015354 folder) collected these files and folders: docker-info.txt, istioinaction-control-plane, istioinaction-worker, istioinaction-worker2, kind-version.txt.

A brief grep for errors in it shows various details:

…/297015354 ❯ grep -Ri error *
istioinaction-control-plane/journal.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.832102115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.19.0-76051900-generic\\n\"): skip plugin" type=io.containerd.snapshotter.v1
istioinaction-control-plane/journal.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.832286765Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
istioinaction-control-plane/journal.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.832314360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
istioinaction-control-plane/journal.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.833018086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
istioinaction-control-plane/journal.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.833153084Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
istioinaction-control-plane/journal.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.833932795Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
istioinaction-control-plane/journal.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.833954489Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
istioinaction-control-plane/journal.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.834239050Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:/etc/containerd/cri-base.json NetworkPluginConfDir: NetworkPluginMaxConfNum:0} test-handler:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:/etc/containerd/cri-base.json NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[k8s.gcr.io:{Endpoints:[https://registry.k8s.io https://k8s.gcr.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.7 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
istioinaction-control-plane/journal.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.834481935Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
istioinaction-control-plane/containerd.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.832102115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.19.0-76051900-generic\\n\"): skip plugin" type=io.containerd.snapshotter.v1
istioinaction-control-plane/containerd.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.832286765Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
istioinaction-control-plane/containerd.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.832314360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
istioinaction-control-plane/containerd.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.833018086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
istioinaction-control-plane/containerd.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.833153084Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
istioinaction-control-plane/containerd.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.833932795Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
istioinaction-control-plane/containerd.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.833954489Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
istioinaction-control-plane/containerd.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.834239050Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:/etc/containerd/cri-base.json NetworkPluginConfDir: NetworkPluginMaxConfNum:0} test-handler:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:/etc/containerd/cri-base.json NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[k8s.gcr.io:{Endpoints:[https://registry.k8s.io https://k8s.gcr.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.7 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
istioinaction-control-plane/containerd.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.834481935Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
istioinaction-control-plane/inspect.json:            "Error": "",
istioinaction-control-plane/images.log:E1019 06:54:40.612221     140 remote_image.go:125] "ListImages with filter from image service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},},}"
istioinaction-control-plane/images.log:time="2022-10-19T06:54:40Z" level=fatal msg="listing images: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService"
istioinaction-worker/journal.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.478730633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.19.0-76051900-generic\\n\"): skip plugin" type=io.containerd.snapshotter.v1
istioinaction-worker/journal.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.479064829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
istioinaction-worker/journal.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.479120664Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
istioinaction-worker/journal.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.479515619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
istioinaction-worker/journal.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.479786921Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
istioinaction-worker/journal.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.481060666Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
istioinaction-worker/journal.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.481106352Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
istioinaction-worker/journal.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.481498498Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:/etc/containerd/cri-base.json NetworkPluginConfDir: NetworkPluginMaxConfNum:0} test-handler:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:/etc/containerd/cri-base.json NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[k8s.gcr.io:{Endpoints:[https://registry.k8s.io https://k8s.gcr.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.7 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
istioinaction-worker/journal.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.481860970Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
istioinaction-worker/containerd.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.478730633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.19.0-76051900-generic\\n\"): skip plugin" type=io.containerd.snapshotter.v1
istioinaction-worker/containerd.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.479064829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
istioinaction-worker/containerd.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.479120664Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
istioinaction-worker/containerd.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.479515619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
istioinaction-worker/containerd.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.479786921Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
istioinaction-worker/containerd.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.481060666Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
istioinaction-worker/containerd.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.481106352Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
istioinaction-worker/containerd.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.481498498Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:/etc/containerd/cri-base.json NetworkPluginConfDir: NetworkPluginMaxConfNum:0} test-handler:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:/etc/containerd/cri-base.json NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[k8s.gcr.io:{Endpoints:[https://registry.k8s.io https://k8s.gcr.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.7 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
istioinaction-worker/containerd.log:Oct 19 06:54:39 istioinaction-worker containerd[105]: time="2022-10-19T06:54:39.481860970Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
istioinaction-worker/inspect.json:            "Error": "",
istioinaction-worker/images.log:E1019 06:54:40.606981     126 remote_image.go:125] "ListImages with filter from image service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},},}"
istioinaction-worker/images.log:time="2022-10-19T06:54:40Z" level=fatal msg="listing images: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService"
istioinaction-worker2/journal.log:Error response from daemon: Container 8838abd1202c757ad92140b2d59fe13a61372c8a3c3b6a9f5329ba51b4c78d5a is not running
istioinaction-worker2/containerd.log:Error response from daemon: Container 8838abd1202c757ad92140b2d59fe13a61372c8a3c3b6a9f5329ba51b4c78d5a is not running
istioinaction-worker2/kubelet.log:Error response from daemon: Container 8838abd1202c757ad92140b2d59fe13a61372c8a3c3b6a9f5329ba51b4c78d5a is not running
istioinaction-worker2/inspect.json:            "Error": "",
istioinaction-worker2/images.log:Error response from daemon: Container 8838abd1202c757ad92140b2d59fe13a61372c8a3c3b6a9f5329ba51b4c78d5a is not running
istioinaction-worker2/kubernetes-version.txt:Error response from daemon: Container 8838abd1202c757ad92140b2d59fe13a61372c8a3c3b6a9f5329ba51b4c78d5a is not running
…/297015354 ❯ 
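
Since istioinaction-worker2 is reported as not running, it also seems worth checking why that node container exited; a small sketch of the checks I have in mind (plain docker commands, using the container name from the output above):

# check the state of the node container that died
docker ps -a --filter name=istioinaction-worker2

# show how it exited (status, exit code, OOM-killed flag) and its last output
docker inspect --format '{{.State.Status}} exit={{.State.ExitCode}} oom={{.State.OOMKilled}}' istioinaction-worker2
docker logs --tail 50 istioinaction-worker2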

I'll try to address this particular error and get back here with updates:

istioinaction-control-plane/journal.log:Oct 19 06:54:39 istioinaction-control-plane containerd[106]: time="2022-10-19T06:54:39.834481935Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="failed to create CRI service: failed to create cni conf monitor for default: failed to create fsnotify watcher: too many open files"
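
Since that message points at fsnotify/inotify exhaustion, my plan is to check and raise the inotify limits on the host; a minimal sketch of what I intend to try (the values and the file name 99-kind.conf are just examples, and that this is the actual culprit is still an assumption):

# check the current inotify limits on the host
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches

# raise them temporarily (example values)
sudo sysctl fs.inotify.max_user_instances=512
sudo sysctl fs.inotify.max_user_watches=524288

# persist the change across reboots (example drop-in file name)
echo 'fs.inotify.max_user_instances=512'  | sudo tee -a /etc/sysctl.d/99-kind.conf
echo 'fs.inotify.max_user_watches=524288' | sudo tee -a /etc/sysctl.d/99-kind.conf
sudo sysctl --system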

Anything else we need to know?:

Interestingly, this happens on only one of my two systems (a workstation and a laptop), both running the same up-to-date Linux distro, with the same versions of the tools (captured in the Environment section below).


Environment:

  • kind version: (use kind version): v0.16.0
  • Kubernetes version: (use kubectl version): client v1.25.3, server v1.25.2
  • Docker version: (use docker info): 20.10.12
  • OS (e.g. from /etc/os-release): Pop!_OS 22.04 LTS (nothing custom, all stock and up to date, running kernel 5.19.0-76051900-generic)

Thanks




Update

Tried again after increasing the maximum number of open files, at least to eliminate one class of issues.

  • Increased the limit (as per this article).
    • I had to restart the OS, since just restarting the terminal or even re-logging in didn't reflect the updated value.
    • After the restart and re-login, ulimit -n now returns 4096 (instead of 1024).
  • Removed both kind clusters again and recreated them.
    • As before, the first creation succeeded and the second one failed; see the details below, after a quick sketch of how I verified the new limit.
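
Before re-running, I also wanted to confirm that the new limit is visible where it matters, not just in my shell; a quick sketch of that check (assuming a stock Docker setup):

# limit as seen by my interactive shell
ulimit -n

# limit as seen inside a container started by the Docker daemon
# (the daemon has its own LimitNOFILE setting, so this can differ from the shell's value)
docker run --rm busybox sh -c 'ulimit -n'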
…/istioinaction_cluster ❯ kind create cluster --name istioinaction --config istioinaction_cluster.yaml --retain; kind export logs --name=istioinaction
Creating cluster "istioinaction" ...
 ✓ Ensuring node image (kindest/node:v1.25.2) 🖼
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✗ Starting control-plane 🕹️ 
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged istioinaction-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I1019 08:33:46.378882     138 initconfiguration.go:254] loading configuration from "/kind/kubeadm.conf"
W1019 08:33:46.380591     138 initconfiguration.go:331] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.25.2
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1019 08:33:46.385114     138 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1019 08:33:46.764714     138 certs.go:522] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [istioinaction-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.22.0.6 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1019 08:33:47.032475     138 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1019 08:33:47.115572     138 certs.go:522] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1019 08:33:47.229402     138 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1019 08:33:47.315526     138 certs.go:522] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [istioinaction-control-plane localhost] and IPs [172.22.0.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [istioinaction-control-plane localhost] and IPs [172.22.0.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1019 08:33:48.183185     138 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1019 08:33:48.270474     138 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1019 08:33:48.365006     138 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1019 08:33:48.609181     138 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1019 08:33:48.680612     138 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I1019 08:33:48.797441     138 kubelet.go:66] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1019 08:33:48.957010     138 manifests.go:99] [control-plane] getting StaticPodSpecs
I1019 08:33:48.957892     138 certs.go:522] validating certificate period for CA certificate
I1019 08:33:48.958200     138 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1019 08:33:48.958243     138 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I1019 08:33:48.958267     138 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1019 08:33:48.958291     138 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I1019 08:33:48.958316     138 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1019 08:33:48.963858     138 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I1019 08:33:48.963888     138 manifests.go:99] [control-plane] getting StaticPodSpecs
I1019 08:33:48.964318     138 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1019 08:33:48.964339     138 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I1019 08:33:48.964361     138 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1019 08:33:48.964365     138 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1019 08:33:48.964368     138 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1019 08:33:48.964372     138 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I1019 08:33:48.964376     138 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1019 08:33:48.965003     138 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I1019 08:33:48.965013     138 manifests.go:99] [control-plane] getting StaticPodSpecs
I1019 08:33:48.965185     138 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1019 08:33:48.965549     138 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1019 08:33:48.966115     138 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1019 08:33:48.966125     138 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I1019 08:33:48.966576     138 loader.go:374] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1019 08:33:48.967526     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:33:49.470031     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:49.971225     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:50.470105     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:50.970958     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:51.470700     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:51.971168     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:52.468094     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:33:52.969972     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:53.470912     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:53.970151     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:54.469538     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:54.971224     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:55.468324     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:33:55.971215     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:56.469818     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:56.970654     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:57.470793     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:33:57.970709     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:58.469536     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:33:58.971352     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:59.471438     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:33:59.970232     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:00.468352     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:00.971337     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:01.471119     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:01.971130     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:02.470937     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:02.970944     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:03.468697     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:03.970635     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:04.471243     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:04.970022     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:05.470170     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:05.971090     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:06.469021     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:06.971437     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:07.470188     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:07.969655     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:08.470585     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:08.971150     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:09.470948     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:09.968512     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:10.471091     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:10.970950     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:11.470772     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:11.970641     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:12.471121     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:12.970796     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:13.469657     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:13.970755     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:14.469647     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:14.971277     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:15.468415     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:15.971184     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:16.470100     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:16.971240     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:17.470118     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:17.971169     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:18.470146     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:18.971121     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:19.469232     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:19.970861     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:20.470663     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:20.971253     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:21.470264     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:21.971420     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:22.471166     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:22.970702     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:23.471066     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:23.969758     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:24.470674     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:24.969647     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:25.470281     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:25.971221     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:26.471081     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:26.969568     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:27.471201     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:27.970608     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:28.470529     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[kubelet-check] Initial timeout of 40s passed.
I1019 08:34:28.970462     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1019 08:34:29.470694     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:29.970146     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:30.471298     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:30.970661     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:31.468498     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:31.971122     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:32.471069     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:32.971823     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:33.469308     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:33.971161     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1019 08:34:34.469059     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:34.970983     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:35.470859     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:35.969190     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:36.471134     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:36.971332     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:37.471213     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:37.969178     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:38.470475     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:38.971128     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:39.471124     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:39.969997     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:40.470927     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:40.969726     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:41.469485     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:41.969559     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:42.471258     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:42.970661     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:43.471101     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:43.971038     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1019 08:34:44.469265     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:44.971304     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:45.469225     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:45.971083     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:46.471025     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:46.970943     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:47.470816     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:47.970701     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:48.469560     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:48.969713     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:49.471189     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:49.970883     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:50.470915     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:50.969179     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:51.470295     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:51.971203     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:52.470992     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:52.969357     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:53.470621     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:34:53.971002     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:54.471050     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:54.970680     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:55.469107     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:55.970938     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:56.469391     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:56.971271     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:57.471240     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:57.968990     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:58.470977     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:58.969104     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:59.470192     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:34:59.970201     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:35:00.471134     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:35:00.970031     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 08:35:01.471235     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:35:01.971361     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:35:02.471344     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:35:02.971093     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:35:03.471077     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 08:35:03.969673     138 round_trippers.go:553] GET https://istioinaction-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
... (content omitted) ...
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:154
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1594
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:154
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1594
Exporting logs for cluster "istioinaction" to:
/tmp/1990444523
…/istioinaction_cluster took 2m1s❯ 
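
(Side note: the log line kind is waiting for can also be checked directly in the retained node container's output; a minimal sketch, using one of the failed workers from the run above:)

```sh
# Scan the node container's console output for the line kind waits for
docker logs istioinaction-worker2 2>&1 | grep -E 'Reached target .*Multi-User System|detected cgroup v1'
```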
@dxps dxps added the kind/bug label Oct 19, 2022
@aojea
Copy link
Contributor

aojea commented Oct 19, 2022

Is it possible that your host is becoming constrained with both clusters running, causing the second one to be much slower and fail?

@dxps
Copy link
Author

dxps commented Oct 19, 2022

Nope, since it fails pretty fast, at least in the first failure case. The second one, after a restart, looks totally different.

The system is a laptop that has enough CPU (8 cores) and RAM (32 GB), with a fast (SSD) disk.
The first time (yesterday, when I reported the previous issue mentioned at the beginning), it happened on the workstation as well, which is even more powerful.

@BenTheElder
Copy link
Member

Your system may have lots of physical resources, but it can still become constrained on kernel-limit dimensions like the number of open files, the number of inotify watches, etc.

Is this the most minimal cluster size and set of actions that produces this result?
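
For reference, a quick way to check some of those kernel-level limits on the host; a minimal sketch, not exhaustive:

```sh
# Per-process open file limit for the current shell
ulimit -n
# inotify limits that multiple kind/Kubernetes clusters commonly exhaust
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances
```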

@dxps
Copy link
Author

dxps commented Oct 19, 2022

Generally speaking, sure. Different logical limits may be reached.

Not sure what's specific to this system; on the other system the max number of open files (aka ulimit -n) is still 1024, and it generally has more apps open.

To get to some concrete and actionable facts: the number-of-open-files case doesn't seem applicable anymore.
Searching for error entries (in a hopefully accurate enough manner), here are the results found in that export location:

Error level entries
…/1990444523 ❯ grep -Ri "level=error" *
istioinaction-control-plane/journal.log:Oct 19 08:33:45 istioinaction-control-plane containerd[105]: time="2022-10-19T08:33:45.617670890Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
istioinaction-control-plane/journal.log:Oct 19 08:33:45 istioinaction-control-plane containerd[105]: time="2022-10-19T08:33:45.618582558Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
istioinaction-control-plane/containerd.log:Oct 19 08:33:45 istioinaction-control-plane containerd[105]: time="2022-10-19T08:33:45.617670890Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
istioinaction-control-plane/containerd.log:Oct 19 08:33:45 istioinaction-control-plane containerd[105]: time="2022-10-19T08:33:45.618582558Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
istioinaction-worker/journal.log:Oct 19 08:33:45 istioinaction-worker containerd[105]: time="2022-10-19T08:33:45.578211772Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
istioinaction-worker/journal.log:Oct 19 08:33:45 istioinaction-worker containerd[105]: time="2022-10-19T08:33:45.579058794Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
istioinaction-worker/containerd.log:Oct 19 08:33:45 istioinaction-worker containerd[105]: time="2022-10-19T08:33:45.578211772Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
istioinaction-worker/containerd.log:Oct 19 08:33:45 istioinaction-worker containerd[105]: time="2022-10-19T08:33:45.579058794Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
istioinaction-worker2/journal.log:Oct 19 08:33:45 istioinaction-worker2 containerd[105]: time="2022-10-19T08:33:45.620740099Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
istioinaction-worker2/journal.log:Oct 19 08:33:45 istioinaction-worker2 containerd[105]: time="2022-10-19T08:33:45.621716545Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
istioinaction-worker2/containerd.log:Oct 19 08:33:45 istioinaction-worker2 containerd[105]: time="2022-10-19T08:33:45.620740099Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
istioinaction-worker2/containerd.log:Oct 19 08:33:45 istioinaction-worker2 containerd[105]: time="2022-10-19T08:33:45.621716545Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
…/1990444523 ❯

What else should I do to investigate this?

Meanwhile, ran these tests:


Test 1: Another cluster like the 1st one (2-worker-node cluster) - NOK

Creating another cluster like the 1st one (a 2-worker-node cluster, without any ingress) failed as well.
```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster

nodes:
- role: control-plane
- role: worker
- role: worker
```
Test 1 output
…/istioinaction_cluster❯ kind create cluster --name test-same-cluster --config 2workers_kind_cluster --retain
Creating cluster "test-same-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.25.2) 🖼
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✗ Starting control-plane 🕹️ 
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged test-same-cluster-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I1019 15:45:57.462473     137 initconfiguration.go:254] loading configuration from "/kind/kubeadm.conf"
W1019 15:45:57.464177     137 initconfiguration.go:331] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.25.2

... (content omitted) ...

I1019 15:45:59.739224     137 loader.go:374] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1019 15:45:59.740158     137 round_trippers.go:553] GET https://test-same-cluster-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I1019 15:46:00.243205     137 round_trippers.go:553] GET https://test-same-cluster-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
...
I1019 15:46:39.243528     137 round_trippers.go:553] GET https://test-same-cluster-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
[kubelet-check] Initial timeout of 40s passed.
I1019 15:46:39.742480     137 round_trippers.go:553] GET https://test-same-cluster-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1019 15:46:40.242046     137 round_trippers.go:553] GET https://test-same-cluster-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I1019 15:46:40.743596     137 round_trippers.go:553] GET https://test-same-cluster-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
...
I1019 15:47:54.742625     137 round_trippers.go:553] GET https://test-same-cluster-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
...
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1594
…/istioinaction_cluster took 2m❯

It seems that it's timing out, waiting for the control plane components to start.

docker ps shows all three containers running:

CONTAINER ID   NAMES                             IMAGE                  STATUS         PORTS
9dac2f8b3eea   test-same-cluster-worker          kindest/node:v1.25.2   Up 7 minutes   
eef7a36a6f59   test-same-cluster-control-plane   kindest/node:v1.25.2   Up 7 minutes   127.0.0.1:41941->6443/tcp
baaece333ca6   test-same-cluster-worker2         kindest/node:v1.25.2   Up 7 minutes   

But indeed, running ps inside it shows very few processes:

…/istioinaction_cluster ❯ docker exec -it test-same-cluster-control-plane ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 15:45 ?        00:00:03 /sbin/init
root          92       1  0 15:45 ?        00:00:02 /lib/systemd/systemd-journald
root         105       1  0 15:45 ?        00:00:05 /usr/local/bin/containerd
root       15697       0  0 15:57 pts/1    00:00:00 ps -ef
…/istioinaction_cluster ❯

compared with the 1st cluster (that was created just fine):

Processes of a fully functional control-plane container
…/istioinaction_cluster ❯ docker exec -it dxps-cluster-control-plane ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 08:28 ?        00:00:00 /sbin/init
root          92       1  0 08:28 ?        00:00:00 /lib/systemd/systemd-journal
root         105       1  0 08:28 ?        00:00:23 /usr/local/bin/containerd
root         339       1  0 08:28 ?        00:00:00 /usr/local/bin/containerd-sh
root         340       1  0 08:28 ?        00:00:00 /usr/local/bin/containerd-sh
root         381       1  0 08:28 ?        00:00:00 /usr/local/bin/containerd-sh
root         399       1  0 08:28 ?        00:00:00 /usr/local/bin/containerd-sh
65535        427     340  0 08:28 ?        00:00:00 /pause
65535        434     339  0 08:28 ?        00:00:00 /pause
65535        440     399  0 08:28 ?        00:00:00 /pause
65535        448     381  0 08:28 ?        00:00:00 /pause
root         520     399  0 08:28 ?        00:00:09 kube-scheduler --authenticat
root         584     381  0 08:28 ?        00:00:49 kube-controller-manager --al
root         585     339  0 08:28 ?        00:02:03 kube-apiserver --advertise-a
root         667     340  0 08:28 ?        00:01:06 etcd --advertise-client-urls
root         736       1  0 08:28 ?        00:01:06 /usr/bin/kubelet --bootstrap
root         850       1  0 08:29 ?        00:00:00 /usr/local/bin/containerd-sh
root         872       1  0 08:29 ?        00:00:00 /usr/local/bin/containerd-sh
65535        895     850  0 08:29 ?        00:00:00 /pause
65535        902     872  0 08:29 ?        00:00:00 /pause
root         943     850  0 08:29 ?        00:00:00 /usr/local/bin/kube-proxy --
root         980     872  0 08:29 ?        00:00:00 /bin/kindnetd
root        1248       1  0 08:29 ?        00:00:00 /usr/local/bin/containerd-sh
root        1249       1  0 08:29 ?        00:00:00 /usr/local/bin/containerd-sh
65535       1288    1248  0 08:29 ?        00:00:00 /pause
65535       1295    1249  0 08:29 ?        00:00:00 /pause
root        1359       1  0 08:29 ?        00:00:00 /usr/local/bin/containerd-sh
65535       1378    1359  0 08:29 ?        00:00:00 /pause
root        1429    1249  0 08:29 ?        00:00:07 /coredns -conf /etc/coredns/
root        1438    1248  0 08:29 ?        00:00:06 /coredns -conf /etc/coredns/
root        1513    1359  0 08:29 ?        00:00:01 local-path-provisioner --deb
root        2028       0  0 15:57 pts/1    00:00:00 ps -ef
…/istioinaction_cluster ❯
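
Given that the kubelet process never shows up on the failed control-plane node, a reasonable next step (echoing the hints in the kubeadm output above) is to inspect the kubelet unit inside the retained node; a minimal sketch, assuming the container names from the runs above:

```sh
# Check the kubelet unit inside the retained (failed) node container
docker exec -it test-same-cluster-control-plane systemctl status kubelet
docker exec -it test-same-cluster-control-plane journalctl -u kubelet --no-pager | tail -n 50
```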

Test 2: A simple cluster - OK

Again, I deleted the 2nd (and failed to be properly created) cluster; after that kind delete cluster --name ..., the containers disappeared correctly.

Creating a simple cluster (one single node, the control-plane one) using kind create cluster --name simple-cluster worked.


Test 3: A 1-worker-node cluster - NOK

Testing a smaller setup with two nodes: one control-plane and one worker (compared with Test 1, this time we have one worker node instead of two).

In this case, it fails later, timing out while waiting for the worker node to join.

Test 3 output
…/istioinaction_cluster ❯ kind create cluster --name one-worker-cluster --config 1worker_kind_cluster --retain
Creating cluster "one-worker-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.25.2) 🖼
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✗ Joining worker nodes 🚜 
ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged one-worker-cluster-worker kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1
Command Output: I1019 16:07:05.985642     137 join.go:416] [preflight] found NodeName empty; using OS hostname as NodeName
I1019 16:07:05.985672     137 joinconfiguration.go:76] loading configuration from "/kind/kubeadm.conf"

...

I1019 16:07:28.574547     137 kubelet.go:219] [kubelet-start] preserving the crisocket information for the node
I1019 16:07:28.574601     137 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "one-worker-cluster-worker" as an annotation
I1019 16:07:29.081678     137 round_trippers.go:553] GET https://one-worker-cluster-control-plane:6443/api/v1/nodes/one-worker-cluster-worker?timeout=10s 404 Not Found in 5 milliseconds
I1019 16:07:29.577307     137 round_trippers.go:553] GET https://one-worker-cluster-control-plane:6443/api/v1/nodes/one-worker-cluster-worker?timeout=10s 404 Not Found in 1 milliseconds

...

I1019 16:07:58.077944     137 round_trippers.go:553] GET https://one-worker-cluster-control-plane:6443/api/v1/nodes/one-worker-cluster-worker?timeout=10s 404 Not Found in 1 milliseconds
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1019 16:07:58.579523     137 round_trippers.go:553] GET https://one-worker-cluster-control-plane:6443/api/v1/nodes/one-worker-cluster-worker?timeout=10s 404 Not Found in 4 milliseconds

...

I1019 16:09:28.582278     137 round_trippers.go:553] GET https://one-worker-cluster-control-plane:6443/api/v1/nodes/one-worker-cluster-worker?timeout=10s 404 Not Found in 2 milliseconds
nodes "one-worker-cluster-worker" not found
error uploading crisocket
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runKubeletStartJoinPhase
	cmd/kubeadm/app/cmd/phases/join/kubelet.go:221

...

runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1594
…/istioinaction_cluster took 2m40s❯ 

As before, comparing the processes running in a functional worker node with this newer (but failed-to-join) one, there are differences.

Worker node process differences

…/istioinaction_cluster ❯ docker exec -it dxps-cluster-worker ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 08:28 ? 00:00:00 /sbin/init
root 92 1 0 08:28 ? 00:00:00 /lib/systemd/systemd-journal
root 105 1 0 08:28 ? 00:00:23 /usr/local/bin/containerd
root 262 1 0 08:29 ? 00:00:45 /usr/bin/kubelet --bootstrap
root 330 1 0 08:29 ? 00:00:00 /usr/local/bin/containerd-sh
root 358 1 0 08:29 ? 00:00:01 /usr/local/bin/containerd-sh
65535 378 330 0 08:29 ? 00:00:00 /pause
65535 381 358 0 08:29 ? 00:00:00 /pause
root 433 358 0 08:29 ? 00:00:00 /usr/local/bin/kube-proxy --
root 578 330 0 08:29 ? 00:00:01 /bin/kindnetd
root 1334 0 0 16:17 pts/1 00:00:00 ps -ef
…/istioinaction_cluster ❯
…/istioinaction_cluster ❯ docker exec -it one-worker-cluster-worker ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 16:06 ? 00:00:02 /sbin/init
root 92 1 0 16:06 ? 00:00:02 /lib/systemd/systemd-journal
root 105 1 0 16:06 ? 00:00:04 /usr/local/bin/containerd
root 13364 0 0 16:17 pts/1 00:00:00 ps -ef
…/istioinaction_cluster ❯


Hope it helps.

@BenTheElder
Copy link
Member

Are you able to share the rest of the cluster logs (kind export logs) when you see this error message?
GitHub will accept a zip or tar/tar.gz archive uploaded to a comment.

@dxps
Copy link
Author

dxps commented Oct 20, 2022

Absolutely, Ben! You guys are so "kind" to help me, I really want to use KinD instead of the alternatives, so yeah.
And indeed I was wondering if I could attach an archive of those logs to such a comment.
Apparently, yes, with a drag-n-drop. 🤦 That's cool!


I had to reproduce it again, since I did a clean-up last night. Interestingly, if I initially create a simple 3-worker-node cluster and then the classic 1-node cluster (kind create cluster --name default), this 2nd one is also created successfully.

Deleting this 2nd one and creating a 1-worker-node cluster fails as before (as presented in Test 3 above).

Relevant output
…/istioinaction_cluster ❯ kind create cluster --name oneworker --config 1worker_kind_cluster --retain ; kind export logs --name=oneworker
Creating cluster "oneworker" ...
 ✓ Ensuring node image (kindest/node:v1.25.2) 🖼
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✗ Joining worker nodes 🚜 
ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged oneworker-worker kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1
Command Output: I1020 17:45:53.106540     139 join.go:416] [preflight] found NodeName empty; using OS hostname as NodeName
I1020 17:45:53.106567     139 joinconfiguration.go:76] loading configuration from "/kind/kubeadm.conf"
I1020 17:45:53.107116     139 controlplaneprepare.go:220] [download-certs] Skipping certs download
I1020 17:45:53.107121     139 join.go:533] [preflight] Discovering cluster-info
I1020 17:45:53.107134     139 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "oneworker-control-plane:6443"
I1020 17:45:53.110783     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 3 milliseconds
I1020 17:45:53.110969     139 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I1020 17:45:59.045697     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 1 milliseconds
I1020 17:45:59.045849     139 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I1020 17:46:05.485438     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 1 milliseconds
I1020 17:46:05.486615     139 token.go:105] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "oneworker-control-plane:6443"
I1020 17:46:05.486624     139 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I1020 17:46:05.486634     139 join.go:547] [preflight] Fetching init configuration
I1020 17:46:05.486639     139 join.go:593] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I1020 17:46:05.490973     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 4 milliseconds
I1020 17:46:05.492500     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy?timeout=10s 200 OK in 0 milliseconds
I1020 17:46:05.493402     139 kubelet.go:74] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
I1020 17:46:05.494312     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config?timeout=10s 200 OK in 0 milliseconds
I1020 17:46:05.495577     139 interface.go:432] Looking for default routes with IPv4 addresses
I1020 17:46:05.495582     139 interface.go:437] Default route transits interface "eth0"
I1020 17:46:05.495643     139 interface.go:209] Interface eth0 is up
I1020 17:46:05.495679     139 interface.go:257] Interface "eth0" has 3 addresses :[172.22.0.6/16 fc00:f853:ccd:e793::6/64 fe80::42:acff:fe16:6/64].
I1020 17:46:05.495697     139 interface.go:224] Checking addr  172.22.0.6/16.
I1020 17:46:05.495702     139 interface.go:231] IP found 172.22.0.6
I1020 17:46:05.495709     139 interface.go:263] Found valid IPv4 address 172.22.0.6 for interface "eth0".
I1020 17:46:05.495712     139 interface.go:443] Found active IP 172.22.0.6 
I1020 17:46:05.499512     139 kubelet.go:120] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1020 17:46:05.499994     139 kubelet.go:135] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I1020 17:46:05.500269     139 loader.go:374] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf
I1020 17:46:05.500465     139 kubelet.go:156] [kubelet-start] Checking for an existing Node in the cluster with name "oneworker-worker" and status "Ready"
I1020 17:46:05.501613     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/nodes/oneworker-worker?timeout=10s 404 Not Found in 1 milliseconds
I1020 17:46:05.501775     139 kubelet.go:171] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I1020 17:46:10.658421     139 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1020 17:46:10.659462     139 cert_rotation.go:137] Starting client certificate rotation controller
I1020 17:46:10.659911     139 loader.go:374] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1020 17:46:10.660124     139 kubelet.go:219] [kubelet-start] preserving the crisocket information for the node
I1020 17:46:10.660142     139 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "oneworker-worker" as an annotation
I1020 17:46:11.164704     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/nodes/oneworker-worker?timeout=10s 404 Not Found in 4 milliseconds
I1020 17:46:11.662697     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/nodes/oneworker-worker?timeout=10s 404 Not Found in 2 milliseconds
I1020 17:46:12.166844     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/nodes/oneworker-worker?timeout=10s 404 Not Found in 4 milliseconds

...

I1020 17:46:45.163304     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/nodes/oneworker-worker?timeout=10s 404 Not Found in 1 milliseconds
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
I1020 17:46:45.665926     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/nodes/oneworker-worker?timeout=10s 404 Not Found in 4 milliseconds
I1020 17:46:46.167110     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/nodes/oneworker-worker?timeout=10s 404 Not Found in 4 milliseconds

...
I1020 17:48:10.662571     139 round_trippers.go:553] GET https://oneworker-control-plane:6443/api/v1/nodes/oneworker-worker?timeout=10s 404 Not Found in 0 milliseconds
nodes "oneworker-worker" not found
error uploading crisocket
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runKubeletStartJoinPhase
	cmd/kubeadm/app/cmd/phases/join/kubelet.go:221
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
	cmd/kubeadm/app/cmd/join.go:181
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1594
error execution phase kubelet-start
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
	cmd/kubeadm/app/cmd/join.go:181
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1594
Exporting logs for cluster "oneworker" to:
/tmp/2171614989
…/istioinaction_cluster took 2m33s❯

And here is the archive with the kind exported logs.

2171614989_oneworker_kind_logs.tar.gz

@BenTheElder
Copy link
Member

BenTheElder commented Oct 26, 2022

Sorry, this has been unfortunate timing for me. Looking at the logs now for the kubelet on the worker node:

Oct 20 17:48:10 oneworker-worker kubelet[2848]: E1020 17:48:10.765486 2848 kubelet.go:1380] "Failed to start cAdvisor" err="inotify_init: too many open files"

You're now hitting inotify limits, a variation on:
https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
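
For reference, the workaround on that page boils down to raising the host's inotify limits; a minimal sketch (values along the lines of what the page suggests, the file name is just an example):

```sh
# Raise the limits for the current boot
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512

# Persist them across reboots (file name is an example)
printf 'fs.inotify.max_user_watches = 524288\nfs.inotify.max_user_instances = 512\n' \
  | sudo tee /etc/sysctl.d/99-inotify.conf
sudo sysctl --system
```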

@dxps
Copy link
Author

dxps commented Oct 31, 2022

Thanks, Benjamin Elder! I appreciate the feedback!
I'll try to recheck that, although I had addressed that part before, and in the meantime I've started using k3d and have no issues at all.
But sure, I'll review it and do a test, at least so others can quickly find the solution to a similar case like this.

@lestaat
Copy link

lestaat commented Nov 7, 2022

Reporting the same issue on Mac M1 w/ DOCKER_DEFAULT_PLATFORM=linux/amd64

ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"

Stack Trace:
sigs.k8s.io/kind/pkg/errors.Errorf
	sigs.k8s.io/kind/pkg/errors/errors.go:41
sigs.k8s.io/kind/pkg/cluster/internal/providers/common.WaitUntilLogRegexpMatches
	sigs.k8s.io/kind/pkg/cluster/internal/providers/common/cgroups.go:84
sigs.k8s.io/kind/pkg/cluster/internal/providers/docker.createContainerWithWaitUntilSystemdReachesMultiUserSystem
	sigs.k8s.io/kind/pkg/cluster/internal/providers/docker/provision.go:407
sigs.k8s.io/kind/pkg/cluster/internal/providers/docker.planCreation.func2
	sigs.k8s.io/kind/pkg/cluster/internal/providers/docker/provision.go:115
sigs.k8s.io/kind/pkg/errors.UntilErrorConcurrent.func1
	sigs.k8s.io/kind/pkg/errors/concurrent.go:30
runtime.goexit
	runtime/asm_arm64.s:1172

@BenTheElder
Copy link
Member

BenTheElder commented Nov 7, 2022

Reporting the same issue on Mac M1 w/ DOCKER_DEFAULT_PLATFORM=linux/amd64

#2718

TLDR that's not supported, the platform needs to match to run Kubernetes. We need to determine the best option to handle this still.

@notjames
Copy link

notjames commented Dec 20, 2022

Also running into the same issue on Mac M1:

my command:

kind create cluster --config local-k8s-tests/bootstrap-k8s/kind.yaml --retain

the error:

at 14:22:23 ❯ kind create cluster --config local-k8s-tests/bootstrap-k8s/kind.yaml --retain #; kind export logs --name jimconn; kind delete cluster --name jimconn
Creating cluster "jimconn" ...
 ✓ Ensuring node image (kindest/node:v1.23.13) 🖼
 ✗ Preparing nodes 📦
ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"

my config:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: jimconn
featureGates: {}
networking:
  ipFamily: ipv4
nodes:
  - role: control-plane
    image: kindest/node:v1.23.13@sha256:e7968cda1b4ff790d5b0b5b0c29bda0404cdb825fd939fe50fd5accc43e3a730

Interesting items of note:
I updated my config to run a 1.23 cluster vs a 1.21 cluster. A 1.21 cluster comes up fine. 1.23 and 1.25 cluster images fail with the same error as noted above.

image was found here

kind version: v0.14.0 go1.18.2 darwin/arm64

kind-issue-2972-jimconn-log-export.tar.gz

@notjames
Copy link

notjames commented Dec 20, 2022

Ah geez. I just realized that I was using the amd64 image instead of the arm64 image, which is easy to do. I missed the drop-down menu in the docker link above. Selecting the correct version and updating the sha in my config solved my issue.

Rather than deleting this post, I'll leave it in case anyone else runs into the same issue I did. The proper link I should have used is here.

@BenTheElder
Copy link
Member

Summarizing:

@BenTheElder
Copy link
Member

NOTE: we publish the multi-arch digests in our release notes, so you can use those instead of the docker hub digests. Docker hub's UI only exposes single-arch digests, but there is a digest for the multi-arch manifest as well.
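
As a quick sanity check, one can also verify which architecture a node image actually is before creating the cluster; a small sketch, assuming the tag from the comment above:

```sh
# Architecture of the locally pulled image
docker image inspect kindest/node:v1.23.13 --format '{{.Architecture}}'

# Architectures available in the (multi-arch) manifest on the registry
# (docker manifest may require a recent Docker CLI)
docker manifest inspect kindest/node:v1.23.13 | grep -i architecture
```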

@BenTheElder BenTheElder self-assigned this Dec 20, 2022
@BenTheElder BenTheElder added the kind/support label and removed the kind/bug label Dec 20, 2022
@notjames
Copy link

Reporting the same issue on Mac M1 w/ DOCKER_DEFAULT_PLATFORM=linux/amd64

#2718

TLDR that's not supported, the platform needs to match to run Kubernetes. We need to determine the best option to handle this still.

FTR, mine wasn't the same as @dxps because I wasn't trying to use DOCKER_DEFAULT_PLATFORM. Mine was just user error.

@chrkuznos1
Copy link

chrkuznos1 commented Jul 4, 2024

OK, it's an old issue, but today I faced the same issue as @notjames. The solution was to run the kind create command as root (sudo ...), and the creation completed, but the cluster is now created under root, so I have to execute every command as root.

@BenTheElder
Copy link
Member

If you're trying to run as non-root, please see the docs: https://kind.sigs.k8s.io/docs/user/rootless/

Rootless containers are still a bit "fun", but kind mostly works if you take the additional steps outlined in the docs.
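
As a first sanity check before following those docs, it can help to confirm whether Docker is actually running rootless and on cgroup v2; a minimal sketch:

```sh
docker info --format '{{.SecurityOptions}}'  # should include "name=rootless" for rootless Docker
docker info --format '{{.CgroupVersion}}'    # rootless kind needs cgroup v2, i.e. "2"
```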
