
Issue with kubeadm init --config xxx.yaml #46015

Closed
1 task
xpi opened this issue Apr 25, 2024 · 6 comments
Labels
kind/support Categorizes issue or PR as a support question. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@xpi

xpi commented Apr 25, 2024

[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.500755534s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.000168714s

Unfortunately, an error has occurred:
context deadline exceeded

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

my containerd config:

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        BinaryName = ""
        CriuImagePath = ""
        CriuPath = ""
        CriuWorkPath = ""
        IoGid = 0
        IoUid = 0
        NoNewKeyring = false
        NoPivotRoot = false
        Root = ""
        ShimCgroup = ""
        SystemdCgroup = true

and kubeadm init config:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
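The classic cause of a kubelet that starts but never serves a healthy API is the kubelet and containerd disagreeing on the cgroup driver. Both configs above do declare the systemd driver, but as a sanity check they should be compared on the node. The sketch below works on sample copies rather than the live files (`/var/lib/kubelet/config.yaml` and `/etc/containerd/config.toml` are the usual default paths):

```shell
# Sketch: check that kubelet and containerd both declare the systemd
# cgroup driver. The sample files below stand in for
# /var/lib/kubelet/config.yaml and /etc/containerd/config.toml.
cat > /tmp/kubelet-config.yaml <<'EOF'
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
EOF

cat > /tmp/containerd-config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Both greps must succeed; if one fails, the two drivers disagree and the
# kubelet will start but pods will never come up healthy.
grep -q 'cgroupDriver: systemd' /tmp/kubelet-config.yaml &&
grep -q 'SystemdCgroup = true' /tmp/containerd-config.toml &&
echo "cgroup drivers agree"
```

On a live node, `containerd config dump | grep SystemdCgroup` shows the merged value containerd actually uses, which catches the case where the edit was made to a file containerd does not read.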


@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

SIG Docs takes a lead on issue triage for this website, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Apr 25, 2024
@xpi xpi changed the title Issue with k8s.io/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/ Issue with kubeadm init --config xxx.yaml Apr 25, 2024
@xpi
Author

xpi commented Apr 25, 2024

systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu 2024-04-25 15:20:21 CST; 7min ago
Docs: https://kubernetes.io/docs/
Main PID: 527382 (kubelet)
Tasks: 12 (limit: 4385)
Memory: 25.6M
CPU: 3.477s
CGroup: /system.slice/kubelet.service
└─527382 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime>

Apr 25 15:27:49 kube-master-node-1 kubelet[527382]: E0425 15:27:49.159911 527382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.Runt>
Apr 25 15:27:49 kube-master-node-1 kubelet[527382]: E0425 15:27:49.490327 527382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post "https://172.18.249.131:6443/api/v1/namesp>
Apr 25 15:27:49 kube-master-node-1 kubelet[527382]: E0425 15:27:49.612522 527382 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate >
Apr 25 15:27:51 kube-master-node-1 kubelet[527382]: E0425 15:27:51.546905 527382 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node "kube-mas>
Apr 25 15:27:53 kube-master-node-1 kubelet[527382]: W0425 15:27:53.850788 527382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.18.249.131:6443>
Apr 25 15:27:53 kube-master-node-1 kubelet[527382]: E0425 15:27:53.850875 527382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "h>
Apr 25 15:27:55 kube-master-node-1 kubelet[527382]: E0425 15:27:55.100405 527382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get "https://172.18.249.131:6443/apis/coordinatio>
Apr 25 15:27:55 kube-master-node-1 kubelet[527382]: I0425 15:27:55.324681 527382 kubelet_node_status.go:73] "Attempting to register node" node="kube-master-node-1"
Apr 25 15:27:55 kube-master-node-1 kubelet[527382]: E0425 15:27:55.325011 527382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post "https://172.18.249.131:6443/api/v1/no>
Apr 25 15:27:59 kube-master-node-1 kubelet[527382]: E0425 15:27:59.491518 527382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post "https://172.18.249.131:6443/api/v1/namesp>

@xpi
Author

xpi commented Apr 25, 2024

I had set `cgroupDriver: systemd` in /var/lib/kubelet/config.yaml

@neolit123
Member

for kubelet logs you can also use journalctl -xeu kubelet
but github is not the right place to get help.
please see https://github.com/kubernetes/kubernetes/blob/master/SUPPORT.md

/kind support
/close

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Apr 25, 2024
@k8s-ci-robot
Contributor

@neolit123: Closing this issue.

In response to this:

for kubelet logs you can also use journalctl -xeu kubelet
but github is not the right place to get help.
please see https://github.com/kubernetes/kubernetes/blob/master/SUPPORT.md

/kind support
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@xpi
Author

xpi commented May 7, 2024

here is the tail of `journalctl -xeu kubelet`:
May 07 14:01:53 kube-woker-node-1 kubelet[2149]: E0507 14:01:53.994153 2149 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node "kube-woker-node-1" not found"
May 07 14:01:54 kube-woker-node-1 kubelet[2149]: E0507 14:01:54.654737 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get "https://192.168.2.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kube-woker-node-1?timeout=10s\": dial tcp 192.168.2.233:6443: connect: connection refused" interval="7s"
May 07 14:01:54 kube-woker-node-1 kubelet[2149]: E0507 14:01:54.795181 2149 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image "registry.k8s.io/pause:3.9": failed to pull image "registry.k8s.io/pause:3.9": failed to pull and unpack image "registry.k8s.io/pause:3.9": failed to resolve reference "registry.k8s.io/pause:3.9": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 142.251.170.82:443: i/o timeout"
May 07 14:01:54 kube-woker-node-1 kubelet[2149]: E0507 14:01:54.795632 2149 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image "registry.k8s.io/pause:3.9": failed to pull image "registry.k8s.io/pause:3.9": failed to pull and unpack image "registry.k8s.io/pause:3.9": failed to resolve reference "registry.k8s.io/pause:3.9": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 142.251.170.82:443: i/o timeout" pod="kube-system/kube-scheduler-kube-woker-node-1"
May 07 14:01:54 kube-woker-node-1 kubelet[2149]: E0507 14:01:54.796150 2149 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image "registry.k8s.io/pause:3.9": failed to pull image "registry.k8s.io/pause:3.9": failed to pull and unpack image "registry.k8s.io/pause:3.9": failed to resolve reference "registry.k8s.io/pause:3.9": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 142.251.170.82:443: i/o timeout" pod="kube-system/kube-scheduler-kube-woker-node-1"
May 07 14:01:54 kube-woker-node-1 kubelet[2149]: E0507 14:01:54.796627 2149 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to "CreatePodSandbox" for "kube-scheduler-kube-woker-node-1_kube-system(0d15fb3f6bb718e308af07508dd11fd0)" with CreatePodSandboxError: "Failed to create sandbox for pod \"kube-scheduler-kube-woker-node-1_kube-system(0d15fb3f6bb718e308af07508dd11fd0)\": rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.9\": failed to pull image \"registry.k8s.io/pause:3.9\": failed to pull and unpack image \"registry.k8s.io/pause:3.9\": failed to resolve reference \"registry.k8s.io/pause:3.9\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\\\": dial tcp 142.251.170.82:443: i/o timeout"" pod="kube-system/kube-scheduler-kube-woker-node-1" podUID="0d15fb3f6bb718e308af07508dd11fd0"
May 07 14:01:55 kube-woker-node-1 kubelet[2149]: I0507 14:01:55.219469 2149 kubelet_node_status.go:73] "Attempting to register node" node="kube-woker-node-1"
May 07 14:01:55 kube-woker-node-1 kubelet[2149]: E0507 14:01:55.221811 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post "https://192.168.2.233:6443/api/v1/nodes\": dial tcp 192.168.2.233:6443: connect: connection refused" node="kube-woker-node-1"
May 07 14:01:58 kube-woker-node-1 kubelet[2149]: E0507 14:01:58.597793 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post "https://192.168.2.233:6443/api/v1/namespaces/default/events\": dial tcp 192.168.2.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-woker-node-1.17cd39512da5ff14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kube-woker-node-1,UID:kube-woker-node-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node kube-woker-node-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:kube-woker-node-1,},FirstTimestamp:2024-05-07 13:54:13.919014676 +0000 UTC m=+2.761196637,LastTimestamp:2024-05-07 13:54:13.919014676 +0000 UTC m=+2.761196637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kube-woker-node-1,}"
May 07 14:01:59 kube-woker-node-1 kubelet[2149]: E0507 14:01:59.521652 2149 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image "registry.k8s.io/pause:3.9": failed to pull image "registry.k8s.io/pause:3.9": failed to pull and unpack image "registry.k8s.io/pause:3.9": failed to resolve reference "registry.k8s.io/pause:3.9": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 142.251.170.82:443: i/o timeout"
May 07 14:01:59 kube-woker-node-1 kubelet[2149]: E0507 14:01:59.521988 2149 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image "registry.k8s.io/pause:3.9": failed to pull image "registry.k8s.io/pause:3.9": failed to pull and unpack image "registry.k8s.io/pause:3.9": failed to resolve reference "registry.k8s.io/pause:3.9": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 142.251.170.82:443: i/o timeout" pod="kube-system/etcd-kube-woker-node-1"
May 07 14:01:59 kube-woker-node-1 kubelet[2149]: E0507 14:01:59.522225 2149 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image "registry.k8s.io/pause:3.9": failed to pull image "registry.k8s.io/pause:3.9": failed to pull and unpack image "registry.k8s.io/pause:3.9": failed to resolve reference "registry.k8s.io/pause:3.9": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 142.251.170.82:443: i/o timeout" pod="kube-system/etcd-kube-woker-node-1"
May 07 14:01:59 kube-woker-node-1 kubelet[2149]: E0507 14:01:59.522583 2149 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to "CreatePodSandbox" for "etcd-kube-woker-node-1_kube-system(063f61239034c55105879a566c975931)" with CreatePodSandboxError: "Failed to create sandbox for pod \"etcd-kube-woker-node-1_kube-system(063f61239034c55105879a566c975931)\": rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.9\": failed to pull image \"registry.k8s.io/pause:3.9\": failed to pull and unpack image \"registry.k8s.io/pause:3.9\": failed to resolve reference \"registry.k8s.io/pause:3.9\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\\\": dial tcp 142.251.170.82:443: i/o timeout"" pod="kube-system/etcd-kube-woker-node-1" podUID="063f61239034c55105879a566c975931"
May 07 14:01:59 kube-woker-node-1 kubelet[2149]: W0507 14:01:59.601118 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.2.233:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkube-woker-node-1&limit=500&resourceVersion=0": dial tcp 192.168.2.233:6443: connect: connection refused
May 07 14:01:59 kube-woker-node-1 kubelet[2149]: E0507 14:01:59.601659 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.2.233:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkube-woker-node-1&limit=500&resourceVersion=0": dial tcp 192.168.2.233:6443: connect: connection refused
May 07 14:02:01 kube-woker-node-1 kubelet[2149]: E0507 14:02:01.657185 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get "https://192.168.2.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kube-woker-node-1?timeout=10s\": dial tcp 192.168.2.233:6443: connect: connection refused" interval="7s"
May 07 14:02:02 kube-woker-node-1 kubelet[2149]: I0507 14:02:02.228855 2149 kubelet_node_status.go:73] "Attempting to register node" node="kube-woker-node-1"
May 07 14:02:02 kube-woker-node-1 kubelet[2149]: E0507 14:02:02.230956 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post "https://192.168.2.233:6443/api/v1/nodes\": dial tcp 192.168.2.233:6443: connect: connection refused" node="kube-woker-node-1"
May 07 14:02:03 kube-woker-node-1 kubelet[2149]: E0507 14:02:03.995669 2149 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node "kube-woker-node-1" not found"
May 07 14:02:04 kube-woker-node-1 kubelet[2149]: E0507 14:02:04.520008 2149 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image "registry.k8s.io/pause:3.9": failed to pull image "registry.k8s.io/pause:3.9": failed to pull and unpack image "registry.k8s.io/pause:3.9": failed to resolve reference "registry.k8s.io/pause:3.9": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 142.251.170.82:443: i/o timeout"
May 07 14:02:04 kube-woker-node-1 kubelet[2149]: E0507 14:02:04.520396 2149 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image "registry.k8s.io/pause:3.9": failed to pull image "registry.k8s.io/pause:3.9": failed to pull and unpack image "registry.k8s.io/pause:3.9": failed to resolve reference "registry.k8s.io/pause:3.9": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 142.251.170.82:443: i/o timeout" pod="kube-system/kube-apiserver-kube-woker-node-1"
May 07 14:02:04 kube-woker-node-1 kubelet[2149]: E0507 14:02:04.520649 2149 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image "registry.k8s.io/pause:3.9": failed to pull image "registry.k8s.io/pause:3.9": failed to pull and unpack image "registry.k8s.io/pause:3.9": failed to resolve reference "registry.k8s.io/pause:3.9": failed to do request: Head "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 142.251.170.82:443: i/o timeout" pod="kube-system/kube-apiserver-kube-woker-node-1"
May 07 14:02:04 kube-woker-node-1 kubelet[2149]: E0507 14:02:04.521093 2149 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to "CreatePodSandbox" for "kube-apiserver-kube-woker-node-1_kube-system(cede7e8c1195e5e0a32715acd7fe0cfd)" with CreatePodSandboxError: "Failed to create sandbox for pod \"kube-apiserver-kube-woker-node-1_kube-system(cede7e8c1195e5e0a32715acd7fe0cfd)\": rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.9\": failed to pull image \"registry.k8s.io/pause:3.9\": failed to pull and unpack image \"registry.k8s.io/pause:3.9\": failed to resolve reference \"registry.k8s.io/pause:3.9\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\\\": dial tcp 142.251.170.82:443: i/o timeout"" pod="kube-system/kube-apiserver-kube-woker-node-1" podUID="cede7e8c1195e5e0a32715acd7fe0cfd"
May 07 14:02:08 kube-woker-node-1 kubelet[2149]: E0507 14:02:08.600378 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post "https://192.168.2.233:6443/api/v1/namespaces/default/events\": dial tcp 192.168.2.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-woker-node-1.17cd39512da5ff14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kube-woker-node-1,UID:kube-woker-node-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node kube-woker-node-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:kube-woker-node-1,},FirstTimestamp:2024-05-07 13:54:13.919014676 +0000 UTC m=+2.761196637,LastTimestamp:2024-05-07 13:54:13.919014676 +0000 UTC m=+2.761196637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kube-woker-node-1,}"
May 07 14:02:08 kube-woker-node-1 kubelet[2149]: E0507 14:02:08.660976 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get "https://192.168.2.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kube-woker-node-1?timeout=10s\": dial tcp 192.168.2.233:6443: connect: connection refused" interval="7s"
May 07 14:02:09 kube-woker-node-1 kubelet[2149]: I0507 14:02:09.237470 2149 kubelet_node_status.go:73] "Attempting to register node" node="kube-woker-node-1"
May 07 14:02:09 kube-woker-node-1 kubelet[2149]: E0507 14:02:09.239492 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post "https://192.168.2.233:6443/api/v1/nodes\": dial tcp 192.168.2.233:6443: connect: connection refused" node="kube-woker-node-1"
May 07 14:02:13 kube-woker-node-1 kubelet[2149]: E0507 14:02:13.298454 2149 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://192.168.2.233:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.2.233:6443: connect: connection refused
May 07 14:02:13 kube-woker-node-1 kubelet[2149]: E0507 14:02:13.996777 2149 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node "kube-woker-node-1" not found"
May 07 14:02:15 kube-woker-node-1 kubelet[2149]: E0507 14:02:15.664142 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get "https://192.168.2.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kube-woker-node-1?timeout=10s\": dial tcp 192.168.2.233:6443: connect: connection refused" interval="7s"
May 07 14:02:15 kube-woker-node-1 kubelet[2149]: W0507 14:02:15.933569 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.2.233:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.2.233:6443: connect: connection refused
May 07 14:02:15 kube-woker-node-1 kubelet[2149]: E0507 14:02:15.934293 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.2.233:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.2.233:6443: connect: connection refused
May 07 14:02:16 kube-woker-node-1 kubelet[2149]: I0507 14:02:16.252255 2149 kubelet_node_status.go:73] "Attempting to register node" node="kube-woker-node-1"
May 07 14:02:16 kube-woker-node-1 kubelet[2149]: E0507 14:02:16.256016 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post "https://192.168.2.233:6443/api/v1/nodes\": dial tcp 192.168.2.233:6443: connect: connection refused" node="kube-woker-node-1"
May 07 14:02:18 kube-woker-node-1 kubelet[2149]: E0507 14:02:18.601712 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post "https://192.168.2.233:6443/api/v1/namespaces/default/events\": dial tcp 192.168.2.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-woker-node-1.17cd39512da5ff14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kube-woker-node-1,UID:kube-woker-node-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node kube-woker-node-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:kube-woker-node-1,},FirstTimestamp:2024-05-07 13:54:13.919014676 +0000 UTC m=+2.761196637,LastTimestamp:2024-05-07 13:54:13.919014676 +0000 UTC m=+2.761196637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kube-woker-node-1,}"
May 07 14:02:22 kube-woker-node-1 kubelet[2149]: E0507 14:02:22.667497 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get "https://192.168.2.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kube-woker-node-1?timeout=10s\": dial tcp 192.168.2.233:6443: connect: connection refused" interval="7s"
May 07 14:02:23 kube-woker-node-1 kubelet[2149]: I0507 14:02:23.263908 2149 kubelet_node_status.go:73] "Attempting to register node" node="kube-woker-node-1"
May 07 14:02:23 kube-woker-node-1 kubelet[2149]: E0507 14:02:23.265866 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post "https://192.168.2.233:6443/api/v1/nodes\": dial tcp 192.168.2.233:6443: connect: connection refused" node="kube-woker-node-1"
May 07 14:02:23 kube-woker-node-1 kubelet[2149]: E0507 14:02:23.998309 2149 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node "kube-woker-node-1" not found"
May 07 14:02:28 kube-woker-node-1 kubelet[2149]: E0507 14:02:28.603289 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post "https://192.168.2.233:6443/api/v1/namespaces/default/events\": dial tcp 192.168.2.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-woker-node-1.17cd39512da5ff14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kube-woker-node-1,UID:kube-woker-node-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node kube-woker-node-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:kube-woker-node-1,},FirstTimestamp:2024-05-07 13:54:13.919014676 +0000 UTC m=+2.761196637,LastTimestamp:2024-05-07 13:54:13.919014676 +0000 UTC m=+2.761196637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kube-woker-node-1,}"
May 07 14:02:29 kube-woker-node-1 kubelet[2149]: E0507 14:02:29.670837 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get "https://192.168.2.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kube-woker-node-1?timeout=10s\": dial tcp 192.168.2.233:6443: connect: connection refused" interval="7s"
May 07 14:02:30 kube-woker-node-1 kubelet[2149]: I0507 14:02:30.273057 2149 kubelet_node_status.go:73] "Attempting to register node" node="kube-woker-node-1"
May 07 14:02:30 kube-woker-node-1 kubelet[2149]: E0507 14:02:30.275134 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post "https://192.168.2.233:6443/api/v1/nodes\": dial tcp 192.168.2.233:6443: connect: connection refused" node="kube-woker-node-1"
May 07 14:02:30 kube-woker-node-1 kubelet[2149]: W0507 14:02:30.722861 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://192.168.2.233:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.2.233:6443: connect: connection refused
May 07 14:02:30 kube-woker-node-1 kubelet[2149]: E0507 14:02:30.723388 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://192.168.2.233:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.2.233:6443: connect: connection refused
May 07 14:02:33 kube-woker-node-1 kubelet[2149]: W0507 14:02:33.595251 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.2.233:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.2.233:6443: connect: connection refused
May 07 14:02:33 kube-woker-node-1 kubelet[2149]: E0507 14:02:33.595786 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.2.233:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.2.233:6443: connect: connection refused
May 07 14:02:33 kube-woker-node-1 kubelet[2149]: E0507 14:02:33.998692 2149 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node "kube-woker-node-1" not found"
May 07 14:02:36 kube-woker-node-1 kubelet[2149]: E0507 14:02:36.673195 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get "https://192.168.2.233:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kube-woker-node-1?timeout=10s\": dial tcp 192.168.2.233:6443: connect: connection refused" interval="7s"
May 07 14:02:37 kube-woker-node-1 kubelet[2149]: I0507 14:02:37.281806 2149 kubelet_node_status.go:73] "Attempting to register node" node="kube-woker-node-1"
May 07 14:02:37 kube-woker-node-1 kubelet[2149]: E0507 14:02:37.283970 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post "https://192.168.2.233:6443/api/v1/nodes\": dial tcp 192.168.2.233:6443: connect: connection refused" node="kube-woker-node-1"
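The repeated `failed to pull image "registry.k8s.io/pause:3.9" ... i/o timeout` lines above point at the actual failure: the node cannot reach registry.k8s.io, so containerd cannot create any sandbox (pause) container and the static control-plane pods never start, which is why the API server check times out. One common workaround is to point containerd's CRI plugin at a sandbox image on a registry the node can reach, or to pre-pull the images with `kubeadm config images pull --image-repository <mirror>`. The sketch below edits a sample file rather than the real /etc/containerd/config.toml, and `MIRROR.example.com` is a placeholder registry, not a real one:

```shell
# Sketch: repoint containerd's sandbox (pause) image at a reachable
# mirror. MIRROR.example.com is a placeholder; substitute a registry
# your node can actually reach.
cat > /tmp/containerd-cri.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
EOF

sed -i 's#sandbox_image = ".*"#sandbox_image = "MIRROR.example.com/pause:3.9"#' /tmp/containerd-cri.toml
grep sandbox_image /tmp/containerd-cri.toml

# On a real node, make the same edit in /etc/containerd/config.toml, then:
#   systemctl restart containerd
#   kubeadm reset -f && kubeadm init --config xxx.yaml
```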
