kubelet: add configuration and restart, node status becomes NotReady #85835
kubectl get csr
/sig node These SIGs are my best guesses for this issue. Please comment 🤖 I am a bot run by vllry. 👩🔬
@yq513 are you intentionally avoiding using the default directory?
health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Yes, this refused-connection error appears after restarting kubelet. I don't know whether it is a bug or what I should pay attention to.
That is not the main point. The point is that after restarting kubelet, etcd shows this error.
@fejta-bot: Closing this issue.
I added --volume-plugin-dir=/var/lib/kubelet/volume-plugins in kubelet.conf and restarted kubelet. The kubelet status changed:
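For reference, a flag such as --volume-plugin-dir is usually passed via the kubelet's systemd unit or an environment file rather than the kubeconfig-style kubelet.conf. A minimal sketch of a systemd drop-in; the file path and the KUBELET_EXTRA_ARGS variable name are assumptions, not something confirmed by this issue:

```ini
# /etc/systemd/system/kubelet.service.d/10-volume-plugin-dir.conf  (hypothetical path)
# The --volume-plugin-dir flag is real; whether your unit reads
# $KUBELET_EXTRA_ARGS depends on its ExecStart line.
[Service]
Environment="KUBELET_EXTRA_ARGS=--volume-plugin-dir=/var/lib/kubelet/volume-plugins"
```

This only takes effect if the unit's ExecStart references $KUBELET_EXTRA_ARGS; otherwise append the flag to the ExecStart line directly. Either way, run `systemctl daemon-reload` before `systemctl restart kubelet`.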
[root@dk-node01 ssl]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-12-03 09:52:56 CST; 3s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 17449 (kubelet)
    Tasks: 13
   Memory: 19.3M
   CGroup: /system.slice/kubelet.service
           └─17449 /opt/kubernetes/bin/kubelet

Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.847868   17449 docker_service.go:260] Docker Info: &{ID:MOVL:Y2XF:QYR5:BLBC:2525:WU...rue] [Na
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.847998   17449 docker_service.go:273] Setting cgroupDriver to cgroupfs
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872495   17449 remote_runtime.go:59] parsed scheme: ""
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872519   17449 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872561   17449 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/...]}
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872594   17449 clientconn.go:577] ClientConn switching balancer to "pick_first"
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872647   17449 remote_image.go:50] parsed scheme: ""
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872662   17449 remote_image.go:50] scheme "" not registered, fallback to default scheme
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872681   17449 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/...]}
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.873057   17449 clientconn.go:577] ClientConn switching balancer to "pick_first"
The etcd cluster status changed:
[root@k8s-m ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-12-03 09:17:14 CST; 32min ago
 Main PID: 1402 (etcd)
    Tasks: 15
   Memory: 257.1M
   CGroup: /system.slice/etcd.service
           └─1402 /opt/kubernetes/bin/etcd

Dec 03 09:49:28 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:28 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:33 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:33 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:38 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:38 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:43 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:43 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:48 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:48 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
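The repeated "connection refused" on port 2380 means etcd on 192.168.21.118 is not accepting peer connections, which is a separate problem from the kubelet flag change. A diagnostic sketch to run against the cluster (the certificate paths are assumptions; substitute the ones your etcd nodes actually use):

```shell
# Check member health from a working etcd node (etcdctl v3 API).
export ETCDCTL_API=3
etcdctl --endpoints=https://192.168.21.118:2379 \
  --cacert=/opt/kubernetes/ssl/ca.pem \
  --cert=/opt/kubernetes/ssl/etcd.pem \
  --key=/opt/kubernetes/ssl/etcd-key.pem \
  endpoint health

# On 192.168.21.118 itself: is etcd running, and is the peer port listening?
systemctl status etcd
ss -tlnp | grep 2380
```

If etcd on that node is down or bound to the wrong address, its peers will log exactly this health-check failure every probe interval, independent of anything kubelet does.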