
kubelet: add configuration and restart, node status becomes NotReady #85835

Closed
yq513 opened this issue Dec 3, 2019 · 10 comments
Labels
kind/support · lifecycle/rotten · sig/node

Comments


yq513 commented Dec 3, 2019

I added --volume-plugin-dir=/var/lib/kubelet/volume-plugins to kubelet.conf and restarted the kubelet.
The kubelet status changed:
[root@dk-node01 ssl]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-12-03 09:52:56 CST; 3s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 17449 (kubelet)
    Tasks: 13
   Memory: 19.3M
   CGroup: /system.slice/kubelet.service
           └─17449 /opt/kubernetes/bin/kubelet

Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.847868   17449 docker_service.go:260] Docker Info: &{ID:MOVL:Y2XF:QYR5:BLBC:2525:WU...rue] [Na
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.847998   17449 docker_service.go:273] Setting cgroupDriver to cgroupfs
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872495   17449 remote_runtime.go:59] parsed scheme: ""
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872519   17449 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872561   17449 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/...] }
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872594   17449 clientconn.go:577] ClientConn switching balancer to "pick_first"
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872647   17449 remote_image.go:50] parsed scheme: ""
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872662   17449 remote_image.go:50] scheme "" not registered, fallback to default scheme
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.872681   17449 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/...] }
Dec 03 09:52:56 dk-node01 kubelet[17449]: I1203 09:52:56.873057   17449 clientconn.go:577] ClientConn switching balancer to "pick_first"
The etcd cluster also changed:
[root@k8s-m ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-12-03 09:17:14 CST; 32min ago
 Main PID: 1402 (etcd)
    Tasks: 15
   Memory: 257.1M
   CGroup: /system.slice/etcd.service
           └─1402 /opt/kubernetes/bin/etcd

Dec 03 09:49:28 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:28 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:33 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:33 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:38 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:38 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:43 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:43 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:48 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
Dec 03 09:49:48 k8s-m etcd[1402]: health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused
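For reference, a minimal set of checks for the NotReady state, assuming kubectl is available on a working master; the node name dk-node01 is taken from the transcript above:

kubectl get nodes                              # confirm which nodes report NotReady
kubectl describe node dk-node01                # the Conditions section gives the reason (e.g. kubelet stopped posting node status)
journalctl -u kubelet --since "20 minutes ago" # kubelet log around the restart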

yq513 added the kind/bug label Dec 3, 2019
k8s-ci-robot added the needs-sig label Dec 3, 2019

yq513 commented Dec 3, 2019

Running kubectl get csr, I did not see the request.
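If the node's CSR does show up later, it can be inspected and approved manually; a sketch, where <csr-name> stands for whatever name kubectl prints:

kubectl get csr
kubectl describe csr <csr-name>
kubectl certificate approve <csr-name>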

@athenabot

/sig node

These SIGs are my best guesses for this issue. Please comment /remove-sig <name> if I am incorrect about one.

🤖 I am a bot run by vllry. 👩‍🔬

k8s-ci-robot added the sig/node label and removed the needs-sig label Dec 3, 2019

boluisa commented Dec 3, 2019

@yq513 are you intentionally avoiding the default directory "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"? If so, try increasing the verbosity of the kubelet and please share the output.
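A sketch of how the verbosity could be raised in this setup, assuming the kubelet flags really are passed through the kubelet.conf file mentioned above (a guess about this particular install):

# add --v=4 next to the other kubelet flags in kubelet.conf, then:
systemctl daemon-reload
systemctl restart kubelet
journalctl -u kubelet --since "5 minutes ago" > kubelet-verbose.log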


zouyee commented Dec 3, 2019

health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused


yq513 commented Dec 3, 2019

health check for peer 41d5efdbfb2297b2 could not connect: dial tcp 192.168.21.118:2380: connect: connection refused

Yes, this refused connection appears after restarting the kubelet. I don't know whether it is a bug or what I should be paying attention to.


yq513 commented Dec 3, 2019

@yq513 are you intentionally avoiding the default directory "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"? If so, try increasing the verbosity of the kubelet and please share the output.

That is not the point. The point is that after restarting the kubelet, etcd shows this error.
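One way to narrow this down is to check the etcd member at 192.168.21.118 directly; a sketch, assuming etcdctl v3 is installed and the TLS material lives under /opt/kubernetes/ssl (the certificate paths are guesses):

ETCDCTL_API=3 etcdctl --endpoints=https://192.168.21.118:2379 \
  --cacert=/opt/kubernetes/ssl/ca.pem \
  --cert=/opt/kubernetes/ssl/etcd.pem \
  --key=/opt/kubernetes/ssl/etcd-key.pem \
  endpoint health

# on 192.168.21.118 itself, confirm etcd is running and the peer port is listening:
systemctl status etcd
ss -ltnp | grep 2380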

liggitt added the kind/support label and removed the kind/bug label Dec 3, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Mar 2, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 1, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
