1.7.0-beta.2 kubelet does not restart containers when /etc/kubernetes/manifests change #48219

Closed
asac opened this issue Jun 28, 2017 · 44 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@asac

asac commented Jun 28, 2017

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
Maybe a feature request, most likely a dupe (I found a few similar issues while reading, some of them closed), but since I am on an RC I'd better mention it...

I upgraded from 1.7.0-beta.2 to rc.1 on my ARM master node by:

  • editing /etc/kubernetes/manifests/kube-apiserver.yaml
  • observing the apiserver disappear from the process list
  • observing the apiserver never coming back
  • I then manually pulled the Docker images referenced in the manifest and restarted kubelet to recover (see the sketch below)
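
Roughly, the manual workaround looked like this (the image name and tag below are illustrative assumptions, not copied verbatim from the manifest):

# list the images referenced by the static pod manifests
grep 'image:' /etc/kubernetes/manifests/*.yaml
# pre-pull the image referenced in the edited manifest (tag assumed)
docker pull gcr.io/google_containers/kube-apiserver-arm:v1.7.0-rc.1
# restart kubelet so it re-reads the manifests and recreates the static pods
systemctl restart kubelet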

What you expected to happen:

  • kubelet should ensure that whatever is in /etc/kubernetes/manifests is properly running and restart pods as needed on upgrade, without manual intervention.

How to reproduce it (as minimally and precisely as possible):

  • edit a manifest (see the reproduction sketch below)
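
A reproduction sketch on a kubeadm-provisioned master (the image tag below is an illustrative assumption):

# pre-pull the target image so pull time is not a factor
docker pull gcr.io/google_containers/kube-apiserver-arm:v1.7.0-rc.1
# bump the image tag in the static pod manifest
vi /etc/kubernetes/manifests/kube-apiserver.yaml
# watch whether kubelet brings the apiserver container back
watch 'docker ps | grep kube-apiserver'
# tail kubelet output while it reconciles
tail -f /var/log/syslog | grep kubelet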

Environment:

  • Kubernetes version (use kubectl version):

kubectl version

Client Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.0-beta.2", GitCommit:"ceab7f7a6753c20d3be75463b17402fdcea856ba", GitTreeState:"clean", BuildDate:"2017-06-15T17:12:53Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.0-rc.1", GitCommit:"6b9ded1649cfb512d4e88570c738aca9f8265639", GitTreeState:"clean", BuildDate:"2017-06-24T05:30:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"}

  • Cloud provider or hardware configuration:

self-hosted / scaleway

  • OS (e.g. from /etc/os-release):

cat /etc/os-release

NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

  • Kernel (e.g. uname -a):

uname -a
Linux 8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com 4.9.20-std-1 #1 SMP Wed Apr 5 15:38:34 UTC 2017 armv7l armv7l armv7l GNU/Linux

  • Install tools:

kubeadm during alphas...

  • Others:
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 28, 2017
@k8s-github-robot

@asac There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
e.g., @kubernetes/sig-api-machinery-* for API Machinery
(2) specifying the label manually: /sig <label>
e.g., /sig scalability for sig/scalability

Note: method (1) will trigger a notification to the team. You can find the team list here and label list here

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 28, 2017
@xiangpengzhao
Contributor

/sig node

@k8s-ci-robot k8s-ci-robot added the sig/node Categorizes an issue or PR as relevant to SIG Node. label Jun 28, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 28, 2017
@luxas
Member

luxas commented Jun 29, 2017

cc @kubernetes/sig-node-bugs PTAL

@asac Did you have the rc.1 image prepulled? Otherwise kubelet probably started pulling the image, which can take some time.
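
A quick way to check is something like the following (sketch; the arm image name is assumed from the other control-plane images in the logs):

docker images | grep kube-apiserver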

@asac
Author

asac commented Jun 29, 2017 via email

@luxas
Member

luxas commented Jun 29, 2017

note that I am editing with vi, so it creates the .swp files...

That might also be a problem...
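
As a sketch, one way to avoid leaving editor temp files in the watched directory (the vim settings are assumptions about the editor, not something kubelet requires):

# edit a copy outside the watched directory, then rename it into place;
# a rename within the same filesystem is atomic, so kubelet only ever sees complete files
cp /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/kube-apiserver.yaml.edit
vi /etc/kubernetes/kube-apiserver.yaml.edit
mv /etc/kubernetes/kube-apiserver.yaml.edit /etc/kubernetes/manifests/kube-apiserver.yaml

# or keep vim's swap/backup/write-test files out of /etc/kubernetes/manifests
printf 'set directory=/tmp//\nset backupdir=/tmp//\nset backupcopy=yes\n' >> ~/.vimrc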

@xiangpengzhao
Contributor

I recall that the .swp issue has been fixed.

@xiangpengzhao
Contributor

The issue with filenames starting with a dot was fixed in #45111. xref: #44450 #40331 #40452

@xiangpengzhao
Contributor

Are there any related kubelet logs?

@yujuhong
Contributor

I can't reproduce this. @asac could you post the relevant kubelet log?

@asac
Author

asac commented Jun 29, 2017 via email

@asac
Author

asac commented Jun 30, 2017

OK, I tailed syslog right before I edited the manifest to change the apiserver from rc.1 to the final 1.7.0...

Again the apiserver didn't come back (I had pulled the Docker image beforehand). Here is the relevant part of the syslog:

Jun 30 14:31:59 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:31:59.860279   22024 file_linux.go:114] can't process config file "/etc/kubernetes/manifests/4913": open /etc/kubernetes/manifests/4913: no such file or directory
Jun 30 14:31:59 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:31:59.984021   22024 kubelet.go:1596] Deleting mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(ea72a477-5c3e-11e7-8a82-0007cb03319c)" because it is outdated
Jun 30 14:32:00 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:00.049981   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: read tcp 10.1.0.105:35468->51.15.141.192:6443: read: connection reset by peer
Jun 30 14:32:00 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:00.052210   22024 status_manager.go:431] Failed to get status for pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(8d61b24b1dfea098cbc44622f505dc2d)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: read tcp 10.1.0.105:35468->51.15.141.192:6443: read: connection reset by peer
Jun 30 14:32:00 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:00.069375   22024 kubelet.go:1607] Failed creating a mirror pod for "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(1404487b47bc19033239475477d91887)": Post https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:00 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:00.075015   22024 reflector.go:304] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to watch *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=3727298&timeoutSeconds=303&watch=true: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:00 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:00.105121   22024 reflector.go:323] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: watch of *v1.Node ended with: very short watch
Jun 30 14:32:00 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:00.112270   22024 reflector.go:304] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=3621644&timeoutSeconds=457&watch=true: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:00 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:00.113274   22024 status_manager.go:431] Failed to get status for pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(1404487b47bc19033239475477d91887)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:00 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:00.737607   22024 status_manager.go:431] Failed to get status for pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(8d61b24b1dfea098cbc44622f505dc2d)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:00 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:00.796482   22024 event.go:209] Unable to write event: 'Post https://51.15.141.192:6443/api/v1/namespaces/kube-system/events: dial tcp 51.15.141.192:6443: getsockopt: connection refused' (may retry after sleeping)
Jun 30 14:32:00 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:00.827278   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:01 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:01.079092   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:01 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:01.114007   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:01 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:01.129777   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:01 8bc81f5a-d92a-428f-b71c-0f31b3ce958f cron[4364]: (root) RELOAD (crontabs/root)
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:02.034580   22024 reconciler.go:186] operationExecutor.UnmountVolume started for volume "certs" (UniqueName: "kubernetes.io/host-path/8d61b24b1dfea098cbc44622f505dc2d-certs") pod "8d61b24b1dfea098cbc44622f505dc2d" (UID: "8d61b24b1dfea098cbc44622f505dc2d")
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:02.034870   22024 reconciler.go:186] operationExecutor.UnmountVolume started for volume "k8s" (UniqueName: "kubernetes.io/host-path/8d61b24b1dfea098cbc44622f505dc2d-k8s") pod "8d61b24b1dfea098cbc44622f505dc2d" (UID: "8d61b24b1dfea098cbc44622f505dc2d")
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:02.034951   22024 operation_generator.go:523] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d61b24b1dfea098cbc44622f505dc2d-certs" (OuterVolumeSpecName: "certs") pod "8d61b24b1dfea098cbc44622f505dc2d" (UID: "8d61b24b1dfea098cbc44622f505dc2d"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:02.035167   22024 operation_generator.go:523] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d61b24b1dfea098cbc44622f505dc2d-k8s" (OuterVolumeSpecName: "k8s") pod "8d61b24b1dfea098cbc44622f505dc2d" (UID: "8d61b24b1dfea098cbc44622f505dc2d"). InnerVolumeSpecName "k8s". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:02.035213   22024 reconciler.go:186] operationExecutor.UnmountVolume started for volume "pki" (UniqueName: "kubernetes.io/host-path/8d61b24b1dfea098cbc44622f505dc2d-pki") pod "8d61b24b1dfea098cbc44622f505dc2d" (UID: "8d61b24b1dfea098cbc44622f505dc2d")
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:02.035316   22024 operation_generator.go:523] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d61b24b1dfea098cbc44622f505dc2d-pki" (OuterVolumeSpecName: "pki") pod "8d61b24b1dfea098cbc44622f505dc2d" (UID: "8d61b24b1dfea098cbc44622f505dc2d"). InnerVolumeSpecName "pki". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:02.035624   22024 reconciler.go:290] Volume detached for volume "certs" (UniqueName: "kubernetes.io/host-path/8d61b24b1dfea098cbc44622f505dc2d-certs") on node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com" DevicePath ""
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:02.036821   22024 reconciler.go:290] Volume detached for volume "k8s" (UniqueName: "kubernetes.io/host-path/8d61b24b1dfea098cbc44622f505dc2d-k8s") on node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com" DevicePath ""
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:02.082900   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:02.127350   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:02.133300   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:02.137770   22024 reconciler.go:290] Volume detached for volume "pki" (UniqueName: "kubernetes.io/host-path/8d61b24b1dfea098cbc44622f505dc2d-pki") on node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com" DevicePath ""
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:02.231636   22024 pod_container_deletor.go:77] Container "467280ec894f9c0eb907a32b01cde16996eac6c60d0ae67c0518aeb1b2db3112" not found in pod's containers
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:02.409391   22024 event.go:209] Unable to write event: 'Post https://51.15.141.192:6443/api/v1/namespaces/kube-system/events: dial tcp 51.15.141.192:6443: getsockopt: connection refused' (may retry after sleeping)
Jun 30 14:32:02 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:02.813257   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:03 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:03.086627   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:03 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:03.135028   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:03 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:03.141915   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:03 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:03.557696   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:03 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:03.561509   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:03 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:03.564344   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:03 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:03.567445   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:03 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:03.571679   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:03 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:03.571795   22024 kubelet_node_status.go:349] Unable to update node status: update node status exceeds retry count
Jun 30 14:32:04 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:04.093954   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:04 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:04.143897   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:04 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:04.147650   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:04 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:04.776132   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:05 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:05.098952   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:05 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:05.153851   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:05 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:05.154014   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:06 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:06.104531   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:06 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:06.157607   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:06 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:06.158742   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:06 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:06.821076   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:07 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:07.107941   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:07 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:07.162100   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:07 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:07.167742   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:08 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:08.110977   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:08 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:08.168934   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:08 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:08.171237   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:08 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:08.819747   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:08 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:08.825827   22024 status_manager.go:431] Failed to get status for pod "kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(27c7504ae7363c64fa21e35576bf2e40)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:09 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:09.115325   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:09 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:09.128677   22024 kuberuntime_manager.go:457] Container {Name:kube-controller-manager Image:gcr.io/google_containers/kube-controller-manager-arm:v1.7.0-rc.1 Command:[kube-controller-manager --service-account-private-key-file=/etc/kubernetes/pki/sa.key --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --address=127.0.0.1 --leader-elect=true --use-service-account-credentials=true --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --insecure-experimental-approve-all-kubelet-csrs-for-group=system:bootstrappers --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --root-ca-file=/etc/kubernetes/pki/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI}]} VolumeMounts:[{Name:k8s ReadOnly:true MountPath:/etc/kubernetes SubPath:} {Name:certs ReadOnly:false MountPath:/etc/ssl/certs SubPath:} {Name:pki ReadOnly:false MountPath:/etc/pki SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jun 30 14:32:09 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:09.132131   22024 kuberuntime_manager.go:741] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(27c7504ae7363c64fa21e35576bf2e40)"
Jun 30 14:32:09 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:09.152863   22024 kubelet_pods.go:264] hostname for pod:"kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com" was longer than 63. Truncated hostname to :"kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pu"
Jun 30 14:32:09 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:09.177087   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:09 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:09.178710   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:09 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kernel: [232844.981573] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:10.120673   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f rsyslogd-2007: action 'action 10' suspended, next retry is Fri Jun 30 14:33:40 2017 [v8.16.0 try http://www.rsyslog.com/e/2007 ]
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:10.188028   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:10.188232   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:10.424094   22024 status_manager.go:431] Failed to get status for pod "kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(8bbda1f99cae98feb888259002d1fa92)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:10.720766   22024 kuberuntime_manager.go:457] Container {Name:kube-scheduler Image:gcr.io/google_containers/kube-scheduler-arm:v1.7.0-rc.1 Command:[kube-scheduler --address=127.0.0.1 --leader-elect=true --kubeconfig=/etc/kubernetes/scheduler.conf] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:k8s ReadOnly:true MountPath:/etc/kubernetes SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10251,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: I0630 14:32:10.721299   22024 kuberuntime_manager.go:741] checking backoff for container "kube-scheduler" in pod "kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(8bbda1f99cae98feb888259002d1fa92)"
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:10.725091   22024 status_manager.go:431] Failed to get status for pod "kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(8bbda1f99cae98feb888259002d1fa92)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kernel: [232845.845019] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:10.729385   22024 status_manager.go:431] Failed to get status for pod "kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(27c7504ae7363c64fa21e35576bf2e40)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:10.737698   22024 kubelet_pods.go:264] hostname for pod:"kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com" was longer than 63. Truncated hostname to :"kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.s"
Jun 30 14:32:10 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:10.862668   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:11 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:11.124601   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:11 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:11.191159   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:11 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:11.196869   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:11 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kernel: [232846.599989] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
Jun 30 14:32:11 8bc81f5a-d92a-428f-b71c-0f31b3ce958f systemd[1]: dev-disk-by\x2duuid-d59357ca\x2db645\x2d4932\x2db4ad\x2dd099916220b6.device: Dev dev-disk-by\x2duuid-d59357ca\x2db645\x2d4932\x2db4ad\x2dd099916220b6.device appeared twice with different sysfs paths /sys/devices/virtual/block/dm-3 and /sys/devices/virtual/block/dm-8
Jun 30 14:32:11 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kernel: [232846.946648] EXT4-fs (dm-8): mounted filesystem with ordered data mode. Opts: (null)
Jun 30 14:32:12 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:12.127804   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:12 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:12.195288   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:12 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:12.201436   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:12 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:12.416342   22024 event.go:209] Unable to write event: 'Post https://51.15.141.192:6443/api/v1/namespaces/kube-system/events: dial tcp 51.15.141.192:6443: getsockopt: connection refused' (may retry after sleeping)
Jun 30 14:32:12 8bc81f5a-d92a-428f-b71c-0f31b3ce958f systemd[1]: dev-disk-by\x2duuid-d59357ca\x2db645\x2d4932\x2db4ad\x2dd099916220b6.device: Dev dev-disk-by\x2duuid-d59357ca\x2db645\x2d4932\x2db4ad\x2dd099916220b6.device appeared twice with different sysfs paths /sys/devices/virtual/block/dm-3 and /sys/devices/virtual/block/dm-8
Jun 30 14:32:12 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kernel: [232847.837912] EXT4-fs (dm-8): mounted filesystem with ordered data mode. Opts: (null)
Jun 30 14:32:12 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:12.808958   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:13.131613   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:13.178330   22024 status_manager.go:431] Failed to get status for pod "kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(27c7504ae7363c64fa21e35576bf2e40)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:13.199170   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:13.205425   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f systemd[1]: dev-disk-by\x2duuid-d59357ca\x2db645\x2d4932\x2db4ad\x2dd099916220b6.device: Dev dev-disk-by\x2duuid-d59357ca\x2db645\x2d4932\x2db4ad\x2dd099916220b6.device appeared twice with different sysfs paths /sys/devices/virtual/block/dm-3 and /sys/devices/virtual/block/dm-8
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kernel: [232848.514723] EXT4-fs (dm-8): mounted filesystem with ordered data mode. Opts: (null)
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:13.575486   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:13.577989   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:13.580462   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:13.582727   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:13.585778   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:13 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:13.585942   22024 kubelet_node_status.go:349] Unable to update node status: update node status exceeds retry count
Jun 30 14:32:14 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:14.136601   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:14 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:14.205392   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:14 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:14.211201   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:14 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:14.353835   22024 status_manager.go:431] Failed to get status for pod "kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(8bbda1f99cae98feb888259002d1fa92)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:14 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:14.880223   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:15 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:15.143071   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:15 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:15.214974   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:15 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:15.215689   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:16 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:16.147603   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:16 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:16.219561   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:16 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:16.220272   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:16 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:16.784380   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:17 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:17.151182   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:17 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:17.224266   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:17 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:17.229892   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:18 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:18.154430   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:18 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:18.228495   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:18 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:18.233642   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:18 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:18.826086   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:19 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:19.158907   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:19 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:19.231775   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:19 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:19.238140   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:20 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:20.162252   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:20 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:20.235508   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:20 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:20.242341   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:20 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:20.723781   22024 status_manager.go:431] Failed to get status for pod "kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(8bbda1f99cae98feb888259002d1fa92)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:20 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: W0630 14:32:20.726488   22024 status_manager.go:431] Failed to get status for pod "kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(27c7504ae7363c64fa21e35576bf2e40)": Get https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:20 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:20.775107   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:21 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:21.166998   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:21 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:21.238371   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:21 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:21.245440   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:22 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:22.170763   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:22 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:22.242445   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:22 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:22.249804   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:22 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:22.421132   22024 event.go:209] Unable to write event: 'Post https://51.15.141.192:6443/api/v1/namespaces/kube-system/events: dial tcp 51.15.141.192:6443: getsockopt: connection refused' (may retry after sleeping)
Jun 30 14:32:22 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:22.817168   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:23 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:23.173727   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:23 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:23.245234   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:23 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:23.252441   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:23 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:23.590564   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:23 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:23.594824   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:23 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:23.597190   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:23 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:23.599200   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:23 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:23.601556   22024 kubelet_node_status.go:357] Error updating node status, will retry: error getting node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com": Get https://51.15.141.192:6443/api/v1/nodes/8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:23 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:23.601677   22024 kubelet_node_status.go:349] Unable to update node status: update node status exceeds retry count
Jun 30 14:32:24 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:24.177181   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:24 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:24.249268   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:24 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:24.260798   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:24 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:24.828268   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:25 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:25.182167   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:25 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:25.254183   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:25 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:25.267678   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:26 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:26.187626   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:26 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:26.257357   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:26 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:26.273605   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:26 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:26.813504   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:27 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:27.192597   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:27 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:27.260554   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:27 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:27.278077   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:28 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:28.196274   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:28 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:28.266256   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:28 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:28.282025   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:28 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:28.809333   22024 mirror_client.go:88] Failed deleting a mirror pod "kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system": Delete https://51.15.141.192:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:29 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:29.204185   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:29 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:29.272105   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://51.15.141.192:6443/api/v1/pods?fieldSelector=spec.nodeName%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:29 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:29.292232   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://51.15.141.192:6443/api/v1/nodes?fieldSelector=metadata.name%3D8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com&resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:30 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:30.209662   22024 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://51.15.141.192:6443/api/v1/services?resourceVersion=0: dial tcp 51.15.141.192:6443: getsockopt: connection refused
Jun 30 14:32:30 8bc81f5a-d92a-428f-b71c-0f31b3ce958f dockerd[6312]: time="2017-06-30T14:32:30.229070201Z" level=error msg="Handler for POST /v1.27/containers/467280ec894f9c0eb907a32b01cde16996eac6c60d0ae67c0518aeb1b2db3112/stop returned error: Container 467280ec894f9c0eb907a32b01cde16996eac6c60d0ae67c0518aeb1b2db3112 is already stopped"
Jun 30 14:32:30 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:32:30.2

@asac
Author

asac commented Jun 30, 2017

Attaching a long log that covers 4 minutes in case you want more...
longlog.txt

@asac
Author

asac commented Jun 30, 2017

Then stopping and starting kubelet makes it start the API server... attaching a log of that start. While the apiserver is not fully up at the time I stop the log, it is already running...

restartlog.txt

@asac
Author

asac commented Jun 30, 2017

Here are the last few lines of that log, where the kubelet finally stops complaining that the API server is not reachable...
restartlog-end.txt

@xiangpengzhao
Contributor

Notice that there is a line in the log:

Jun 30 14:31:59 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[22024]: E0630 14:31:59.860279   22024 file_linux.go:114] can't process config file "/etc/kubernetes/manifests/4913": open /etc/kubernetes/manifests/4913: no such file or directory

What's the 4913 manifest file?

@asac
Author

asac commented Jun 30, 2017 via email

@xiangpengzhao
Contributor

Yeah, seems like. That's odd.

@yujuhong any ideas?

@yujuhong
Contributor

@yujuhong any ideas?

Nope.

Some suggestions to help diagnose the issue:

  1. Try to reproduce this again, and check your /etc/kubernetes/manifests/ directory to see what files are present.
  2. Try avoiding editing in place to see if that resolves the problem. Copy the manifest file to some other place, modify it, and copy it back to /etc/kubernetes/manifests (a sketch of this workflow follows below).

BTW, it'd be better to include the logs prior to the "can't process config file" line.
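
A minimal sketch of suggestion 2, assuming the default kubeadm manifest path used in this thread:

```bash
# Edit a copy outside the watched directory, then put it back in one step,
# so the kubelet file watcher only ever sees the final write.
cp /etc/kubernetes/manifests/kube-apiserver.yaml /root/kube-apiserver.yaml
vim /root/kube-apiserver.yaml   # vim's probe/backup files land in /root, not in the manifest dir
cp /root/kube-apiserver.yaml /etc/kubernetes/manifests/kube-apiserver.yaml
```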

@asac
Author

asac commented Jul 2, 2017

Another log from just the event we care about:

Managed to reproduce by just opening the kube-scheduler manifest in vim and saving it without changes. The log follows (and kubelet gives up - it does not restart the scheduler until I restart kubelet).

Also, I could validate that if I just copy the edited YAML file over to manifests, kubelet indeed restarts the service correctly.

So it seems we are most likely looking at a bug where random files appearing temporarily in the manifests directory confuse kubelet in a way that it somehow loses its ability to restart downed services...

Jul  2 13:41:57 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[7480]: E0702 13:41:57.182019    7480 file_linux.go:114] can't process config file "/etc/kubernetes/manifests/4913": open /etc/kubernetes/manifests/4913: no such file or directory
Jul  2 13:41:57 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[7480]: E0702 13:41:57.241569    7480 prober_manager.go:154] Liveness probe already exists! kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(8bbda1f99cae98feb888259002d1fa92) - kube-scheduler
Jul  2 13:41:58 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[7480]: W0702 13:41:58.848523    7480 pod_container_deletor.go:77] Container "ef3162081ab1be2972da5f031ca0301c1bcde4a786b26e32cfe8a3568fab5068" not found in pod's containers
Jul  2 13:41:59 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[7480]: E0702 13:41:59.190729    7480 kuberuntime_container.go:59] Can't make a ref to pod "kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(8bbda1f99cae98feb888259002d1fa92)", container kube-scheduler: selfLink was empty, can't make reference
Jul  2 13:41:59 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[7480]: I0702 13:41:59.460369    7480 reconciler.go:186] operationExecutor.UnmountVolume started for volume "k8s" (UniqueName: "kubernetes.io/host-path/8bbda1f99cae98feb888259002d1fa92-k8s") pod "8bbda1f99cae98feb888259002d1fa92" (UID: "8bbda1f99cae98feb888259002d1fa92")
Jul  2 13:41:59 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[7480]: I0702 13:41:59.460657    7480 operation_generator.go:523] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bbda1f99cae98feb888259002d1fa92-k8s" (OuterVolumeSpecName: "k8s") pod "8bbda1f99cae98feb888259002d1fa92" (UID: "8bbda1f99cae98feb888259002d1fa92"). InnerVolumeSpecName "k8s". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul  2 13:41:59 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[7480]: I0702 13:41:59.562288    7480 reconciler.go:290] Volume detached for volume "k8s" (UniqueName: "kubernetes.io/host-path/8bbda1f99cae98feb888259002d1fa92-k8s") on node "8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com" DevicePath ""
Jul  2 13:42:05 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[7480]: E0702 13:42:05.296665    7480 kuberuntime_manager.go:843] PodSandboxStatus of sandbox "ef3162081ab1be2972da5f031ca0301c1bcde4a786b26e32cfe8a3568fab5068" for pod "kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com_kube-system(8bbda1f99cae98feb888259002d1fa92)" error: rpc error: code = 2 desc = Error: No such container: ef3162081ab1be2972da5f031ca0301c1bcde4a786b26e32cfe8a3568fab5068
Jul  2 13:42:05 8bc81f5a-d92a-428f-b71c-0f31b3ce958f kubelet[7480]: E0702 13:42:05.296895    7480 generic.go:241] PLEG: Ignoring events for pod kube-scheduler-8bc81f5a-d92a-428f-b71c-0f31b3ce958f.pub.cloud.scaleway.com/kube-system: rpc error: code = 2 desc = Error: No such container: ef3162081ab1be2972da5f031ca0301c1bcde4a786b26e32cfe8a3568fab5068

@asac
Author

asac commented Jul 2, 2017

Also note that the number "4913" seems to be stable, so it's probably not a PID...

I tried creating a random file called "1111" with binary garbage, which didn't cause problems restarting kube-scheduler on changes made by a "cp" from a pre-edited location.

So far the only way I can reliably reproduce this is by using vim from xenial:
ii vim-runtime 2:7.4.1689-3ubuntu1 all Vi IMproved - Runtime files

I am the root user when using vim.

I straced vim and got the following (suggesting that it's vim creating this file):

strace: Process 12627 attached
gettimeofday({1499004150, 65149}, NULL) = 0
_newselect(1, [0], [], [0], NULL)       = 1 (in [0])
read(0, "\r", 4096)                     = 1
_newselect(1, [0], [], [0], {0, 0})     = 0 (Timeout)
write(1, "\r", 1)                       = 1
stat64("/etc/kubernetes/manifests/kube-scheduler.yaml", {st_mode=S_IFREG|0600, st_size=860, ...}) = 0
access("/etc/kubernetes/manifests/kube-scheduler.yaml", W_OK) = 0
_newselect(1, [0], [], [0], {0, 0})     = 0 (Timeout)
write(1, "\33[?25l\"/etc/kubernetes/manifests"..., 53) = 53
stat64("/etc/kubernetes/manifests/kube-scheduler.yaml", {st_mode=S_IFREG|0600, st_size=860, ...}) = 0
access("/etc/kubernetes/manifests/kube-scheduler.yaml", W_OK) = 0
getxattr("/etc/kubernetes/manifests/kube-scheduler.yaml", "system.posix_acl_access", 0xbeebe8b0, 132) = -1 ENODATA (No data available)
stat64("/etc/kubernetes/manifests/kube-scheduler.yaml", {st_mode=S_IFREG|0600, st_size=860, ...}) = 0
_newselect(1, [0], [], [0], {0, 0})     = 0 (Timeout)
lstat64("/etc/kubernetes/manifests/kube-scheduler.yaml", {st_mode=S_IFREG|0600, st_size=860, ...}) = 0
lstat64("/etc/kubernetes/manifests/4913", 0xbeebeb00) = -1 ENOENT (No such file or directory)
open("/etc/kubernetes/manifests/4913", O_WRONLY|O_CREAT|O_EXCL|O_LARGEFILE|O_NOFOLLOW, 0100600) = 3
fchown32(3, 0, 0)                       = 0
stat64("/etc/kubernetes/manifests/4913", {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
close(3)                                = 0
unlink("/etc/kubernetes/manifests/4913") = 0
stat64("/etc/kubernetes/manifests/kube-scheduler.yaml~", 0xbeebe958) = -1 ENOENT (No such file or directory)
stat64("/etc/kubernetes/manifests/kube-scheduler.yaml", {st_mode=S_IFREG|0600, st_size=860, ...}) = 0
stat64("/etc/kubernetes/manifests/kube-scheduler.yaml~", 0xbeebd938) = -1 ENOENT (No such file or directory)
unlink("/etc/kubernetes/manifests/kube-scheduler.yaml~") = -1 ENOENT (No such file or directory)
rename("/etc/kubernetes/manifests/kube-scheduler.yaml", "/etc/kubernetes/manifests/kube-scheduler.yaml~") = 0
fsync(4)                                = 0
open("/etc/kubernetes/manifests/kube-scheduler.yaml", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0600) = 3
write(3, "apiVersion: v1\nkind: Pod\nmetadat"..., 860) = 860
fsync(3)                                = 0
stat64("/etc/kubernetes/manifests/kube-scheduler.yaml", {st_mode=S_IFREG|0600, st_size=860, ...}) = 0
stat64("/etc/kubernetes/manifests/kube-scheduler.yaml", {st_mode=S_IFREG|0600, st_size=860, ...}) = 0
close(3)                                = 0
chmod("/etc/kubernetes/manifests/kube-scheduler.yaml", 0100600) = 0
setxattr("/etc/kubernetes/manifests/kube-scheduler.yaml", "system.posix_acl_access", "\2\0\0\0\1\0\6\0\377\377\377\377\4\0\0\0\377\377\377\377 \0\0\0\377\377\377\377", 28, 0) = 0
write(1, " 41L, 860C written", 18)      = 18
stat64("/etc/kubernetes/manifests/kube-scheduler.yaml", {st_mode=S_IFREG|0600, st_size=860, ...}) = 0
unlink("/etc/kubernetes/manifests/kube-scheduler.yaml~") = 0
write(1, "\33[60C1,1\33[11CAll\33[57;126H\33[K", 28) = 28
_newselect(1, [0], [], [0], {0, 0})     = 0 (Timeout)
write(1, "\33[57;126H1,1\33[11CAll\33[1;1H\33[?12l"..., 38) = 38
gettimeofday({1499004150, 103171}, NULL) = 0
gettimeofday({1499004150, 103395}, NULL) = 0
_newselect(1, [0], [], [0], {4, 0})     = 0 (Timeout)
_llseek(4, 0, [0], SEEK_SET)            = 0
write(4, "b0VIM 7.4\0\0\0\0\20\0\0\366\374XY\27\6\24\0S1\0\0root"..., 4096) = 4096

Further looking, I found that apparently vim is doing this: neovim/neovim#3460

So I guess our case is about a "rapidly appearing and disappearing" file created with the syscalls from the strace?
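
If editing in place with vim cannot be avoided, one possible mitigation - an assumption based on vim's documented options, not something verified in this thread - is to keep vim's probe and backup files out of the watched directory:

```bash
# 'backupcopy=yes' makes vim overwrite the file in place instead of doing the
# probe-file (4913) + rename dance seen in the strace above; pointing backupdir
# and directory at /tmp keeps backup/swap files out of /etc/kubernetes/manifests.
cat >> /root/.vimrc <<'EOF'
set backupcopy=yes
set backupdir=/tmp//
set directory=/tmp//
EOF
```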

@yujuhong
Contributor

Further looking, I found that apparently vim is doing this:

neovim/neovim#3460
So I guess our case is about a "rapidly appearing and disappearing" file created with the syscalls from the strace?

@asac nice digging! If your editor creates a temporary file which contains exactly the same pod object, this would leave kubelet confused, and it will delete the pod when the file is deleted. This is a known issue, and the fix would be for kubelet to scan the content of the directory periodically to ensure the correct pods are started eventually. There is a bug tracking this, but I couldn't find it at the moment. Please note that even if kubelet can self-recover by syncing periodically, the temporary file can still cause unnecessary disruption to your pod/workload (e.g., temporary downtime until the periodic sync kicks in). The best way to avoid disruptions like this is to copy your file to a separate directory, modify it, and then copy it back.

@yujuhong
Contributor

This is a known issue, and the fix would be for kubelet to scan the content of the directory periodically to ensure the correct pods are started eventually.

Found it. It's #40123

@asac
Author

asac commented Jul 10, 2017 via email

@yujuhong
Contributor

I don't see that vim is creating a file with the same content. It just
creates an empty file to see if it can create a file in the directory it
wants to write to, with the right flags, AFAIU... check out the code here:

If the temporary file did not contain the pod manifest, then that's not the problem. Can you post your kubelet log again, but this time include the messages around/before you edit the file? I don't think the log you posted before is complete.

@gogeof

gogeof commented Aug 18, 2017

I have the same problem in v1.7.3.

Here is my log and system information.

[root@iZj6cdrp41p1oul41r56qoZ manifests]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T06:43:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
[root@iZj6cdrp41p1oul41r56qoZ manifests]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
System: centos-7.3

@gogeof

gogeof commented Aug 18, 2017

But if I move kube-apiserver.yaml to another directory outside /etc/kubernetes/manifests and then move it back, it works.
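
A sketch of that recovery sequence (manifest name as in this thread; the temporary location and the pause are arbitrary choices of mine):

```bash
# Moving the manifest out of the watched directory makes the kubelet tear the
# static pod down; moving it back makes the kubelet re-create it from scratch.
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 20   # give the kubelet time to notice the removal
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
```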

@gogeof

gogeof commented Nov 18, 2017

@ALL I face a problem like the one this issue describes, and I have opened a new issue (#55928) because the Kubernetes version is different from this one.
Using dlv to debug, I found these:

  1. The .kube-apiserver.swp problem is already fixed.
  2. kube-apiserver.yaml is read and parsed successfully, but connecting to the apiserver fails.
  3. Things seem to point to why the kube-apiserver container does not start.

@aknrdureegaesr

FWIW: When I edited /etc/kubernetes/manifests/kube-apiserver.manifest yesterday, that caused the kubelet v1.7.10 to restart the API server. Reproducibly, several times.

So this problem may have been solved?

@asac
Author

asac commented Dec 9, 2017 via email

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 9, 2018
@xiangpengzhao
Contributor

/remove-lifecycle stale

Still facing this on v1.9.0. cc @kubernetes/sig-node-bugs @yujuhong

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 21, 2018
@yujuhong
Contributor

@xiangpengzhao, do you mind posting the details - how you edited the file, and the associated kubelet log?

@krisss85

I was facing the same issue with 1.10 when trying to add OIDC parameters. For now the best thing is to avoid editing the manifest in place. Copying the file in from another location works just fine.

@wjrogers

wjrogers commented May 2, 2018

I just encountered this issue on a kubeadm cluster version 1.10.1 after editing /etc/kubernetes/manifests/kube-apiserver.yaml with vim. Symptoms look the same as the earlier reports. Writing the file back out with nano brought the kube-apiserver pod back up immediately. (Guess touch would have had the same effect?)

@post2jain

I encountered this issue on a kube cluster running 1.9.3. After editing /etc/kubernetes/manifests/kube-apiserver.yaml with nano I was able to make updates to the config file (it did not work after modifying the file with vi).

@dixudx
Member

dixudx commented May 23, 2018

Guys, I filed #63910 to solve this issue. Please help review it.

@wklken

wklken commented Jun 5, 2018

@dixudx I wonder whether #63910 is the same problem as the one this issue reports.
Will the numeric temporary file created by vim cause the pod not to restart? If the content of the temporary file is not the same as the source file, kubelet will just complain and do nothing.


I have compiled the inotify part of the code, and it turned out that the vim backup file causes the failure.
The container is restarted and then deleted because of the test.yaml~ removed by vim.

https://gist.github.com/wklken/145c8d70389c3f11381a1771623a3ba8

Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swp, because it starts with dots
Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swx, because it starts with dots
Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swx, because it starts with dots
Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swp, because it starts with dots
Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swp, because it starts with dots
Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swp, because it starts with dots
Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swp, because it starts with dots
Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swp, because it starts with dots
Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swp, because it starts with dots
"/etc/kubernetes/manifests" /etc/kubernetes/manifests/4913 mask=%!s(uint32=256) and type=%!s(uint32=256) podAdd 1
eventType=%!s(main.podEventType=0)
"/etc/kubernetes/manifests" /etc/kubernetes/manifests/4913 mask=%!s(uint32=512) and type=%!s(uint32=512) podDelete 1
eventType=%!s(main.podEventType=2)
"/etc/kubernetes/manifests" /etc/kubernetes/manifests/test.yaml mask=%!s(uint32=64) and type=%!s(uint32=64) podDelete 2
eventType=%!s(main.podEventType=2)
"/etc/kubernetes/manifests" /etc/kubernetes/manifests/test.yaml~ mask=%!s(uint32=128) and type=%!s(uint32=128) podAdd 2
eventType=%!s(main.podEventType=0)
"/etc/kubernetes/manifests" /etc/kubernetes/manifests/test.yaml mask=%!s(uint32=256) and type=%!s(uint32=256) podAdd 1
eventType=%!s(main.podEventType=0)
"/etc/kubernetes/manifests" /etc/kubernetes/manifests/test.yaml mask=%!s(uint32=2) and type=%!s(uint32=2) podModify 1
eventType=%!s(main.podEventType=1)
Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swp, because it starts with dots
"/etc/kubernetes/manifests" /etc/kubernetes/manifests/test.yaml~ mask=%!s(uint32=512) and type=%!s(uint32=512) podDelete 1
eventType=%!s(main.podEventType=2)
Ignored pod manifest: /etc/kubernetes/manifests/.test.yaml.swp, because it starts with dots
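
The same event stream can be watched without a custom build, e.g. with inotify-tools (an assumption on my part, not a tool used in this thread):

```bash
# Watch the manifest directory in one terminal, then edit test.yaml with vim
# in another; the 4913 probe file and the test.yaml~ backup show up as
# create/delete and moved_to/moved_from events.
inotifywait -m -e create,modify,delete,moved_to,moved_from /etc/kubernetes/manifests
```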

k8s-github-robot pushed a commit that referenced this issue Jun 20, 2018
Automatic merge from submit-queue (batch tested with PRs 58690, 64773, 64880, 64915, 64831). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

ignore not found file error when watching manifests

**What this PR does / why we need it**:
An alternative of #63910.

When using vim to create a new file in the manifest folder, a temporary file with an arbitrary number (like 4913) as its name will be created to check whether the directory is writable and to see the resulting ACL.

These temporary files are deleted later and should be ignored when watching the manifest folder.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55928, #59009, #48219

**Special notes for your reviewer**:
/cc dims luxas yujuhong liggitt tallclair

**Release note**:

```release-note
ignore not found file error when watching manifests
```
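
To exercise the scenario this PR addresses without vim, the write sequence from the strace earlier in this thread can be replayed by hand (a sketch; the docker invocation assumes the Docker runtime used in this thread):

```bash
# Replay vim's sequence: writability probe, backup rename, rewrite, backup removal.
cd /etc/kubernetes/manifests
touch 4913 && rm 4913                          # vim's "can I create files here?" probe
mv kube-scheduler.yaml kube-scheduler.yaml~    # backup by rename
cp kube-scheduler.yaml~ kube-scheduler.yaml    # rewrite the manifest
rm kube-scheduler.yaml~                        # drop the backup
sleep 30
docker ps --filter name=k8s_kube-scheduler     # on affected kubelets the container stays gone
```
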
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2018
@mshivanna

mshivanna commented Sep 24, 2018

I have the same issue. I tried editing kube-apiserver.yaml using nano, but somehow I am not able to connect to the apiserver; I get a "cannot connect to the apiserver" error.
I was trying to enable PodPresets on our cluster, which was provisioned using kubeadm.
- --runtime-config=apiserver.k8s.io/v1alpha1=true
- --admission-control=PodPreset,Initializers,GenericAdmissionWebhook,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
These are the two parameters I was trying to add to the YAML file.

@netdisciple

IIRC, you need to restart the kubelet. The API container somehow knows the manifest was updated and reloads it.

@mshivanna

mshivanna commented Sep 24, 2018

Thank you @netdisciple. I had the older admission-control parameter, but I needed to use --enable-admission-plugins instead. Now it works, thank you.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 24, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
