Nodes fail to join cluster during update to v1.22.3 #13118

Closed
tobiasamft opened this issue Jan 18, 2022 · 17 comments · Fixed by #13158
Labels: blocks-next, kind/bug, kind/office-hours

Comments

@tobiasamft

/kind bug

1. What kops version are you running? The command kops version will display
this information.

Version 1.22.3 (git-241bfeba5931838fd32f2260aff41dd89a585fba)

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:10:45Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:32:32Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?

AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
Upgrade a (freshly created) cluster from kOps v1.22.2 to kOps v1.22.3.
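
For reference, the rough sequence of commands used (a sketch of the standard kOps upgrade workflow; the exact invocation may have differed slightly):

# replace the kops binary with v1.22.3, then re-apply and roll the cluster
kops update cluster --name debug.k8s.xxx --yes
kops rolling-update cluster --name debug.k8s.xxx --yes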

5. What happened after the commands executed?

kops rolling-update cluster --name debug.k8s.xxx --yes

NAME			STATUS		NEEDUPDATE	READY	MIN	TARGET	MAX	NODES
master-eu-west-1a	NeedsUpdate	1		0	1	1	1	1
master-eu-west-1b	NeedsUpdate	1		0	1	1	1	1
master-eu-west-1c	NeedsUpdate	1		0	1	1	1	1
nodes-eu-west-1a	NeedsUpdate	1		0	1	1	18	1
I0118 14:43:09.151938   57091 instancegroups.go:468] Validating the cluster.
I0118 14:43:10.698777   57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": system-node-critical pod "canal-sxhzt" is pending.
I0118 14:43:42.276409   57091 instancegroups.go:501] Cluster validated.
I0118 14:43:42.276441   57091 instancegroups.go:309] Tainting 1 node in "master-eu-west-1a" instancegroup.
I0118 14:43:42.329801   57091 instancegroups.go:398] Draining the node: "ip-172-21-23-11.eu-west-1.compute.internal".
WARNING: ignoring DaemonSet-managed Pods: kube-system/canal-97td6, kube-system/ebs-csi-node-gtcz7, kube-system/kops-controller-b5j7s
evicting pod kube-system/ebs-csi-controller-6d77db8bf5-6mrjt
evicting pod kube-system/aws-node-termination-handler-b9dd66b74-k7knr
evicting pod kube-system/cluster-autoscaler-6b59b997d-n9c8f
I0118 14:44:13.766391   57091 instancegroups.go:656] Waiting for 5s for pods to stabilize after draining.
I0118 14:44:18.766825   57091 instancegroups.go:417] deleting node "ip-172-21-23-11.eu-west-1.compute.internal" from kubernetes
I0118 14:44:18.804204   57091 instancegroups.go:589] Stopping instance "i-0ad91624557825b4d", node "ip-172-21-23-11.eu-west-1.compute.internal", in group "master-eu-west-1a.masters.debug.k8s.xxx" (this may take a while).
I0118 14:44:18.954446   57091 instancegroups.go:435] waiting for 15s after terminating instance
I0118 14:44:33.955323   57091 instancegroups.go:468] Validating the cluster.
I0118 14:44:35.620995   57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": InstanceGroup "master-eu-west-1a" did not have enough nodes 0 vs 1.
I0118 14:45:07.067841   57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": InstanceGroup "master-eu-west-1a" did not have enough nodes 0 vs 1.
I0118 14:45:38.815990   57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": InstanceGroup "master-eu-west-1a" did not have enough nodes 0 vs 1.
I0118 14:46:10.294351   57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": InstanceGroup "master-eu-west-1a" did not have enough nodes 0 vs 1.
I0118 14:46:42.046865   57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": machine "i-0524ca9a89f016541" has not yet joined cluster.
I0118 14:47:13.883391   57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": machine "i-0524ca9a89f016541" has not yet joined cluster.
I0118 14:47:45.788972   57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": machine "i-0524ca9a89f016541" has not yet joined cluster.
I0118 14:48:17.614813   57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": machine "i-0524ca9a89f016541" has not yet joined cluster.
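
To double-check whether the replacement control-plane instance actually exists but is simply not registering, one can compare the cloud instance state with the registered node list (a sketch, assuming the AWS CLI and kubectl are configured for this account and cluster):

# the instance kops reports as "has not yet joined cluster"
aws ec2 describe-instances --instance-ids i-0524ca9a89f016541 \
  --query 'Reservations[].Instances[].State.Name'

# nodes that have actually registered with the API server
kubectl get nodes -o wide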

6. What did you expect to happen?
A successful rolling update of the cluster, without errors.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

---
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  name: debug.k8s.xxx
spec:
  api:
    loadBalancer:
      class: Classic
      type: Internal
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    Environment: debug
    Owner: cloud-ops
  cloudProvider: aws
  configBase: s3://xxx/debug.k8s.xxx
  containerRuntime: containerd
  clusterAutoscaler:
    enabled: true
    balanceSimilarNodeGroups: true
    scaleDownUtilizationThreshold: "0.8"
    skipNodesWithLocalStorage: false
    cpuRequest: "100m"
    memoryRequest: "800Mi"
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-eu-west-1a
      name: a
    - encryptedVolume: true
      instanceGroup: master-eu-west-1b
      name: b
    - encryptedVolume: true
      instanceGroup: master-eu-west-1c
      name: c
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-eu-west-1a
      name: a
    - encryptedVolume: true
      instanceGroup: master-eu-west-1b
      name: b
    - encryptedVolume: true
      instanceGroup: master-eu-west-1c
      name: c
    memoryRequest: 100Mi
    name: events
  externalPolicies:
    node:
    - arn:aws:iam::xxx:policy/tf-kops-debug-node
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    featureGates:
      TTLAfterFinished: "true"
  kubeControllerManager:
    featureGates:
      TTLAfterFinished: "true"
  kubeDNS:
    nodeLocalDNS:
      enabled: false
    provider: CoreDNS
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
    cpuCFSQuota: false
  kubernetesApiAccess:
  - 172.16.0.0/22
  - 172.16.12.0/22
  - 172.21.60.165/32
  - 172.21.60.47/32
  kubernetesVersion: 1.22.5
  masterInternalName: api.internal.debug.k8s.xxx
  masterPublicName: api.debug.k8s.xxx
  networkCIDR: 172.21.0.0/16
  networkID: vpc-xxx
  networking:
    canal: {}
  nodeTerminationHandler:
    enabled: true
    enableSQSTerminationDraining: true
    managedASGTag: kubernetes.io/cluster/debug.k8s.xxx
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 172.16.0.0/22
  - 172.21.60.47/32
  subnets:
  - cidr: 172.21.23.0/24
    egress: nat-xxx
    id: subnet-xxx
    name: eu-west-1a
    type: Private
    zone: eu-west-1a
  - cidr: 172.21.24.0/24
    egress: nat-xxx
    id: subnet-61168d17
    name: eu-west-1b
    type: Private
    zone: eu-west-1b
  - cidr: 172.21.25.0/24
    egress: nat-xxx
    id: subnet-xxx
    name: eu-west-1c
    type: Private
    zone: eu-west-1c
  - cidr: 172.21.20.0/24
    id: subnet-xxx
    name: utility-eu-west-1a
    type: Utility
    zone: eu-west-1a
  - cidr: 172.21.21.0/24
    id: subnet-xxx
    name: utility-eu-west-1b
    type: Utility
    zone: eu-west-1b
  - cidr: 172.21.22.0/24
    id: subnet-xxx
    name: utility-eu-west-1c
    type: Utility
    zone: eu-west-1c
  topology:
    dns:
      type: Public
    masters: private
    nodes: private
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: debug.k8s.xxx
  name: master-eu-west-1a
spec:
  additionalSecurityGroups: [sg-xxx]
  cloudLabels:
    Environment: debug
    Owner: cloud-ops
  image: 075585003325/Flatcar-stable-3033.2.0-hvm
  machineType: m5.2xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-1a
  role: Master
  subnets:
  - eu-west-1a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: debug.k8s.xxx
  name: master-eu-west-1b
spec:
  additionalSecurityGroups: [sg-xxx]
  cloudLabels:
    Environment: debug
    Owner: cloud-ops
  image: 075585003325/Flatcar-stable-3033.2.0-hvm
  machineType: m5.2xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-1b
  role: Master
  subnets:
  - eu-west-1b

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: debug.k8s.xxx
  name: master-eu-west-1c
spec:
  additionalSecurityGroups: [sg-xxx]
  cloudLabels:
    Environment: debug
    Owner: cloud-ops
  image: 075585003325/Flatcar-stable-3033.2.0-hvm
  machineType: m5.2xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-1c
  role: Master
  subnets:
  - eu-west-1c
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: debug.k8s.xxx
  name: nodes-eu-west-1a
spec:
  additionalSecurityGroups: [sg-xxx]
  cloudLabels:
    Environment: debug
    Owner: cloud-ops
    k8s.io/cluster-autoscaler/enabled: ""
    k8s.io/cluster-autoscaler/debug: ""
  image: 075585003325/Flatcar-stable-3033.2.0-hvm
  machineType: m5.xlarge
  maxSize: 18
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-eu-west-1a
    type: node
  role: Node
  subnets:
  - eu-west-1a

8. Please run the commands with the most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or into a gist and provide the gist link here.

9. Anything else we need to know?
Might be related to issue #13116.
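
As the journalctl excerpt below shows, the kubelet on the replacement master keeps failing to reach the API server on https://127.0.0.1 (connection refused), which suggests the local kube-apiserver static pod never came up. A rough diagnostic sketch of what one could run on the affected instance (assuming crictl is available on the Flatcar host; the paths are the kOps defaults):

# check whether the control-plane static pods were started at all
sudo crictl ps -a | grep -E 'kube-apiserver|etcd'

# inspect kubelet errors and the apiserver container log (if a container exists)
sudo journalctl -u kubelet --no-pager | grep -i error
sudo crictl logs $(sudo crictl ps -a --name kube-apiserver -q | head -1)

# static pod manifests written by nodeup
ls /etc/kubernetes/manifests/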

Partial journalctl log from the instance that is unable to join the cluster (starting at the first E0118 error):

Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577519    2224 server.go:1006] "Cloud provider determined current node" nodeName="ip-172-21-23-101.eu-west-1.compute.internal"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577537    2224 server.go:1148] "Using root directory" path="/var/lib/kubelet"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577594    2224 kubelet.go:418] "Attempting to sync node with API server"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577607    2224 kubelet.go:279] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577622    2224 file.go:68] "Watching path" path="/etc/kubernetes/manifests"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577633    2224 kubelet.go:290] "Adding apiserver pod source"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577655    2224 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.578696    2224 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-21-23-101.eu-west-1.compute.internal&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.579233    2224 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.581615    2224 kuberuntime_manager.go:245] "Container runtime initialized" containerRuntime="containerd" version="1.5.8" apiVersion="v1alpha2"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: W0118 11:53:58.581906    2224 probe.go:268] Flexvolume plugin directory at /var/lib/kubelet/volumeplugins/ does not exist. Recreating.
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582065    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582085    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/cinder"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582098    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582109    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582121    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582137    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582149    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582160    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582172    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582185    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582207    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582219    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/glusterfs"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582233    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582246    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/quobyte"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582257    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582269    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582283    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582294    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582306    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582318    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582340    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582352    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582363    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582398    2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582508    2224 server.go:1213] "Started kubelet"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582585    2224 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582808    2224 server.go:176] "Starting to listen read-only" address="0.0.0.0" port=10255
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.583148    2224 csi_plugin.go:1057] Failed to contact API server when waiting for CSINode publishing: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes/ip-172-21-23-101.eu-west-1.compute.internal": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.583158    2224 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-21-23-101.eu-west-1.compute.internal.16cb5b446edbc9bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-21-23-101.eu-west-1.compute.internal", UID:"ip-172-21-23-101.eu-west-1.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-21-23-101.eu-west-1.compute.internal"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc071c875a2b7edbd, ext:7003076296, loc:(*time.Location)(0x77b0760)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc071c875a2b7edbd, ext:7003076296, loc:(*time.Location)(0x77b0760)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://127.0.0.1/api/v1/namespaces/default/events": dial tcp 127.0.0.1:443: connect: connection refused'(may retry after sleeping)
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.583522    2224 server.go:409] "Adding debug handlers to kubelet server"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.585799    2224 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.585807    2224 cri_stats_provider.go:372] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.585836    2224 kubelet.go:1343] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.585850    2224 volume_manager.go:289] "The desired_state_of_world populator starts"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.585861    2224 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.585942    2224 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.586164    2224 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://127.0.0.1/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-21-23-101.eu-west-1.compute.internal?timeout=10s": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.586728    2224 kubelet.go:2337] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.587253    2224 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.587723    2224 factory.go:137] Registering containerd factory
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.587802    2224 factory.go:55] Registering systemd factory
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.603212    2224 factory.go:372] Registering Docker factory
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.603268    2224 factory.go:101] Registering Raw factory
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.603302    2224 manager.go:1203] Started watching for new ooms in manager
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.603718    2224 manager.go:301] Starting recovery of all containers
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.609970    2224 manager.go:306] Recovery completed
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.621301    2224 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.643154    2224 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.643177    2224 status_manager.go:158] "Starting to sync pod status with apiserver"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.643191    2224 kubelet.go:1967] "Starting kubelet main sync loop"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.643245    2224 kubelet.go:1991] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.643849    2224 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://127.0.0.1/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649012    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649035    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649044    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649053    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649060    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649068    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649076    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686184    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686211    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.686212    2224 kubelet.go:2412] "Error getting node" err="node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686221    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686235    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686243    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686252    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686259    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.743353    2224 kubelet.go:1991] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.786359    2224 kubelet.go:2412] "Error getting node" err="node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.786687    2224 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://127.0.0.1/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-21-23-101.eu-west-1.compute.internal?timeout=10s": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845244    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845283    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845295    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845244    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845357    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845373    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845397    2224 kubelet_node_status.go:71] "Attempting to register node" node="ip-172-21-23-101.eu-west-1.compute.internal"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845729    2224 cpu_manager.go:209] "Starting CPU manager" policy="none"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.845743    2224 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://127.0.0.1/api/v1/nodes\": dial tcp 127.0.0.1:443: connect: connection refused" node="ip-172-21-23-101.eu-west-1.compute.internal"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845748    2224 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845767    2224 state_mem.go:36] "Initialized new in-memory state store"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.848752    2224 policy_none.go:49] "None policy: Start"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.849077    2224 memory_manager.go:168] "Starting memorymanager" policy="None"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.849099    2224 state_mem.go:35] "Initializing new in-memory state store"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods.slice.
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable.slice.
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-besteffort.slice.
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880211    2224 manager.go:245] "Starting Device Plugin manager"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880272    2224 manager.go:609] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880451    2224 manager.go:287] "Serving device plugin registration server on socket" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880530    2224 plugin_watcher.go:52] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880622    2224 plugin_manager.go:112] "The desired_state_of_world populator (plugin watcher) starts"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880636    2224 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.881013    2224 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.886884    2224 kubelet.go:2412] "Error getting node" err="node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944210    2224 kubelet.go:2053] "SyncLoop ADD" source="file" pods=[kube-system/etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal kube-system/etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal kube-system/kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal kube-system/kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal kube-system/kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal kube-system/kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal]
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944264    2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944297    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944315    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944326    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944335    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944344    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944353    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944361    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960321    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960353    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960365    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960434    2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960461    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960472    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960482    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960491    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960498    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960507    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960514    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960606    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960628    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960642    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960651    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960659    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960669    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960676    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975731    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975773    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975790    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975881    2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975911    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975923    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975932    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975940    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975947    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975956    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975963    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976076    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976096    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976109    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976122    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976133    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976145    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976156    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976164    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976191    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976207    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976682    2224 status_manager.go:601] "Failed to get status for pod" podUID=9eb4446c47f04b03ac89adf2bdc97326 pod="kube-system/etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-pod9eb4446c47f04b03ac89adf2bdc97326.slice.
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.987327    2224 kubelet.go:2412] "Error getting node" err="node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987441    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-run\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987496    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudconfig\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-cloudconfig\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987534    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logfile\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-logfile\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987587    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srvkapi\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvkapi\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987634    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-secrets\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-healthcheck-secrets\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987674    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varlogetcd\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-varlogetcd\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987711    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varlogetcd\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-varlogetcd\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987752    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usrshareca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-usrshareca-certificates\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987825    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-rootfs\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987874    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-pki\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987909    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-rootfs\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987972    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcssl\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcssl\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988027    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcpkitls\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkitls\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988066    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcpkica-trust\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkica-trust\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988103    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetesca\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-kubernetesca\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988138    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srvsshproxy\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvsshproxy\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988172    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-run\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988204    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-pki\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991847    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991877    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991881    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991889    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991905    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991919    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991977    2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992022    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992033    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992042    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992050    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992058    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992066    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992067    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992082    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992085    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992101    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992114    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992125    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992133    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992141    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992297    2224 status_manager.go:601] "Failed to get status for pod" podUID=6a8ab1f587e4e906d109d5c2ce7aeaec pod="kube-system/etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-pod6a8ab1f587e4e906d109d5c2ce7aeaec.slice.
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008271    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008299    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008310    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008399    2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008424    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008438    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008449    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008458    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008465    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008474    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008480    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008481    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008497    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008507    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008518    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008526    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008534    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008579    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008631    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008664    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008682    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.009171    2224 status_manager.go:601] "Failed to get status for pod" podUID=b3a03e31b3a1405e5ef70661b08e2e1d pod="kube-system/kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024601    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024631    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024641    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024729    2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024755    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024766    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024775    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024783    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024791    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024810    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024818    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024893    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024916    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024931    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024931    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024944    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024955    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024971    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024956    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024984    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024994    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.025376    2224 status_manager.go:601] "Failed to get status for pod" podUID=207483a4a41b602af202284e46394181 pod="kube-system/kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-podb3a03e31b3a1405e5ef70661b08e2e1d.slice.
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-pod207483a4a41b602af202284e46394181.slice.
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042387    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042423    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042439    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042463    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042486    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042501    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042713    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042734    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042747    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042759    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042769    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042784    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042796    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042826    2224 status_manager.go:601] "Failed to get status for pod" podUID=1307b8791492862e49797fda5735eae1 pod="kube-system/kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046656    2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046682    2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046692    2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046701    2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046709    2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046717    2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046725    2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.057650    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.057681    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.057692    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.058045    2224 status_manager.go:601] "Failed to get status for pod" podUID=1eb5c698134cf5bd61561bc378175f09 pod="kube-system/kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.061139    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.061178    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.061194    2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.061217    2224 kubelet_node_status.go:71] "Attempting to register node" node="ip-172-21-23-101.eu-west-1.compute.internal"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-pod1307b8791492862e49797fda5735eae1.slice.
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-pod1eb5c698134cf5bd61561bc378175f09.slice.
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:59.087739    2224 kubelet.go:2412] "Error getting node" err="node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088846    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-pki\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088884    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"usrshareca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-usrshareca-certificates\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088910    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcpkitls\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-etcpkitls\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088934    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srvkcm\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-srvkcm\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088948    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "usrshareca-certificates" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-usrshareca-certificates") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088959    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volplugins\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-volplugins\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088953    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "pki" (UniqueName: "kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-pki") pod "etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "6a8ab1f587e4e906d109d5c2ce7aeaec")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088983    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-pki\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089018    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "pki" (UniqueName: "kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-pki") pod "etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "9eb4446c47f04b03ac89adf2bdc97326")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089024    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etcpkica-trust\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkica-trust\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089047    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "etcpkica-trust" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkica-trust") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089096    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"srvsshproxy\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvsshproxy\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089120    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "srvsshproxy" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvsshproxy") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089155    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1307b8791492862e49797fda5735eae1-kubeconfig\") pod \"kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1307b8791492862e49797fda5735eae1\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089186    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logfile\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-logfile\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089226    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varlibkcm\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-varlibkcm\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089260    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-run\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089296    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "run" (UniqueName: "kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-run") pod "etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "6a8ab1f587e4e906d109d5c2ce7aeaec")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089326    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"varlogetcd\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-varlogetcd\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089414    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"varlogetcd\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-varlogetcd\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089418    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "varlogetcd" (UniqueName: "kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-varlogetcd") pod "etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "9eb4446c47f04b03ac89adf2bdc97326")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089462    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"logfile\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-logfile\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089483    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "varlogetcd" (UniqueName: "kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-varlogetcd") pod "etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "6a8ab1f587e4e906d109d5c2ce7aeaec")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089492    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"healthcheck-secrets\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-healthcheck-secrets\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089513    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logfile\" (UniqueName: \"kubernetes.io/host-path/1eb5c698134cf5bd61561bc378175f09-logfile\") pod \"kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1eb5c698134cf5bd61561bc378175f09\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089566    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usrshareca-certificates\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-usrshareca-certificates\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089571    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "logfile" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-logfile") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089591    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "healthcheck-secrets" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-healthcheck-secrets") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089600    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudconfig\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-cloudconfig\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089637    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-rootfs\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089660    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srvscheduler\" (UniqueName: \"kubernetes.io/host-path/1eb5c698134cf5bd61561bc378175f09-srvscheduler\") pod \"kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1eb5c698134cf5bd61561bc378175f09\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089672    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "rootfs" (UniqueName: "kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-rootfs") pod "etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "9eb4446c47f04b03ac89adf2bdc97326")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089691    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-run\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089725    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-rootfs\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089736    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "run" (UniqueName: "kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-run") pod "etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "9eb4446c47f04b03ac89adf2bdc97326")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089749    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etcssl\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcssl\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089774    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "rootfs" (UniqueName: "kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-rootfs") pod "etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "6a8ab1f587e4e906d109d5c2ce7aeaec")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089780    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "etcssl" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcssl") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089775    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etcpkitls\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkitls\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089797    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "etcpkitls" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkitls") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089818    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubernetesca\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-kubernetesca\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089838    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logfile\" (UniqueName: \"kubernetes.io/host-path/1307b8791492862e49797fda5735eae1-logfile\") pod \"kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1307b8791492862e49797fda5735eae1\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089856    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "kubernetesca" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-kubernetesca") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089863    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varlibkubescheduler\" (UniqueName: \"kubernetes.io/host-path/1eb5c698134cf5bd61561bc378175f09-varlibkubescheduler\") pod \"kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1eb5c698134cf5bd61561bc378175f09\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089888    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cloudconfig\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-cloudconfig\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089908    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/1307b8791492862e49797fda5735eae1-modules\") pod \"kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1307b8791492862e49797fda5735eae1\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089914    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "cloudconfig" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-cloudconfig") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089927    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptableslock\" (UniqueName: \"kubernetes.io/host-path/1307b8791492862e49797fda5735eae1-iptableslock\") pod \"kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1307b8791492862e49797fda5735eae1\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089949    2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"srvkapi\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvkapi\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089970    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-hosts\" (UniqueName: \"kubernetes.io/host-path/1307b8791492862e49797fda5735eae1-ssl-certs-hosts\") pod \"kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1307b8791492862e49797fda5735eae1\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089988    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcssl\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-etcssl\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.090001    2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "srvkapi" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvkapi") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.090010    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcpkica-trust\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-etcpkica-trust\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.090034    2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-cabundle\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:59.179260    2224 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://127.0.0.1/api/v1/nodes\": dial tcp 127.0.0.1:443: connect: connection refused" node="ip-172-21-23-101.eu-west-1.compute.internal"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:59.187695    2224 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://127.0.0.1/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-21-23-101.eu-west-1.compute.internal?timeout=10s": dial tcp 127.0.0.1:443: connect: connection refused
k8s-ci-robot added the kind/bug label Jan 18, 2022
zetaab (Member) commented Jan 19, 2022

@tobiasamft can you try downgrading the containerd version to 1.5.5 and trying again? See #13126

You can specify it in the cluster config like:

spec:
  ...
  containerd:
    version: 1.5.5
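
For reference, a minimal sketch of the full workflow for applying such a change (assuming the standard kops edit/update/rolling-update flow; substitute your own cluster name):

# add/adjust spec.containerd.version in the cluster spec
kops edit cluster --name <cluster-name>

# apply the change and roll the instances so they pick up the new containerd version
kops update cluster --name <cluster-name> --yes
kops rolling-update cluster --name <cluster-name> --yes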

tobiasamft (Author) commented
@zetaab I tried out different things:

  • used Flatcar Linux (see cluster config above), where containerd is fixed at 1.5.8 and cannot be downgraded/changed
  • also used Ubuntu (image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211118), which comes with containerd 1.4.12
  • also used Ubuntu with containerd explicitly set to version 1.5.5

The result is the same for all distributions and containerd versions: the upgrade fails and the new node cannot join the cluster.

Additional findings:

  • with kops 1.22.3, only the following containers are started (on Flatcar Linux):

sudo ctr -n k8s.io c ls
CONTAINER                                                           IMAGE                                                RUNTIME
0d2a8366b19e3c6b28aa34bf3b7e461a23f1653ddaf6d35476b8cfb250c7d7d7    k8s.gcr.io/kube-controller-manager:v1.22.5           io.containerd.runc.v2
11f3234a46d81f7949bacf849242d7e9c64adae1853465bf82d0cc23894d6539    k8s.gcr.io/etcdadm/etcd-manager:v3.0.20211124        io.containerd.runc.v2
3004bcaf3ff169549910524b3db15b24c287f73d1e8a7c3b1d661ec2721f2d1a    k8s.gcr.io/pause:3.5                                 io.containerd.runc.v2
3089bb82625176d7d506a213fe45c5e7f7006acd46700758fea98f24681519d2    k8s.gcr.io/etcdadm/etcd-manager:v3.0.20211124        io.containerd.runc.v2
46b2d81d699bd0453ff0c68d2ae8d59cfc29faef7191ff38d296f6e5fd35b586    k8s.gcr.io/pause:3.5                                 io.containerd.runc.v2
731873cf8b85f49f661c7e6a5d6d49d42607b3a309e2b7ed3513bb4122b9abca    k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.3    io.containerd.runc.v2
7607334b0485e74342d54a29fc63f60055dd38a8836081f7d2d283fd31420dd3    k8s.gcr.io/pause:3.5                                 io.containerd.runc.v2
7acd3d840fd17d19b0e07f19c711206a9a1e9f6c3bafbcc71a2ff649464c0710    k8s.gcr.io/pause:3.5                                 io.containerd.runc.v2
8031390f5ba069fb8b747c723bda7118751472dfc696e793fa01e90b0954388c    k8s.gcr.io/kube-apiserver:v1.22.5                    io.containerd.runc.v2
8bdc08e92178f59bd6d294e55640246566662149de33d309478c856a8b8c67ad    k8s.gcr.io/pause:3.5                                 io.containerd.runc.v2
a960626ecf2148d6c28be099723dca5a1906316168991eef511ff82a56b25f18    k8s.gcr.io/kube-scheduler:v1.22.5                    io.containerd.runc.v2
ae79485e5a0ab56cb00f8edf270576277f625889ff0836c6437d1bed88231954    k8s.gcr.io/kube-proxy:v1.22.5                        io.containerd.runc.v2
b7f3de89f4655be2c1aa855b75acf396c18e1d95633948da3acd69d5a3a7f8b8    k8s.gcr.io/pause:3.5                                 io.containerd.runc.v2
e71f65e17512cbd104b4e8fb45bdc43b8a5fbb92e4e8b9985df889993957443f    k8s.gcr.io/kube-apiserver:v1.22.5                    io.containerd.runc.v2
  • whereas with kops 1.22.2, the following containers are started:
sudo ctr -n k8s.io c ls
CONTAINER                                                           IMAGE                                                      RUNTIME
0184913823a1bc572a385466487881d52aaec4e7a0bd9aac0cb0673f74a712cc    k8s.gcr.io/sig-storage/livenessprobe:v2.2.0                io.containerd.runc.v2
01d2126f408202bca4897c93ac45e72bd6b71b3b17314b881388b76aea4bd4b2    k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0              io.containerd.runc.v2
1695396a35b0ede43f145ed4fd9caa30445c65a558b4ea6e609a9736a7de9a53    k8s.gcr.io/autoscaling/cluster-autoscaler:v1.22.1          io.containerd.runc.v2
1b49105e765f2c01d8da33f7d61b2b816591ea61cecd9f8ad76a61988443ce16    k8s.gcr.io/kube-apiserver:v1.22.5                          io.containerd.runc.v2
1d506880dc3ecf70910e4e77176b04e855d74dbf3a5318335dc45e099b761fd5    k8s.gcr.io/etcdadm/etcd-manager:v3.0.20211117              io.containerd.runc.v2
250a76db9e6c2ce5a672308ff6e44e5634d98acec02e3bc1c940dbb098e143ab    k8s.gcr.io/kops/kops-controller:1.22.2                     io.containerd.runc.v2
30bcc0e16382b07e2cef95a9bf5ceec3f0145a1e6f9726bdccc32d4122e0747b    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
39d17765e98d4a13e5a937b749548a162460416dbf9e95d24a309462921b7154    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
4095e72c1b8f9c6bf03a901b2da7867520b48f23ac50f12bf73d9ca8ce196581    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
570a502fcf71d1991003fb145b56c78f2a4e7e5b0725ca42e50fd423ca590ff3    k8s.gcr.io/kube-controller-manager:v1.22.5                 io.containerd.runc.v2
6125728b3dd210232b2b095716aeb188da9701bc411c85e2ccfc5c467b7d519e    k8s.gcr.io/sig-storage/livenessprobe:v2.2.0                io.containerd.runc.v2
6266f8f1e9553fddc67f736a7a1b5c690c84c4585f3cfc353e06eaed8bc97bb2    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
6f81f3ee3fcb4275f971c1853c1304e7ef787292b02a66f521898962b3c5501d    k8s.gcr.io/kube-scheduler:v1.22.5                          io.containerd.runc.v2
73164f35b5817a20b2e2ce85835c9523e87a776801f9d1e78b8e2fb12b32197f    k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0              io.containerd.runc.v2
797858b8f790eb3beeb80daee1972dae35e4c0f810d42bf38cffb8ddf77e1b04    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
7a3ee0345c4410892cd81d4fe20b088ea6d2888c12f154f4c649d3db2b436e33    k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.2.1          io.containerd.runc.v2
7b1ca4c156616eb52814e63e4f3529685caa2cdebc59cb3bba9a6941ea702c7e    k8s.gcr.io/kube-proxy:v1.22.5                              io.containerd.runc.v2
7ee2824f35b70fb43613997f630c2c7dc964986bb45698eeaf3b328702d611f0    k8s.gcr.io/sig-storage/csi-attacher:v3.2.0                 io.containerd.runc.v2
8e1df0bde303037d71a9c18ed066b0136dec79f80d13903ab1d5725ef9ccc034    k8s.gcr.io/kube-controller-manager:v1.22.5                 io.containerd.runc.v2
9056ea700dcf539b9427acded35e22546777b62e5e72aeb6d6b492e253ccc814    k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0    io.containerd.runc.v2
91a4298c2a2cb3bfa96fd052622c8a31de9c246a1b4b67cf22035f2a27bc2739    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
9dcf9e6335a1f60df730fad628e5cc4af366978adf7f3b37579503d2a0ae7b9b    k8s.gcr.io/etcdadm/etcd-manager:v3.0.20211117              io.containerd.runc.v2
a0786405a137f953d274586f59802f098f25059dec6f4b50c170037829cdd0fe    docker.io/calico/cni:v3.20.3                               io.containerd.runc.v2
a7368242815722e1740b0e49f80e49dc455c5abac12c696bb187f6383c2a41af    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
a7398f15f56660bcc43a7b623cfce924640f1717699bd2e4435fae4e3a5d9015    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
bb9a8dcb86ba2dad6b37b916cfdb6f5007a0ce987160c665169e50da9deab9b5    quay.io/coreos/flannel:v0.14.0                             io.containerd.runc.v2
c0bdca0d0bc5255bf1ff12aa46bbf7ca8def1cd84ba6774c69fa4c9f4d47c2e5    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
c847bea0019ac4b856e0f369c05a5c71780d511d465f71cb5c056073b8d36e3b    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
c95e7418408170bd11ed37655bf13c04fcab3883153e0d91d4077adea49e95d1    k8s.gcr.io/sig-storage/csi-resizer:v1.1.0                  io.containerd.runc.v2
cfaba5ee4e6f1b8af9a51452112911e2c41152eb370ad54edfce85c8d9eb8393    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
d7738303f1263fc86713d549659813d8e65ec8e1cebf7e72c410acaf8aeb4d24    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
da86d581a1fa487e854c48816b1d73337484786efbc03161d984ca1ceeb15704    k8s.gcr.io/kops/dns-controller:1.22.3                      io.containerd.runc.v2
dbd860de01a346c1720bf76b83c02899b862bd04364e504ae0efbc17eb0b1adb    k8s.gcr.io/kube-apiserver:v1.22.5                          io.containerd.runc.v2
e54f2a780bfeb55c3903bc8a830bcb5566af2ba4d8d49a7fa96a16818a2f651e    k8s.gcr.io/pause:3.5                                       io.containerd.runc.v2
eddc1da639002bc086d3e5e3c609499e5d890f238e37b19a125be5ff161669c9    k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.2.1          io.containerd.runc.v2
ef607ac4ea45bea3eab78b457e50b5206b6d7a8e9d2a08262d785f6efa57d7a1    k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.2          io.containerd.runc.v2
f46ffd6db037b0c6f17659c6384b1fc54d8a5dc28e5584d777167af24672846b    docker.io/calico/pod2daemon-flexvol:v3.20.3                io.containerd.runc.v2
f9d8596bdfaa07fe76b9ef93ebb14da39bfe191811eee7ed9f32ae60dd0d9d38    docker.io/calico/node:v3.20.3

zetaab (Member) commented Jan 21, 2022

OK, then this sounds like a different problem. It looks like you have problems with either etcd or kube-apiserver. Check the logs under /var/log/containers.
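
As a rough sketch of what to look at there (the file names below are the usual kops static-pod log locations and are assumptions; adjust to what is actually present on the node):

# container logs for the control-plane static pods
sudo ls /var/log/containers/
sudo tail -n 100 /var/log/containers/etcd-manager-main-*.log
sudo tail -n 100 /var/log/containers/kube-apiserver-*.log

# kops also writes plain log files for etcd and the apiserver
sudo tail -n 100 /var/log/etcd.log /var/log/kube-apiserver.log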

tobiasamft (Author) commented Jan 24, 2022

We made the following findings but are not sure how to classify them:

  • CNI seems to not come up properly:
    "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
    
  • kube-apiserver is not listening on port 443 (a quick check is sketched after this list)
  • the etcd in-place upgrade from 3.5.0 to 3.5.1 seems to fail:
    doing in-place upgrade to "3.5.1"
    unexpected error running etcd cluster reconciliation loop: cannot upgrade/downgrade cluster when not all members are healthy
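
A minimal sketch for verifying the apiserver symptom directly on the control-plane node (assuming ss and curl are available on the host):

# check whether anything is listening on port 443
sudo ss -ltnp | grep ':443 '

# probe the local apiserver endpoint that kubelet uses
curl -k https://127.0.0.1/healthz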
    

erismaster (Contributor) commented
I also experienced the etcd issue when upgrading a kops cluster running 1.22.4 (with kops 1.22.1) to 1.22.5 (with kops 1.22.2). I got stuck in the same failed-upgrade loop and had to forcefully downgrade using kops 1.22.1, killing off the control plane so it booted back up with etcd 3.5.0.

olemarkus (Member) commented
CNI won't work when the apiserver isn't working, and the apiserver won't work if etcd isn't working. I think you need to do a bit of triaging to find the member that isn't healthy and, eventually, what it is complaining about.
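
For example, a hedged way to ask each member for its health from a control-plane node (the endpoints and certificate paths below are placeholders; point them at your cluster's etcd client URLs and the etcd client cert/key/CA that exist on the node):

# unhealthy members will time out or return an error
ETCDCTL_API=3 etcdctl \
  --endpoints=https://etcd-a.internal.<cluster-domain>:4001 \
  --cacert=<path-to-etcd-ca.crt> \
  --cert=<path-to-etcd-client.crt> \
  --key=<path-to-etcd-client.key> \
  endpoint health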

olemarkus (Member) commented
@erismaster I encourage you to use kops 1.22.3, which comes with a newer version of etcd-manager that has some fixes for 3.5.x.

tobiasamft (Author) commented
We took a closer look at the etcd logs, which revealed the following:

  • It seems that etcd version v3.5.0 cannot be found (see below), although the logs at a later point say 'want "3.5.1", have "3.5.0"' (last code box below); a quick way to check which etcd binaries the etcd-manager image actually ships is sketched after the logs
  • Thus, the etcdClusterState is not healthy (see below)
  • Not sure if it matters, but it seems that etcd v3.5.0 has been fully replaced with v3.5.1 in kubernetes-sigs/etcdadm#261 ("Replace etcd v3.5.0 with etcd v3.5.1")
2022-01-25T11:07:36.069587099Z stdout F W0125 11:07:36.069491    3574 etcdserver.go:118] error running etcd: unknown etcd version v3.5.0: not found in [/opt/etcd-v3.5.0-linux-amd64]
2022-01-25T11:07:56.494281096Z stdout F W0125 11:07:56.494165    3574 etcdserver.go:118] error running etcd: unknown etcd version v3.5.0: not found in [/opt/etcd-v3.5.0-linux-amd64]
2022-01-25T11:07:58.276579447Z stdout F {"level":"warn","ts":"2022-01-25T11:07:58.275Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000c8380/etcd-a.internal.debug.k8s.ivx.cloud:4001","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 172.21.23.174:4001: connect: connection refused\""}
2022-01-25T11:07:58.276595605Z stdout F W0125 11:07:58.275508    3574 controller.go:710] unable to reach member for ListMembers etcdClusterPeerInfo{peer=peer{id:"etcd-a" endpoints:"172.21.23.174:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-a" peer_urls:"https://etcd-a.internal.debug.k8s.ivx.cloud:2380" client_urls:"https://etcd-a.internal.debug.k8s.ivx.cloud:4001" quarantined_client_urls:"https://etcd-a.internal.debug.k8s.ivx.cloud:3994" > etcd_state:<cluster:<cluster_token:"pgEBY47MpwnF001Y5Q6mzg" nodes:<name:"etcd-b" peer_urls:"https://etcd-b.internal.debug.k8s.ivx.cloud:2380" client_urls:"https://etcd-b.internal.debug.k8s.ivx.cloud:4001" quarantined_client_urls:"https://etcd-b.internal.debug.k8s.ivx.cloud:3994" tls_enabled:true > nodes:<name:"etcd-c" peer_urls:"https://etcd-c.internal.debug.k8s.ivx.cloud:2380" client_urls:"https://etcd-c.internal.debug.k8s.ivx.cloud:4001" quarantined_client_urls:"https://etcd-c.internal.debug.k8s.ivx.cloud:3994" tls_enabled:true > nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.debug.k8s.ivx.cloud:2380" client_urls:"https://etcd-a.internal.debug.k8s.ivx.cloud:4001" quarantined_client_urls:"https://etcd-a.internal.debug.k8s.ivx.cloud:3994" tls_enabled:true > > etcd_version:"3.5.0" > }: context deadline exceeded
2022-01-25T11:08:06.896711228Z stdout F I0125 11:08:06.896548    3574 certs.go:211] generating certificate for "etcd-a"
2022-01-25T11:08:06.899461587Z stdout F W0125 11:08:06.899247    3574 etcdserver.go:118] error running etcd: unknown etcd version v3.5.0: not found in [/opt/etcd-v3.5.0-linux-amd64]
2022-01-25T11:08:08.333776536Z stdout F {"level":"warn","ts":"2022-01-25T11:08:08.333Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000489880/etcd-a.internal.debug.k8s.ivx.cloud:4001","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 172.21.23.174:4001: connect: connection refused\""}
2022-01-25T11:08:08.333831093Z stdout F W0125 11:08:08.333587    3574 controller.go:743] health-check unable to reach member 8980394960429229411 on [https://etcd-a.internal.debug.k8s.ivx.cloud:4001]: context deadline exceeded
2022-01-25T11:08:08.334717828Z stdout F I0125 11:08:08.333643    3574 controller.go:300] etcd cluster state: etcdClusterState
2022-01-25T11:08:08.334729469Z stdout F   members:
2022-01-25T11:08:08.334733686Z stdout F     {"name":"etcd-c","peerURLs":["https://etcd-c.internal.debug.k8s.ivx.cloud:2380"],"endpoints":["https://etcd-c.internal.debug.k8s.ivx.cloud:4001"],"ID":"3032372502317317821"}
2022-01-25T11:08:08.334738143Z stdout F     {"name":"etcd-b","peerURLs":["https://etcd-b.internal.debug.k8s.ivx.cloud:2380"],"endpoints":["https://etcd-b.internal.debug.k8s.ivx.cloud:4001"],"ID":"8478051572514898233"}
2022-01-25T11:08:08.334741659Z stdout F     {"name":"etcd-a","peerURLs":["https://etcd-a.internal.debug.k8s.ivx.cloud:2380"],"endpoints":["https://etcd-a.internal.debug.k8s.ivx.cloud:4001"],"ID":"8980394960429229411"}
2022-01-25T11:08:08.334745367Z stdout F       NOT HEALTHY
2022-01-25T11:08:08.33474875Z stdout F   peers:
2022-01-25T11:08:08.334755305Z stdout F     etcdClusterPeerInfo{peer=peer{id:"etcd-c" endpoints:"172.21.25.52:3996" }, info=cluster_name:"etcd" node_configuration:<name:"etcd-c" peer_urls:"https://etcd-c.internal.debug.k8s.ivx.cloud:2380" client_urls:"https://etcd-c.internal.debug.k8s.ivx.cloud:4001" quarantined_client_urls:"https://etcd-c.internal.debug.k8s.ivx.cloud:3994" > etcd_state:<cluster:<cluster_token:"pgEBY47MpwnF001Y5Q6mzg" nodes:<name:"etcd-b" peer_urls:"https://etcd-b.internal.debug.k8s.ivx.cloud:2380" client_urls:"https://etcd-b.internal.debug.k8s.ivx.cloud:4001" quarantined_client_urls:"https://etcd-b.internal.debug.k8s.ivx.cloud:3994" tls_enabled:true > nodes:<name:"etcd-c" peer_urls:"https://etcd-c.internal.debug.k8s.ivx.cloud:2380" client_urls:"https://etcd-c.internal.debug.k8s.ivx.cloud:4001" quarantined_client_urls:"https://etcd-c.internal.debug.k8s.ivx.cloud:3994" tls_enabled:true > nodes:<name:"etcd-a" peer_urls:"https://etcd-a.internal.debug.k8s.ivx.cloud:2380" client_urls:"https://etcd-a.internal.debug.k8s.ivx.cloud:4001" quarantined_client_urls:"https://etcd-a.internal.debug.k8s.ivx.cloud:3994" tls_enabled:true > > etcd_version:"3.5.0" > }
2022-01-25T11:08:09.917318971Z stdout F I0125 11:08:09.917201    3574 controller.go:441] mismatched version for peer peer{id:"etcd-c" endpoints:"172.21.25.52:3996" }: want "3.5.1", have "3.5.0"
2022-01-25T11:08:09.917324874Z stdout F I0125 11:08:09.917265    3574 controller.go:441] mismatched version for peer peer{id:"etcd-a" endpoints:"172.21.23.174:3996" }: want "3.5.1", have "3.5.0"
2022-01-25T11:08:09.917401417Z stdout F I0125 11:08:09.917285    3574 controller.go:441] mismatched version for peer peer{id:"etcd-b" endpoints:"172.21.24.170:3996" }: want "3.5.1", have "3.5.0"
2022-01-25T11:08:09.917407237Z stdout F I0125 11:08:09.917357    3574 controller.go:513] etcd has unhealthy members, but no idle peers ready to join, so won't remove unhealthy member
s
2022-01-25T11:08:09.917486753Z stdout F I0125 11:08:09.917387    3574 controller.go:541] detected that we need to upgrade/downgrade etcd
2022-01-25T11:08:09.917493237Z stdout F I0125 11:08:09.917393    3574 upgrade.go:139] doing in-place upgrade to "3.5.1"
2022-01-25T11:08:09.917504483Z stdout F W0125 11:08:09.917402    3574 controller.go:163] unexpected error running etcd cluster reconciliation loop: cannot upgrade/downgrade cluster w
hen not all members are healthy
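
For what it's worth, a quick way to check which etcd versions a given etcd-manager image actually bundles is to list /opt inside a running etcd-manager container, since that is where it looks for the binaries (pod name is a placeholder, and this assumes the image ships a shell/ls, which our installation appears to):

kubectl -n kube-system exec etcd-manager-main-<node-name> -- ls /opt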

@olemarkus
Copy link
Member

Yeah. That PR means 3.5.1 will be used for upgrading 3.5 clusters. It's odd that it would want to use 3.5.0 for anything, unless kops sets this somewhere.

@zetaab
Copy link
Member

zetaab commented Jan 25, 2022

@olemarkus I am hitting this same issue with kops 1.22.3. It looks like I had a 3.5.0 etcd cluster installed, and now it tries to use 3.5.1 but fails.

@zetaab
Copy link
Member

zetaab commented Jan 25, 2022

Cannot use that either; I still get 2022-01-25T17:13:24.238674833Z stdout F W0125 17:13:24.238586 5215 etcdserver.go:118] error running etcd: unknown etcd version v3.5.0: not found in [/opt/etcd-v3.5.0-linux-amd64]

I am now using the following to override the etcd-manager image and pin the etcd version:

spec:
  ...
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-east-1d
      name: d
    - instanceGroup: master-us-east-1e
      name: e
    - instanceGroup: master-us-east-1f
      name: f
    manager:
      image: k8s.gcr.io/etcdadm/etcd-manager:v3.0.20211117
      logLevel: 3
    name: main
    version: 3.5.0
  - etcdMembers:
    - instanceGroup: master-us-east-1d
      name: d
    - instanceGroup: master-us-east-1e
      name: e
    - instanceGroup: master-us-east-1f
      name: f
    manager:
      image: k8s.gcr.io/etcdadm/etcd-manager:v3.0.20211117
      logLevel: 3
    name: events
    version: 3.5.0

It looks like the image k8s.gcr.io/etcdadm/etcd-manager:v3.0.20211124 combined with etcd version 3.5.1 is somehow broken.

edit: I can confirm that overriding the etcd-manager image and pinning the etcd version works.
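
For anyone applying the same workaround, the rough sequence is something like this (cluster name is a placeholder; the control-plane instance groups are the ones that matter here):

kops edit cluster --name <cluster-name>            # add the manager image / version overrides shown above
kops update cluster --name <cluster-name> --yes
kops rolling-update cluster --name <cluster-name> --yes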

@olemarkus
Copy link
Member

If you have a broken cluster, try setting the etcd-manager image to docker.io/olemarkus/etcd-manager:fix-13118 and see if that fixes the issue.

Also see kubernetes-sigs/etcdadm#279
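
For example, something like this in the cluster spec (only the relevant fields shown, existing etcdMembers omitted; untested, purely to verify the fix):

  etcdClusters:
  - name: main
    manager:
      image: docker.io/olemarkus/etcd-manager:fix-13118
  - name: events
    manager:
      image: docker.io/olemarkus/etcd-manager:fix-13118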

@olemarkus
Copy link
Member

/kind blocks-next
/kind office-hours

@k8s-ci-robot
Copy link
Contributor

@olemarkus: The label(s) kind/blocks-next cannot be applied, because the repository doesn't have them.

In response to this:

/kind blocks-next
/kind office-hours

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@btalbot
Copy link

btalbot commented Jan 30, 2022

Given that there have been several cluster-breaking bugs from kops updates lately, is it safe to assume that there is no automated testing (or routine manual testing) of kops update cluster before a release is made?

@olemarkus
Copy link
Member

There are many. See https://testgrid.k8s.io/kops-misc

But for various reasons they didn't catch this one. We plan on remedying that.

@zetaab
Copy link
Member

zetaab commented Jan 30, 2022

@btalbot also, one problem is that there are many different ways to use kops. You can configure lots of different things, and not all of those combinations are tested automatically (it makes no sense to run 1000 different tests with different combinations).
